Java Intermediate

Design Patterns Related to Thread Synchronization

Certain classic concurrency design patterns illustrate how synchronization constructs are used in practice. We will discuss three common patterns:

  • Producer-Consumer
  • Read-Write Lock
  • Thread Pool

Producer-Consumer Pattern

The Producer-Consumer (or Bounded Buffer) pattern is a way to safely communicate between threads when one or more producer threads create data (or tasks) and one or more consumer threads process that data. The key challenge is coordinating access to the shared buffer or queue that connects producers and consumers, to ensure data is not lost or corrupted. Producers should wait if the buffer is full, and consumers should wait if the buffer is empty – this requires synchronization.

Traditional Implementation

Historically, this is implemented with a shared queue and wait()/notifyAll(). For example, producers wait() if the queue is full and notify consumers when an item is added; consumers wait() if the queue is empty and notify producers when they remove an item. This low-level approach uses intrinsic locks: the queue operations are synchronized, and conditions are managed via wait/notify. While effective, doing this correctly is tricky: you must call wait() inside a loop to guard against spurious wakeups, choose between notify() and notifyAll() carefully, and so on.
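As a concrete illustration of this low-level approach, here is a minimal hand-rolled bounded buffer using intrinsic locks and wait()/notifyAll() (the class and method names are illustrative, not taken from the course code):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A minimal bounded buffer built on intrinsic locks and wait/notifyAll.
class BoundedBuffer<T> {
    private final Queue<T> queue = new ArrayDeque<>();
    private final int capacity;

    BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }

    public synchronized void put(T item) throws InterruptedException {
        while (queue.size() == capacity) { // a loop, not an if: guards against spurious wakeups
            wait();                        // producer waits while the buffer is full
        }
        queue.add(item);
        notifyAll();                       // wake any consumers waiting for data
    }

    public synchronized T take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();                        // consumer waits while the buffer is empty
        }
        T item = queue.remove();
        notifyAll();                       // wake any producers waiting for space
        return item;
    }
}
```

Note the subtleties this sketch has to get right: waiting in a while loop rather than an if, and using notifyAll() so that the right kind of waiting thread is eventually woken.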

Modern Implementation with BlockingQueue

The Java concurrent library provides BlockingQueue, which essentially implements the Producer-Consumer pattern for you. Producers call put() (which blocks if the queue is full) and consumers call take() (which blocks if the queue is empty), as we saw in the earlier example. Internally, a BlockingQueue uses locks or other synchronization to coordinate threads, but externally you don't see any of it – the API handles the waiting logic for you.

Example using BlockingQueue

Java

. . . .
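The elided example likely resembles the following sketch, assuming a capacity-5 ArrayBlockingQueue with one producer and one consumer thread (class and variable names are assumptions):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(5); // capacity 5

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 10; i++) {
                    buffer.put(i);            // blocks if the queue already holds 5 items
                    System.out.println("Produced " + i);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 1; i <= 10; i++) {
                    int item = buffer.take(); // blocks if the queue is empty
                    System.out.println("Consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```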

In this code, if the producer gets ahead of the consumer, it will automatically block on buffer.put once 5 items are in the queue, until the consumer catches up and takes some items out. Conversely, if the consumer runs faster and empties the queue, it blocks on buffer.take until a producer puts more data. This decouples the producer and consumer speeds and uses synchronization internally to ensure thread-safe handoff of data.

Why it matters

The Producer-Consumer pattern allows you to balance work between threads and handle rate differences gracefully. You can have multiple producers and multiple consumers, and the thread-safe queue will manage all the synchronization. It’s used in many real-world systems – from pipelined processing of tasks, to multi-threaded servers where one thread accepts requests (producer) and worker threads process them (consumers).

Read-Write Lock Pattern

The Read-Write Lock pattern is useful when you have a shared resource that is read very often but written infrequently. A regular lock would allow only one thread at a time, even if many threads just want to read. A read-write lock differentiates between readers and writers: it allows multiple readers to hold the lock simultaneously (as long as there’s no writer holding it), but writers get exclusive access. This can significantly improve throughput in read-heavy scenarios, because readers no longer block one another.

Java Support: In Java, this pattern is supported by ReentrantReadWriteLock (which implements the ReadWriteLock interface). It provides two Lock objects: .readLock() and .writeLock(). You acquire and release them similarly to normal locks. Under the hood, the lock ensures that multiple reads can happen in parallel, but a write will block other writes and also block readers (and a writer will wait until all readers have finished).

Example – Shared Data with ReadWriteLock

Java

. . . .
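The elided class might look like this sketch, assuming a single int value guarded by a ReentrantReadWriteLock (names are illustrative):

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A shared value guarded by a read-write lock: many concurrent readers, exclusive writers.
class SharedData {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private int value;

    public int getValue() {
        lock.readLock().lock();       // several threads may hold the read lock at once
        try {
            return value;
        } finally {
            lock.readLock().unlock(); // always release in a finally block
        }
    }

    public void setValue(int newValue) {
        lock.writeLock().lock();      // exclusive: waits until no readers or writers remain
        try {
            value = newValue;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

The lock/try/finally shape is the standard idiom: it guarantees the lock is released even if the guarded code throws.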

In this class, any number of threads can safely call getValue() at the same time (concurrently holding the read lock). If a thread calls setValue(), it will acquire the write lock, which waits until no other thread has the read lock or write lock. This means it will wait until all readers are done, and it will temporarily block new readers from acquiring the read lock until the write is finished. Once the writer releases, readers can proceed again. The result is greater parallelism: readers don’t block each other, which is beneficial if reads are very frequent.

When to use

This pattern shines when you have a data structure that is mostly read (for example, a cache that is updated occasionally, or a configuration that is read often but rarely changed). By using a read-write lock, you allow high concurrency for reads. The JavaDoc notes that “A read-write lock allows for a greater level of concurrency in accessing shared data than that permitted by a mutual exclusion lock... it exploits the fact that multiple threads can concurrently read data as long as no thread is writing” (ReadWriteLock, Java SE 8 API documentation). However, if writes are frequent, a read-write lock might not perform better than a normal lock: writes still serialize everything, and managing the read-write lock itself adds overhead. Also be careful about fairness: depending on the implementation, a continuous stream of readers can starve a writer, or heavy writing can starve readers. Java’s ReentrantReadWriteLock includes heuristics (and an optional fair mode) intended to keep a continuous flow of readers from postponing writers indefinitely.

Advantages

  • Greatly increases throughput for read-heavy workloads by allowing concurrent reads.
  • Still provides thread safety for writes.

Disadvantages

  • More overhead than a simple lock when contention is low or writes are common.
  • Potential fairness complexities (a continuous stream of readers could starve a writer or vice versa; Java’s implementation has policies to mitigate this, but one should be aware of the possibility).
  • Code complexity is a bit higher – you have to ensure using the correct lock for read vs write sections.

Thread Pool Pattern

The Thread Pool (also known as the Worker Thread) pattern is about reusing a fixed set of threads to execute many tasks, instead of creating a new thread for each task. Thread creation and destruction are expensive; a pool amortizes that cost. It also provides an upper bound on how many threads are active, which can prevent the system from being overwhelmed by too many threads. In Java, thread pools are provided by the ExecutorService framework.

How it works

You create a pool of N threads (workers) and submit tasks (usually Runnable or Callable tasks) to the pool. The pool has a work queue; if all threads are busy, incoming tasks wait in the queue. When a thread finishes a task, it picks up the next task from the queue. This way, threads are reused for multiple tasks. You typically obtain a thread pool via factory methods in Executors (e.g., Executors.newFixedThreadPool(N) or Executors.newCachedThreadPool()).

Example – Using a Thread Pool

Java

. . . .
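A sketch matching the description below (4 threads, 10 tasks, simulated work; class and variable names are assumptions):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4); // 4 worker threads

        for (int i = 1; i <= 10; i++) {
            final int taskId = i;
            pool.submit(() -> {
                System.out.println("Task " + taskId + " running on "
                        + Thread.currentThread().getName());
                try {
                    Thread.sleep(100);           // simulate some work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        pool.shutdown();                             // accept no new tasks; finish queued ones
        pool.awaitTermination(10, TimeUnit.SECONDS); // wait for all tasks to complete
    }
}
```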

In this snippet, we created a fixed thread pool with 4 threads. We then submit 10 tasks. The pool will execute up to 4 tasks in parallel (since 4 threads), and the remaining tasks will wait in the queue. Each task prints a message and sleeps for a bit to simulate work. Even though we had 10 tasks, we did not create 10 threads; we reused 4 threads to handle all tasks. We call pool.shutdown() to initiate an orderly shutdown after tasks are done (always shut down thread pools to allow the program to exit cleanly).

Why is this a synchronization pattern?

The thread pool itself uses synchronization internally to manage the work queue and coordinate threads (for instance, the queue operations and thread idle/wakeup are synchronized). The pattern allows controlled concurrency: you can limit how many threads execute at once. If tasks themselves need to access shared resources, you’d still use the earlier synchronization tools within the task, but the pool helps manage thread lifecycle and scheduling.

Advantages

  • Performance: Reusing threads reduces the overhead of thread creation. Creating a thread is relatively slow and uses memory; a pool amortizes that cost.
  • Throttling: By limiting the number of threads, you prevent aggressive spawning that could harm performance (too many threads can cause excessive context switching or memory exhaustion). A fixed pool gives a stable level of concurrency.
  • Convenience: The Executor framework provides a high-level API. You can get features like scheduling (with ScheduledThreadPool), callable tasks with futures (getting results asynchronously), etc., without dealing with low-level thread management.
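For instance, the Callable-with-Future feature mentioned above lets a task return a result asynchronously (the summing task here is purely illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // A Callable returns a value (unlike Runnable); submit() hands back a Future.
        Callable<Integer> sumTask = () -> {
            int sum = 0;
            for (int i = 1; i <= 100; i++) sum += i;
            return sum;
        };

        Future<Integer> result = pool.submit(sumTask);
        System.out.println("Sum = " + result.get()); // get() blocks until the task finishes
        pool.shutdown();
    }
}
```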

Disadvantages

  • Queuing and Rejection: If tasks are submitted faster than they can be executed, they queue up. In extreme cases, the queue could grow and consume a lot of memory, or tasks might wait a long time. If the queue is bounded and fills up, new tasks could be rejected (the Executor can use a rejection handler to decide what to do).
  • Overhead when tasks are few: If tasks are infrequent, a pool may keep idle threads alive (though a cached thread pool lets idle threads die after a period of inactivity). A fixed pool keeps its threads alive even when unused, which is some overhead, though not usually a big issue.
  • Complexity in Tuning: Deciding the optimal number of threads or using advanced features (like custom thread factories, or monitoring the pool) can add complexity. Also, one must be careful to shut down the pool; forgetting to do so can cause a JVM to hang on exit because non-daemon threads are still alive in the pool.

In practice, thread pools are a cornerstone of concurrent programming in Java. For example, web servers, app servers, and many frameworks use thread pools to handle requests. It is both a design pattern and provided by the language’s utility classes, making it easy to adopt.

