# Reader-Writer Lock: The Library Problem
## The Problem
Imagine a library card catalog. Many people can read the catalog at the same time -- they are just looking things up and not changing anything. But when a librarian needs to update the catalog (add a new book, remove a book), nobody else should be reading or writing at the same time, because they might see a half-updated entry.
This is the reader-writer problem:
- Multiple readers can access the resource simultaneously (read-read is safe)
- Writers need exclusive access (read-write and write-write are both unsafe)
Using a plain synchronized block treats reads and writes the same -- only one thread at a time. That is wasteful when you have 100 readers and 1 writer. The readers should not block each other.
## Let's See It Break
First, no synchronization at all:
```java
import java.util.HashMap;
import java.util.Map;

public class BrokenLibraryCatalog {
    private static final Map<String, String> catalog = new HashMap<>();

    static {
        catalog.put("978-0134685991", "Effective Java");
        catalog.put("978-0596009205", "Head First Design Patterns");
        catalog.put("978-0201633610", "Design Patterns: GoF");
    }

    public static void main(String[] args) {
        // 5 readers constantly looking up books
        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    String book = catalog.get("978-0134685991");
                    if (book == null) {
                        System.out.println("BUG! Book disappeared during read!");
                    }
                }
            }, "Reader-" + i).start();
        }

        // 1 writer updating the catalog
        new Thread(() -> {
            for (int j = 0; j < 100_000; j++) {
                catalog.put("978-NEW-" + j, "New Book " + j);
                catalog.remove("978-NEW-" + j);
            }
        }, "Writer").start();
    }
}
```

Run it. You might see:
- `ConcurrentModificationException` if a reader iterates while the writer modifies
- `NullPointerException` from corrupted internal HashMap state (broken bucket chains)
- "BUG! Book disappeared during read!" if the writer triggers a resize that moves entries
- Or it might appear to work -- that is the worst outcome, because you ship a latent bug
HashMap is not thread-safe. A writer doing a put() can trigger an internal resize, which rehashes all entries. A reader calling get() during that resize can follow a stale pointer and get null or crash.
## Why It Breaks
- **HashMap is not thread-safe.** Its internal array can be resized during `put()`, leaving `get()` in an inconsistent state.
- **No happens-before relationship.** Without synchronization, the reader thread might not see the writer's changes at all (CPU cache staleness).
- **Structural modification during iteration.** If any thread iterates the map while another modifies it, the iterator detects the change and throws `ConcurrentModificationException`.
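The third failure mode is easy to reproduce even without threads, because any structural modification during iteration trips HashMap's fail-fast iterator. A minimal sketch (the class name `FailFastDemo` is ours, not from the article's code):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, String> catalog = new HashMap<>();
        catalog.put("978-0134685991", "Effective Java");
        catalog.put("978-0596009205", "Head First Design Patterns");

        try {
            for (String isbn : catalog.keySet()) {
                // Structural modification while iterating: the iterator's
                // modCount check fails on the next call to next()
                catalog.put("978-NEW", "New Book");
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("Iterator detected concurrent modification");
        }
    }
}
```

Note that the fail-fast check is best-effort: in a true multithreaded race it may or may not fire, which is exactly why "it seems to work" proves nothing.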
## Fix #1: The Simple Way (synchronized everything)
```java
import java.util.HashMap;
import java.util.Map;

public class SynchronizedCatalog {
    private final Map<String, String> catalog = new HashMap<>();
    private final Object lock = new Object();

    public void addBook(String isbn, String title) {
        synchronized (lock) {
            catalog.put(isbn, title);
            System.out.println("[WRITE] Added: " + title);
        }
    }

    public String findBook(String isbn) {
        synchronized (lock) {
            return catalog.get(isbn);
        }
    }

    public void removeBook(String isbn) {
        synchronized (lock) {
            String removed = catalog.remove(isbn);
            System.out.println("[WRITE] Removed: " + removed);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCatalog lib = new SynchronizedCatalog();
        lib.addBook("001", "Effective Java");
        lib.addBook("002", "Clean Code");
        lib.addBook("003", "Design Patterns");

        long start = System.nanoTime();

        Thread[] readers = new Thread[10];
        for (int i = 0; i < readers.length; i++) {
            readers[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    lib.findBook("001");
                }
            });
            readers[i].start();
        }

        for (Thread r : readers) r.join();

        long elapsed = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Synchronized: " + elapsed + " ms for 10 readers x 100k reads");
    }
}
```

This is correct but slow. All 10 readers must take turns, one at a time, even though reads do not interfere with each other. The lock creates unnecessary contention.
## Fix #2: The Better Way (ReentrantReadWriteLock)
ReadWriteLock provides two lock views: a read lock (shared) and a write lock (exclusive):
```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteCatalog {
    private final Map<String, String> catalog = new HashMap<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

    public void addBook(String isbn, String title) {
        rwLock.writeLock().lock();
        try {
            catalog.put(isbn, title);
            System.out.println("[WRITE] Added: " + title);
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public String findBook(String isbn) {
        rwLock.readLock().lock();
        try {
            return catalog.get(isbn);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void removeBook(String isbn) {
        rwLock.writeLock().lock();
        try {
            String removed = catalog.remove(isbn);
            System.out.println("[WRITE] Removed: " + removed);
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ReadWriteCatalog lib = new ReadWriteCatalog();
        lib.addBook("001", "Effective Java");
        lib.addBook("002", "Clean Code");
        lib.addBook("003", "Design Patterns");

        long start = System.nanoTime();

        Thread[] readers = new Thread[10];
        for (int i = 0; i < readers.length; i++) {
            readers[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    lib.findBook("001");
                }
            });
            readers[i].start();
        }

        for (Thread r : readers) r.join();

        long elapsed = (System.nanoTime() - start) / 1_000_000;
        System.out.println("ReadWriteLock: " + elapsed + " ms for 10 readers x 100k reads");
    }
}
```

How it works:
- `readLock().lock()` -- acquired by readers. Multiple threads can hold the read lock simultaneously.
- `writeLock().lock()` -- acquired by writers. Blocks until ALL read locks and write locks are released. Once held, no reader or writer can proceed.
- When a writer is waiting, new readers are typically blocked too (to prevent writer starvation).
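These rules can be observed directly with `tryLock()` probes. A small sketch (the class name `LockRulesDemo` is ours):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockRulesDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        rw.readLock().lock();  // main thread becomes the first reader
        Thread other = new Thread(() -> {
            // A second reader succeeds while the first still holds the lock
            System.out.println("2nd read lock: " + rw.readLock().tryLock());
            // A writer cannot enter while any read lock is held
            System.out.println("write lock:    " + rw.writeLock().tryLock());
            rw.readLock().unlock();
        });
        other.start();
        other.join();

        System.out.println("readers held:  " + rw.getReadLockCount()); // main's lock only
        rw.readLock().unlock();
    }
}
```

This prints `true` for the second read acquisition and `false` for the write attempt: read-read is shared, read-write is exclusive.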
## Fix #3: The Java Way (ConcurrentHashMap)
For the specific case of a concurrent map, Java provides ConcurrentHashMap which uses fine-grained locking internally:
```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentCatalog {
    private final Map<String, String> catalog = new ConcurrentHashMap<>();

    public void addBook(String isbn, String title) {
        catalog.put(isbn, title);
        System.out.println("[WRITE] Added: " + title);
    }

    public String findBook(String isbn) {
        return catalog.get(isbn);
    }

    public void removeBook(String isbn) {
        String removed = catalog.remove(isbn);
        System.out.println("[WRITE] Removed: " + removed);
    }

    public static void main(String[] args) throws InterruptedException {
        ConcurrentCatalog lib = new ConcurrentCatalog();
        lib.addBook("001", "Effective Java");
        lib.addBook("002", "Clean Code");
        lib.addBook("003", "Design Patterns");

        long start = System.nanoTime();

        Thread[] readers = new Thread[10];
        for (int i = 0; i < readers.length; i++) {
            readers[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    lib.findBook("001");
                }
            });
            readers[i].start();
        }

        for (Thread r : readers) r.join();

        long elapsed = (System.nanoTime() - start) / 1_000_000;
        System.out.println("ConcurrentHashMap: " + elapsed + " ms for 10 readers x 100k reads");
    }
}
```

ConcurrentHashMap is even faster than ReadWriteLock for map operations because:
- Reads require no locking at all (uses volatile reads internally)
- Writes lock only the affected bucket, not the entire map
- It uses CAS operations for many structural modifications
However, ReadWriteLock is the general-purpose solution. Not every shared resource is a map.
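For instance, here is a sketch of a ReadWriteLock guarding a two-field configuration object, where a map cannot express the invariant that both fields must change together (the `AppConfig` class and its field names are illustrative, not from the article):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class AppConfig {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private String logLevel = "INFO";
    private int timeoutMs = 5000;

    // Readers always see the two fields as a consistent pair
    public String describe() {
        rw.readLock().lock();
        try {
            return logLevel + "/" + timeoutMs;
        } finally {
            rw.readLock().unlock();
        }
    }

    // A writer updates both fields atomically with respect to readers
    public void update(String level, int timeout) {
        rw.writeLock().lock();
        try {
            logLevel = level;
            timeoutMs = timeout;
        } finally {
            rw.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        AppConfig cfg = new AppConfig();
        cfg.update("DEBUG", 250);
        System.out.println(cfg.describe()); // DEBUG/250
    }
}
```

With a ConcurrentHashMap holding the two values separately, a reader could observe the new `logLevel` paired with the old `timeoutMs`; the read lock rules that interleaving out.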
## Performance Comparison: Full Benchmark
Here is a complete benchmark comparing all three approaches:
```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteBenchmark {
    private static final int NUM_READERS = 10;
    private static final int READS_PER_THREAD = 500_000;
    private static final int NUM_WRITERS = 2;
    private static final int WRITES_PER_THREAD = 10_000;

    public static void main(String[] args) throws InterruptedException {
        System.out.println("=== Read-Heavy Workload Benchmark ===");
        System.out.println(NUM_READERS + " readers x " + READS_PER_THREAD + " reads");
        System.out.println(NUM_WRITERS + " writers x " + WRITES_PER_THREAD + " writes");
        System.out.println();

        benchmarkSynchronized();
        benchmarkReadWriteLock();
        benchmarkConcurrentHashMap();
    }

    static void benchmarkSynchronized() throws InterruptedException {
        Map<String, String> map = new HashMap<>();
        Object lock = new Object();
        map.put("key", "value");

        long start = System.nanoTime();
        Thread[] threads = new Thread[NUM_READERS + NUM_WRITERS];

        for (int i = 0; i < NUM_READERS; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < READS_PER_THREAD; j++) {
                    synchronized (lock) { map.get("key"); }
                }
            });
        }
        for (int i = 0; i < NUM_WRITERS; i++) {
            threads[NUM_READERS + i] = new Thread(() -> {
                for (int j = 0; j < WRITES_PER_THREAD; j++) {
                    synchronized (lock) { map.put("key", "val-" + j); }
                }
            });
        }

        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();

        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println("synchronized: " + ms + " ms");
    }

    static void benchmarkReadWriteLock() throws InterruptedException {
        Map<String, String> map = new HashMap<>();
        ReadWriteLock rwLock = new ReentrantReadWriteLock();
        map.put("key", "value");

        long start = System.nanoTime();
        Thread[] threads = new Thread[NUM_READERS + NUM_WRITERS];

        for (int i = 0; i < NUM_READERS; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < READS_PER_THREAD; j++) {
                    rwLock.readLock().lock();
                    try { map.get("key"); }
                    finally { rwLock.readLock().unlock(); }
                }
            });
        }
        for (int i = 0; i < NUM_WRITERS; i++) {
            threads[NUM_READERS + i] = new Thread(() -> {
                for (int j = 0; j < WRITES_PER_THREAD; j++) {
                    rwLock.writeLock().lock();
                    try { map.put("key", "val-" + j); }
                    finally { rwLock.writeLock().unlock(); }
                }
            });
        }

        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();

        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println("ReadWriteLock: " + ms + " ms");
    }

    static void benchmarkConcurrentHashMap() throws InterruptedException {
        Map<String, String> map = new ConcurrentHashMap<>();
        map.put("key", "value");

        long start = System.nanoTime();
        Thread[] threads = new Thread[NUM_READERS + NUM_WRITERS];

        for (int i = 0; i < NUM_READERS; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < READS_PER_THREAD; j++) {
                    map.get("key");
                }
            });
        }
        for (int i = 0; i < NUM_WRITERS; i++) {
            threads[NUM_READERS + i] = new Thread(() -> {
                for (int j = 0; j < WRITES_PER_THREAD; j++) {
                    map.put("key", "val-" + j);
                }
            });
        }

        for (Thread t : threads) t.start();
        for (Thread t : threads) t.join();

        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println("ConcurrentHashMap: " + ms + " ms");
    }
}
```

Typical results on a modern machine with a read-heavy workload:
```
synchronized:      1200 ms
ReadWriteLock:      350 ms
ConcurrentHashMap:   80 ms
```

The difference grows dramatically with more readers. With 10 readers, ReadWriteLock is 3-4x faster than synchronized. With 100 readers, it can be 10x+ faster.
## Step-by-Step Trace
Consider 3 readers (R1, R2, R3) and 1 writer (W) with a ReadWriteLock:
| Step | R1 | R2 | R3 | W | Lock State |
|---|---|---|---|---|---|
| 1 | readLock() | | | | readers=1 |
| 2 | reading | readLock() | | | readers=2 |
| 3 | reading | reading | readLock() | | readers=3 |
| 4 | reading | reading | reading | writeLock() BLOCKS | readers=3, writer waiting |
| 5 | readUnlock() | reading | reading | still blocked | readers=2, writer waiting |
| 6 | | readUnlock() | reading | still blocked | readers=1, writer waiting |
| 7 | | | readUnlock() | gets lock! | readers=0, writer=1 |
| 8 | readLock() BLOCKS | | | writing | writer=1, reader waiting |
| 9 | blocked | | | writeUnlock() | free |
| 10 | gets readLock() | readLock() | | | readers=2 |
Key observations:
- Steps 1-3: All readers enter simultaneously. No blocking between readers.
- Steps 4-7: Writer must wait for ALL readers to finish. This is the cost of exclusive access.
- Step 8: While writer holds the lock, new readers are blocked too.
- Step 10: After writer releases, readers can enter concurrently again.
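The trace can be reproduced in miniature by parking three readers inside the lock with latches. A sketch (the class name `TraceDemo` is ours):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TraceDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        CountDownLatch readersIn = new CountDownLatch(3);
        CountDownLatch release = new CountDownLatch(1);

        // Steps 1-3: three readers acquire the read lock concurrently
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                rw.readLock().lock();
                readersIn.countDown();
                try { release.await(); } catch (InterruptedException ignored) {}
                rw.readLock().unlock();
            }).start();
        }
        readersIn.await(); // all three now hold the read lock at once
        System.out.println("readers holding lock: " + rw.getReadLockCount()); // 3

        // Step 4: a writer cannot enter while readers hold the lock
        System.out.println("write acquirable: " + rw.writeLock().tryLock()); // false

        release.countDown();   // steps 5-7: readers unlock
        rw.writeLock().lock(); // blocks until readers=0, then succeeds
        System.out.println("writer in, readers: " + rw.getReadLockCount()); // 0
        rw.writeLock().unlock();
    }
}
```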
## Common Mistakes
- **Using ReadWriteLock when writes are frequent.** If writes happen as often as reads, `ReadWriteLock` offers no benefit over `synchronized` (and has higher overhead). It shines when reads vastly outnumber writes.
- **Downgrade confusion.** You CAN downgrade from write lock to read lock (acquire the read lock, then release the write lock). You CANNOT upgrade from read lock to write lock -- the write-lock acquisition waits for all read locks to be released, including the one your own thread holds, so it waits forever.
- **Writer starvation.** If readers keep arriving non-stop, the writer may never get the lock. `ReentrantReadWriteLock` has a fairness option: `new ReentrantReadWriteLock(true)`. Fair mode prevents starvation but reduces throughput.
- **Forgetting to unlock in `finally`.** Same rule as `ReentrantLock`: if an exception is thrown while holding a read or write lock, the lock must still be released in a `finally` block.
- **Using ReadWriteLock for a Map when ConcurrentHashMap exists.** For maps specifically, `ConcurrentHashMap` is almost always the better choice. Use `ReadWriteLock` for non-map shared resources (caches, configuration objects, in-memory databases).
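The legal downgrade path looks like this in code. A sketch (the `DowngradeDemo` class and its map are ours):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    private static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private static final Map<String, String> data = new HashMap<>();

    public static void main(String[] args) {
        rw.writeLock().lock();
        try {
            data.put("001", "Effective Java"); // mutate while exclusive
            rw.readLock().lock();              // downgrade: take the read lock...
        } finally {
            rw.writeLock().unlock();           // ...then release the write lock
        }
        // Still holding the read lock: other readers may now enter,
        // but no writer could sneak in between our write and our read
        try {
            System.out.println("read after downgrade: " + data.get("001"));
        } finally {
            rw.readLock().unlock();
        }
    }
}
```

The ordering matters: acquire the read lock *before* releasing the write lock, so there is no window in which another writer can change the state you just wrote.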
**Interview Tip:** When asked about the reader-writer problem, explain the trade-off: synchronized is simple but serializes all access. ReadWriteLock allows concurrent reads but adds complexity. ConcurrentHashMap is best for maps specifically. Then ask: "What is the read-to-write ratio?" If reads dominate (95%+), ReadWriteLock is a big win. If reads and writes are balanced, plain synchronization might be simpler and just as fast. Showing this kind of nuance impresses interviewers.
## Try It Yourself
- **Build a thread-safe cache with a `ReadWriteLock`.** Implement `get(key)`, `put(key, value)`, and `computeIfAbsent(key, loader)`. The `computeIfAbsent` method should acquire a read lock first; if the key is missing, release the read lock, acquire the write lock, and re-check the key before loading (a direct read-to-write upgrade would deadlock, as noted above).
- **Measure the crossover point.** At what read-to-write ratio does `ReadWriteLock` become faster than `synchronized`? Write a benchmark that varies the ratio from 50/50 to 99/1.
- **StampedLock.** Java 8 introduced `StampedLock` with an optimistic read mode that does not even acquire a lock. Read about it and compare its performance to `ReadWriteLock` in your benchmark. When would you use one versus the other?
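As a starting point for the third exercise, the optimistic-read pattern follows the shape of the well-known `Point` example from the `StampedLock` javadoc -- a sketch, not code from this article:

```java
import java.util.concurrent.locks.StampedLock;

public class Point {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = sl.writeLock();          // exclusive, like writeLock()
        try {
            x += dx;
            y += dy;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        long stamp = sl.tryOptimisticRead();  // no lock acquired at all
        double cx = x, cy = y;                // read, possibly racing a writer
        if (!sl.validate(stamp)) {            // did a write intervene?
            stamp = sl.readLock();            // fall back to a real read lock
            try {
                cx = x;
                cy = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.hypot(cx, cy);
    }

    public static void main(String[] args) {
        Point p = new Point();
        p.move(3, 4);
        System.out.println(p.distanceFromOrigin()); // 5.0
    }
}
```

The optimistic read costs only a volatile read plus a validation check, which is why StampedLock can beat ReadWriteLock on read-heavy workloads. Note that StampedLock is not reentrant, so it is not a drop-in replacement.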
## Summary
- Plain synchronized serializes all access. Safe but slow when reads dominate.
- ReadWriteLock allows concurrent reads while ensuring exclusive writes. Big win for read-heavy workloads.
- ConcurrentHashMap is the best choice for concurrent maps specifically (lock-free reads, bucket-level write locks).
- Choose based on read-to-write ratio: balanced workload means synchronized is fine; read-heavy means ReadWriteLock; map specifically means ConcurrentHashMap.