Hash-Based Executor: Per-Key Ordering
Before We Start: What You Need to Know
You've learned about thread confinement: giving each thread its own data so there's no sharing. Now let's apply that idea to a powerful pattern used in real-world systems: the hash-based executor.
Here's the scenario: you have thousands of tasks arriving concurrently, each associated with a key (like a user ID, order ID, or account number). You want parallelism (process multiple users simultaneously), but you also need a guarantee: tasks for the SAME key must be processed in order, one at a time.
You should know: what an ExecutorService is (a pool of threads that process submitted tasks), what a hash function does (maps keys to numbers), and thread confinement from Lesson 2.
What is it? (The Analogy)
Imagine a bank with 4 teller windows. Customers arrive in a stream. If we just have one queue feeding all 4 tellers, Customer Alice might have her deposit processed by Teller 1 and her withdrawal (submitted 1 second later) processed by Teller 2. If the withdrawal runs *before* the deposit finishes, her account goes negative even though she had enough money!
Solution: Assign each customer to a specific teller based on their name. Alice always goes to Teller 1. Bob always goes to Teller 2. Charlie to Teller 3. And so on. Now, Alice's operations are always processed in order by the same teller, while Bob's operations run in parallel on a different teller. This is exactly the hash-based executor pattern.
The "hash" part: We use a hash function to map each key to a specific executor (teller). hash("Alice") % 4 = 1 means Alice always goes to executor 1. hash("Bob") % 4 = 2 means Bob always goes to executor 2. Different keys get parallelism; the same key gets ordering.
This is a design-level application of thread confinement: all tasks for a given key are confined to a single thread, so their mutable state (like an account balance) is only ever accessed by one thread. No locks needed!
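The routing idea can be sketched in a few lines (a minimal sketch; the class and method names are mine, and `Math.floorMod` is used because it always returns a non-negative index):

```java
// Minimal sketch of hash-based routing: map a key to one of N workers.
public class KeyRouting {

    // Math.floorMod keeps the index in [0, numWorkers) even when
    // hashCode() returns a negative number.
    static int workerFor(String key, int numWorkers) {
        return Math.floorMod(key.hashCode(), numWorkers);
    }

    public static void main(String[] args) {
        // The same key always maps to the same worker index...
        System.out.println("Alice -> worker " + workerFor("Alice", 4));
        System.out.println("Alice -> worker " + workerFor("Alice", 4)); // identical
        // ...so all of Alice's tasks land in the same queue, in order.
        System.out.println("Bob   -> worker " + workerFor("Bob", 4));
    }
}
```

Which index each name actually gets depends on `String.hashCode()`; what matters is that the mapping is stable across calls.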
The Problem It Solves
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// BROKEN: Tasks for the same user can run out of order!
class BrokenChatServer {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public void handleMessage(String userId, String message) {
        pool.submit(() -> {
            // Process the message (save to DB, notify recipients, etc.)
            processMessage(userId, message);
        });
    }

    private void processMessage(String userId, String message) {
        // Simulated processing
        System.out.println("[" + Thread.currentThread().getName()
                + "] " + userId + ": " + message);
    }
}
```

What goes wrong?
- Alice sends "Hello" then "How are you?" in quick succession.
- Both tasks enter the pool. The pool has 4 threads that grab tasks from a shared queue.
- Thread-1 grabs "Hello" and Thread-3 grabs "How are you?"
- Thread-3 happens to run its task first (perhaps its previous work finished sooner).
- Result: the messages are displayed OUT OF ORDER. "How are you?" appears before "Hello."
For a chat app, this is unacceptable. For a banking app, it's catastrophic.
How it works: Step by Step
- Create N single-threaded executors (not one big pool). Each executor has exactly 1 thread and its own task queue.
- When a task arrives with a key, hash the key: `int index = Math.abs(key.hashCode() % N)`.
- Submit the task to `executors[index]`: since each executor has only 1 thread, all tasks for the same key are processed sequentially, in submission order.
- Different keys map to different executors (usually), giving you parallelism across keys.
- The guarantee: For any given key, tasks execute in FIFO order on a single thread. No locks on the per-key state are needed!
```
Key "Alice"  -> hash % 4 = 1 -> Executor[1]: [Task1, Task2, Task3] (in order)
Key "Bob"    -> hash % 4 = 3 -> Executor[3]: [Task1, Task2]        (in order)
Key "Charlie"-> hash % 4 = 1 -> Executor[1]: [Task4, Task5]        (same executor as Alice!)
Key "Diana"  -> hash % 4 = 0 -> Executor[0]: [Task1]               (in order)
```

Note: Alice and Charlie both map to Executor[1]. Their tasks are interleaved in FIFO order on the same thread. That's fine: each task is independent, and per-key ordering is maintained (Alice's tasks are in order relative to each other, and Charlie's are in order relative to each other).
Let's Build It Together
Let's build a hash-based executor from scratch and use it for a multiplayer game's event processing:
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * A hash-based executor that guarantees per-key ordering.
 * Tasks with the same key always run on the same thread, in order.
 * Tasks with different keys can run in parallel.
 */
public class HashBasedExecutor {

    private final ExecutorService[] executors;
    private final int poolSize;

    /**
     * Create a hash-based executor with the given number of threads.
     * Each thread has its own single-threaded executor and task queue.
     */
    public HashBasedExecutor(int poolSize) {
        this.poolSize = poolSize;
        this.executors = new ExecutorService[poolSize];
        for (int i = 0; i < poolSize; i++) {
            // Each executor has exactly ONE thread
            executors[i] = Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });
        }
    }

    /**
     * Submit a task associated with a key.
     * All tasks with the same key will execute on the same thread,
     * in the order they were submitted.
     */
    public void submit(String key, Runnable task) {
        // Hash the key to pick an executor
        int index = Math.abs(key.hashCode() % poolSize);
        executors[index].submit(task);
    }

    /**
     * Gracefully shut down all executors.
     */
    public void shutdown() throws InterruptedException {
        for (ExecutorService executor : executors) {
            executor.shutdown();
        }
        for (ExecutorService executor : executors) {
            executor.awaitTermination(30, TimeUnit.SECONDS);
        }
    }
}
```

Now let's use it for a game where each player's events must be processed in order:
```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class MultiplayerGame {

    // Per-player state: safe because each player's events
    // are processed by a single thread!
    private final ConcurrentHashMap<String, AtomicInteger> playerHealth =
            new ConcurrentHashMap<>();

    private final HashBasedExecutor executor;

    public MultiplayerGame(int numWorkers) {
        this.executor = new HashBasedExecutor(numWorkers);
    }

    /**
     * Handle a game event. Events for the same player are guaranteed
     * to be processed in order, on the same thread.
     */
    public void handleEvent(String playerId, String event, int value) {
        // The playerId is the key: all events for "Alice" go to
        // the same executor thread
        executor.submit(playerId, () -> {
            // Initialize health if new player
            playerHealth.putIfAbsent(playerId, new AtomicInteger(100));
            AtomicInteger health = playerHealth.get(playerId);

            switch (event) {
                case "DAMAGE":
                    int newHp = health.addAndGet(-value);
                    System.out.println("[" + Thread.currentThread().getName()
                            + "] " + playerId + " takes " + value
                            + " damage -> HP: " + newHp);
                    if (newHp <= 0) {
                        System.out.println("  " + playerId + " is DEFEATED!");
                    }
                    break;

                case "HEAL":
                    int healed = health.addAndGet(value);
                    System.out.println("[" + Thread.currentThread().getName()
                            + "] " + playerId + " heals " + value
                            + " -> HP: " + Math.min(healed, 100));
                    break;

                case "SHIELD":
                    System.out.println("[" + Thread.currentThread().getName()
                            + "] " + playerId + " activates shield for "
                            + value + "s");
                    break;

                default:
                    System.out.println("Unknown event: " + event);
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        MultiplayerGame game = new MultiplayerGame(4);

        // Simulate rapid events for multiple players
        // Alice's events will ALWAYS be in order (same thread)
        game.handleEvent("Alice", "DAMAGE", 20);
        game.handleEvent("Alice", "HEAL", 10);
        game.handleEvent("Alice", "DAMAGE", 30);

        // Bob's events run in PARALLEL with Alice's (different thread)
        game.handleEvent("Bob", "SHIELD", 5);
        game.handleEvent("Bob", "DAMAGE", 15);

        // Charlie's events also parallel
        game.handleEvent("Charlie", "DAMAGE", 50);
        game.handleEvent("Charlie", "HEAL", 25);

        game.executor.shutdown();
    }
}
```

Output (order within each player is guaranteed):
```
[Thread-1] Alice takes 20 damage -> HP: 80
[Thread-3] Bob activates shield for 5s
[Thread-1] Alice heals 10 -> HP: 90
[Thread-0] Charlie takes 50 damage -> HP: 50
[Thread-3] Bob takes 15 damage -> HP: 85
[Thread-1] Alice takes 30 damage -> HP: 60
[Thread-0] Charlie heals 25 -> HP: 75
```

Notice: Alice's events are always on Thread-1, in order. Bob's on Thread-3, in order. Charlie's on Thread-0, in order. But Alice, Bob, and Charlie run in parallel. (The exact thread assignments and the interleaving *across* players will vary from run to run; only the per-player order is guaranteed.)
What Happens Under the Hood
Let's trace the flow when handleEvent("Alice", "DAMAGE", 20) is called:
- Hash the key: `"Alice".hashCode()` returns some integer (e.g., 63650266), and `Math.abs(63650266 % 4) = 2`, so Alice goes to `executors[2]`.
- Submit to `executors[2]`: the `Runnable` lambda is placed into that executor's internal `LinkedBlockingQueue`.
- Executor 2's single thread picks up the task from its queue (FIFO) and executes it.
- Next call: `handleEvent("Alice", "HEAL", 10)` uses the same key, so the same hash and the same executor. The heal task is queued *behind* the damage task; the single thread processes damage first, then heal.
- Meanwhile, `handleEvent("Bob", "SHIELD", 5)`: `"Bob".hashCode() % 4` might be 1, so this goes to a different executor and runs in parallel.
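The queuing behavior described above is easy to verify directly: a single-threaded executor drains its queue strictly in submission order. A small sketch (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FifoCheck {

    // Submit n numbered tasks to one single-threaded executor and
    // record the order in which they actually run.
    static List<Integer> runInOrder(int n) throws InterruptedException {
        ExecutorService single = Executors.newSingleThreadExecutor();
        List<Integer> seen = new ArrayList<>(); // only the worker thread writes to it
        for (int i = 0; i < n; i++) {
            final int taskId = i;
            single.submit(() -> seen.add(taskId));
        }
        single.shutdown();
        single.awaitTermination(10, TimeUnit.SECONDS);
        return seen; // safe to read: the worker has terminated
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runInOrder(5)); // prints [0, 1, 2, 3, 4]
    }
}
```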
Why single-threaded executors? Each `Executors.newSingleThreadExecutor()` internally creates a `ThreadPoolExecutor` with 1 core thread, 1 max thread, and an unbounded `LinkedBlockingQueue`. Tasks are processed strictly in FIFO order by that one thread. This is simpler and faster than using a multi-thread pool with key-based routing because there's zero contention on each queue (only one thread reads from it).
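That description corresponds roughly to the following construction (a sketch of what the factory method builds; the real `Executors.newSingleThreadExecutor()` additionally wraps the pool so it can't be reconfigured):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SingleThreadEquivalent {

    // Roughly equivalent to Executors.newSingleThreadExecutor():
    // one core thread, one max thread, unbounded FIFO work queue.
    static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                1, 1,                         // core size = max size = 1 thread
                0L, TimeUnit.MILLISECONDS,    // keep-alive (unused for core threads)
                new LinkedBlockingQueue<>()); // unbounded FIFO queue
    }

    public static void main(String[] args) {
        ThreadPoolExecutor single = create();
        single.submit(() -> System.out.println("runs on the one worker thread"));
        single.shutdown();
    }
}
```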
The hash function quality matters! A bad hash function that maps most keys to the same index creates a hotspot: one executor gets overwhelmed while others are idle. `String.hashCode()` is generally good. For numeric keys, consider spreading the bits first, e.g. `((key * 0x9E3779B9) >>> 16) % poolSize` (the parentheses around the shift matter, since `%` binds tighter than `>>>` in Java).
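This sketch shows why spreading helps: sequential keys that share a common factor with the pool size all collapse into one bucket under plain modulo, while a Fibonacci-style multiply-and-shift spreads them out. (The constant 0x9E3779B9 is the standard Fibonacci-hashing multiplier; the helper names are mine.)

```java
import java.util.HashSet;
import java.util.Set;

public class SpreadDemo {

    // Naive: keys that are all multiples of poolSize pile into bucket 0.
    static int plain(int key, int poolSize) {
        return Math.floorMod(key, poolSize);
    }

    // Fibonacci-style spread: multiply, then keep the higher bits
    // before reducing. Note the parentheses around the shift.
    static int spread(int key, int poolSize) {
        return Math.floorMod((key * 0x9E3779B9) >>> 16, poolSize);
    }

    public static void main(String[] args) {
        Set<Integer> plainBuckets = new HashSet<>();
        Set<Integer> spreadBuckets = new HashSet<>();
        for (int key = 0; key < 800; key += 8) {  // keys: 0, 8, 16, ...
            plainBuckets.add(plain(key, 8));      // always bucket 0 -> hotspot
            spreadBuckets.add(spread(key, 8));
        }
        System.out.println("plain  uses " + plainBuckets.size() + " bucket(s)");
        System.out.println("spread uses " + spreadBuckets.size() + " bucket(s)");
    }
}
```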
Danger zone: Unbounded queues! If tasks arrive faster than they're processed, the internal queues grow without limit and eventually cause OutOfMemoryError. In production, consider using bounded queues with a rejection policy, or monitor queue sizes.
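One way to bound the damage is to build each worker from `ThreadPoolExecutor` directly with a bounded queue (a sketch, not a drop-in replacement for the class above). One caveat: `CallerRunsPolicy`, a common backpressure choice, would break per-key ordering here, because the rejected task would run immediately on the submitting thread while earlier tasks are still queued. Rejecting (the default `AbortPolicy`) or blocking until space frees up preserves order.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedWorker {

    // A single-threaded worker whose queue holds at most `capacity` tasks.
    // With AbortPolicy (the default), submit() throws
    // RejectedExecutionException when the queue is full, instead of
    // letting memory grow without bound.
    static ThreadPoolExecutor create(int capacity) {
        return new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(capacity),
                new ThreadPoolExecutor.AbortPolicy());
    }
}
```

A rejected submission can then be retried with backoff, or surfaced to the caller as an overload signal; the one policy to avoid when per-key ordering matters is `CallerRunsPolicy`.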
When to Use vs When NOT to Use
| Use Hash-Based Executor When | Don't Use When |
|---|---|
| Per-key ordering is required (chat, banking, event sourcing) | Tasks have no natural key (use a regular thread pool) |
| You want parallelism across keys but safety within a key | All tasks must be globally ordered (use a single-threaded executor) |
| Each key's state is independent (no cross-key transactions) | You need dynamic rebalancing (hash-based assignment is static) |
| The key space is large relative to the pool size (good distribution) | The key space is tiny (e.g., 2 keys with 8 executors: most sit idle!) |
| You're building an actor-like system without a full actor framework | Tasks for different keys need to coordinate (complex, need explicit sync) |
Common Mistakes & Gotchas
- **Using `key.hashCode() % N` without handling negatives.** `hashCode()` can return negative numbers, and `(-7) % 4 = -3` in Java, which gives an `ArrayIndexOutOfBoundsException`. Wrap the whole expression, `Math.abs(key.hashCode() % N)`, or use `(key.hashCode() & 0x7FFFFFFF) % N` or `Math.floorMod(key.hashCode(), N)`. Careful: `Math.abs(key.hashCode()) % N` is still broken, because `Math.abs(Integer.MIN_VALUE)` is itself negative.
- **Choosing a pool size that's too large.** If you have 100 active keys and 200 executors, at least 100 executors sit idle, wasting thread resources. Pool size should be proportional to your CPU cores, not your key space.
- **Choosing a pool size that's too small.** If pool size = 2 and you have 1000 keys, each executor handles about 500 keys, limiting parallelism. A good default is `Runtime.getRuntime().availableProcessors()` or 2x that.
- **Forgetting that keys sharing an executor share ordering.** If Alice and Charlie both hash to executor[1], their tasks are interleaved (A1, C1, A2, C2). This is usually fine because cross-key ordering doesn't matter, but be aware of it.
- **Not handling executor shutdown properly.** Call `shutdown()` then `awaitTermination()` to let in-flight tasks complete. Using `shutdownNow()` interrupts running tasks, which might leave state inconsistent.
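The first gotcha is worth seeing in code, because *where* the absolute value is taken matters: `Math.abs(Integer.MIN_VALUE)` is still negative. A small sketch (method names are mine):

```java
public class SafeIndex {

    // Broken variant: abs applied to the hash code itself.
    // Math.abs(Integer.MIN_VALUE) == Integer.MIN_VALUE, still negative!
    static int brokenIndex(int hash, int n) {
        return Math.abs(hash) % n;
    }

    // Safe: Math.floorMod always returns a value in [0, n).
    static int safeIndex(int hash, int n) {
        return Math.floorMod(hash, n);
    }

    public static void main(String[] args) {
        System.out.println(brokenIndex(Integer.MIN_VALUE, 7)); // -2: invalid array index!
        System.out.println(safeIndex(Integer.MIN_VALUE, 7));   // 5: always in range
    }
}
```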
Interview Tip
This pattern appears in interviews as: *"How would you process events for millions of users with per-user ordering but cross-user parallelism?"* or *"Design a message broker that preserves per-partition ordering."* The answer is hash-based partitioning โ it's exactly how Kafka partitions work, how Akka's actor mailboxes are dispatched, and how database sharding routes queries. Show the interviewer you understand the trade-off: you get per-key ordering + cross-key parallelism, but you lose cross-key ordering and risk hotspots with skewed key distributions.
Quick Quiz
- Alice and Bob both hash to executor[2]. Alice submits Task A1, then Bob submits Task B1, then Alice submits Task A2. What's the execution order? Is Alice's per-key ordering preserved?
- You have a hash-based executor with 8 threads. One key ("whale-user") generates 90% of all tasks. What happens? How would you fix it?
- How is this pattern related to Kafka's partition assignment? What role does the hash function play in Kafka?
Summary: Key Takeaways
- A hash-based executor routes tasks to specific threads based on a hash of the task's key, guaranteeing per-key ordering while allowing cross-key parallelism.
- Internally, it's an array of single-threaded executors, each with its own FIFO queue.
- This is thread confinement applied at the task level: all tasks for a given key run on the same thread, so per-key state needs no locks.
- The same pattern powers Kafka partitions, actor systems, and database sharding; it's one of the most important concurrency patterns in distributed systems.