ConcurrentHashMap is a thread-safe implementation of the Map interface designed for high-concurrency scenarios. It outperforms a synchronized HashMap (e.g., Collections.synchronizedMap) because it locks individual buckets rather than the entire map.
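As a quick illustration of that claim, here is a minimal sketch (class name hypothetical) in which several threads write concurrently with no external locking and no updates are lost:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QuickStart {
    public static void main(String[] args) throws InterruptedException {
        Map<String, Integer> map = new ConcurrentHashMap<>();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < 4; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                // Each thread writes 1000 distinct keys with no external locking
                for (int i = 0; i < 1000; i++) map.put(id + "-" + i, i);
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();
        System.out.println(map.size()); // 4000 - no updates lost
    }
}
```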
1. Evolution and Architecture Overview
Historical Evolution:
- Java 1.5: Initial version using segment-based locking (16 segments)
- Java 8: Complete redesign using Node-based structure with fine-grained locking
- Java 11+: Further optimizations and performance improvements
Key Design Goals:
- Thread Safety: Full concurrency for retrievals, high expected concurrency for updates
- High Throughput: Minimize lock contention
- Scalability: Perform well under high concurrency
- Memory Consistency: Provide happens-before relationships
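The memory-consistency goal is concrete: per the java.util.concurrent contract, actions in a thread prior to a put() happen-before actions following the get() that observes it. A minimal sketch (class name hypothetical; Thread.onSpinWait() needs Java 9+):

```java
import java.util.concurrent.ConcurrentHashMap;

public class HappensBeforeDemo {
    static int plainField; // deliberately NOT volatile

    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        Thread writer = new Thread(() -> {
            plainField = 42;      // (1) ordinary write
            map.put("ready", 1);  // (2) publishes via the map
        });
        writer.start();
        // Spin until this thread observes the put()...
        while (map.get("ready") == null) {
            Thread.onSpinWait();
        }
        // ...after which the write at (1) is guaranteed visible here,
        // because (1) happens-before (2), which happens-before this get().
        System.out.println(plainField); // 42
        writer.join();
    }
}
```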
2. Java 8+ Internal Architecture
Core Data Structure:
// Simplified representation of ConcurrentHashMap internals
public class ConcurrentHashMap<K,V> {
// Main hash table - array of nodes
transient volatile Node<K,V>[] table;
// Next table to use during resizing
private transient volatile Node<K,V>[] nextTable;
// Base counter for size
private transient volatile long baseCount;
// Table initialization and resizing control
private transient volatile int sizeCtl;
// Node class representing hash table entries
static class Node<K,V> implements Map.Entry<K,V> {
final int hash;
final K key;
volatile V val;
volatile Node<K,V> next;
// Constructors, methods...
}
// Special node types
static final class TreeNode<K,V> extends Node<K,V> { /* Node of a red-black tree */ }
static final class TreeBin<K,V> extends Node<K,V> { /* Bin header holding the tree root */ }
static final class ReservationNode<K,V> extends Node<K,V> { /* Placeholder for compute/computeIfAbsent */ }
static final class ForwardingNode<K,V> extends Node<K,V> { /* Marks a bin already moved during resizing */ }
}
Hash Table Structure Visualization:
Table (Array of Buckets/Bins):
Index 0: [Node] → [Node] → [Node]        // Linked list
Index 1: [TreeBin] → TreeNode → TreeNode // Balanced red-black tree
Index 2: null                            // Empty bucket
Index 3: [Node] → [Node]                 // Short linked list
Index 4: [ForwardingNode]                // During resizing
...
3. Node Types and Bucket Organization
import java.util.concurrent.ConcurrentHashMap;
import java.lang.reflect.Field;
import java.util.Arrays;
public class ConcurrentHashMapInternals {
// Demonstration of different node types and bucket organization
public static class BucketOrganization {
public void demonstrateBucketStructures() {
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
// Repeatedly update a small set of distinct keys.
// Note: re-putting an existing key overwrites its value; it does NOT
// grow a collision chain. Treeification requires >= 8 DISTINCT keys
// whose spread hashes land in the same bucket (see section 5).
for (int i = 0; i < 200; i++) {
String key = "key-" + (i % 10); // only 10 distinct keys
map.put(key, i);
}
System.out.println("Map size: " + map.size()); // 10 distinct keys
System.out.println("Load factor: 0.75 (default)");
System.out.println("Treeification threshold (TREEIFY_THRESHOLD): 8");
System.out.println("Untreeification threshold (UNTREEIFY_THRESHOLD): 6");
System.out.println("Min table capacity to treeify (MIN_TREEIFY_CAPACITY): 64");
}
}
// Hash computation and bucket indexing
public static class HashMechanism {
// ConcurrentHashMap's spread function (simplified):
// XOR the high bits into the low bits, then clear the sign bit.
// Negative hashes are reserved for special nodes such as MOVED.
public static int spread(int h) {
return (h ^ (h >>> 16)) & 0x7fffffff; // 0x7fffffff == HASH_BITS
}
// Bucket index calculation
public static int bucketIndex(int hash, int tableLength) {
return hash & (tableLength - 1);
}
public void demonstrateHashDistribution() {
String[] keys = {"apple", "banana", "cherry", "date", "elderberry",
"fig", "grape", "honeydew", "ice cream", "juice"};
System.out.println("=== Hash Distribution Example ===");
int tableSize = 16; // Initial table size
for (String key : keys) {
int hashCode = key.hashCode();
int spreadHash = spread(hashCode);
int bucketIndex = bucketIndex(spreadHash, tableSize);
System.out.printf("Key: %-12s | Hash: %10d | Spread: %10d | Bucket: %2d%n",
key, hashCode, spreadHash, bucketIndex);
}
}
}
public static void main(String[] args) {
BucketOrganization org = new BucketOrganization();
org.demonstrateBucketStructures();
System.out.println();
HashMechanism hashMech = new HashMechanism();
hashMech.demonstrateHashDistribution();
}
}
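One detail worth isolating is why spread() XORs the high bits down: the index mask `hash & (tableLength - 1)` only looks at the low bits of the hash, so hash codes that differ only in their high bits would otherwise all collide. A small sketch (class name hypothetical):

```java
public class SpreadMotivation {
    // Same spread function as above
    static int spread(int h) {
        return (h ^ (h >>> 16)) & 0x7fffffff;
    }

    public static void main(String[] args) {
        int tableMask = 15; // table length 16
        int h1 = 0x10000, h2 = 0x20000; // differ only in their high bits
        // Without spreading, both land in bucket 0:
        System.out.println((h1 & tableMask) + " " + (h2 & tableMask)); // 0 0
        // Spreading folds the high bits down, separating them:
        System.out.println((spread(h1) & tableMask) + " " + (spread(h2) & tableMask)); // 1 2
    }
}
```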
4. Locking Mechanism and Concurrency Control
Fine-Grained Locking Strategy:
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.*;
import java.util.concurrent.locks.*;
public class LockingMechanism {
// Annotated excerpt of ConcurrentHashMap.putVal (JDK 8+, simplified).
// Not compilable as-is: it references internal helpers (tabAt, casTabAt,
// initTable, helpTransfer, treeifyBin, addCount) and fields of the real class.
public static class ConcurrentHashMapLocking<K, V> {
// synchronized on individual buckets (bin locking)
public V put(K key, V value) {
int hash = spread(key.hashCode());
int binCount = 0;
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
if (tab == null || (n = tab.length) == 0)
tab = initTable();
else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
// No collision - use CAS for atomic insert
if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value, null)))
break;
}
else if ((fh = f.hash) == MOVED)
tab = helpTransfer(tab, f);
else {
// Collision - synchronize on the first node of the bucket
synchronized (f) {
if (tabAt(tab, i) == f) {
// Handle linked list or tree insertion
if (fh >= 0) {
// Linked list handling
binCount = 1;
for (Node<K,V> e = f;; ++binCount) {
// ... linked list traversal and update
}
}
else if (f instanceof TreeBin) {
// Tree handling
// ... tree insertion logic
}
}
}
if (binCount != 0) {
if (binCount >= TREEIFY_THRESHOLD)
treeifyBin(tab, i);
break;
}
}
}
addCount(1L, binCount);
return null;
}
}
// CAS (Compare-And-Swap) operations demonstration
public static class CASOperations {
public void demonstrateCASMechanism() {
System.out.println("=== CAS Operations in ConcurrentHashMap ===");
// Simulate CAS operations used in ConcurrentHashMap
AtomicReferenceArray<Node<String, Integer>> simulatedTable =
new AtomicReferenceArray<>(16);
// Initial state - empty bucket
int bucketIndex = 5;
System.out.println("1. Empty bucket: " +
(simulatedTable.get(bucketIndex) == null));
// CAS operation to add first node
Node<String, Integer> newNode = new Node<>(
"key1".hashCode(), "key1", 100, null);
boolean casSuccess = simulatedTable.compareAndSet(
bucketIndex, null, newNode);
System.out.println("2. CAS operation to add first node: " + casSuccess);
System.out.println(" Bucket now contains: " +
simulatedTable.get(bucketIndex));
// Another thread tries to add to same bucket
Node<String, Integer> competingNode = new Node<>(
"key2".hashCode(), "key2", 200, null);
boolean casFail = simulatedTable.compareAndSet(
bucketIndex, null, competingNode);
System.out.println("3. Competing CAS operation: " + casFail);
System.out.println(" Bucket still contains: " +
simulatedTable.get(bucketIndex));
}
}
// Node class for demonstration
static class Node<K,V> {
final int hash;
final K key;
volatile V val;
volatile Node<K,V> next;
Node(int hash, K key, V val, Node<K,V> next) {
this.hash = hash;
this.key = key;
this.val = val;
this.next = next;
}
@Override
public String toString() {
return "Node{key=" + key + ", value=" + val + "}";
}
}
public static void main(String[] args) {
CASOperations casOps = new CASOperations();
casOps.demonstrateCASMechanism();
}
}
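The practical payoff of CAS-plus-bin-locking is that single-key read-modify-write methods such as merge() are atomic, so concurrent increments are never lost. A minimal sketch (class name and counts hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BinLockingEffect {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int t = 0; t < 8; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 10_000; i++) {
                    // merge() runs under the bin's lock, so no increment is lost
                    map.merge("hits", 1, Integer::sum);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(map.get("hits")); // 80000
    }
}
```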
5. Treeification Mechanism
Linked List to Tree Conversion:
import java.util.concurrent.ConcurrentHashMap;
import java.util.*;
import java.lang.reflect.*;
public class TreeificationMechanism {
public static class TreeificationDemo {
public void demonstrateTreeification() throws Exception {
ConcurrentHashMap<CustomKey, Integer> map = new ConcurrentHashMap<>(64, 0.75f, 1);
// Create keys that will hash to the same bucket
List<CustomKey> collisionKeys = generateCollisionKeys(20);
System.out.println("=== Treeification Process ===");
System.out.println("Initial table size: 64");
System.out.println("Treeification threshold: 8 nodes in a bucket (and table capacity >= 64)");
System.out.println("Untreeification threshold: 6 nodes (checked during resizing)");
// Add keys to trigger treeification
for (int i = 0; i < collisionKeys.size(); i++) {
CustomKey key = collisionKeys.get(i);
map.put(key, i);
if (i == 7) {
System.out.println("\nAfter adding 8 elements (at the treeification threshold; the bin treeifies on the next colliding insert):");
inspectBucketStructure(map, key);
} else if (i == 15) {
System.out.println("\nAfter adding 16 elements (tree should be formed):");
inspectBucketStructure(map, key);
}
}
// Remove elements to trigger untreeification
for (int i = 0; i < 10; i++) {
map.remove(collisionKeys.get(i));
if (i == 9) {
System.out.println("\nAfter removing 10 elements (the bin untreeifies once the tree becomes small enough):");
inspectBucketStructure(map, collisionKeys.get(19));
}
}
}
private List<CustomKey> generateCollisionKeys(int count) {
List<CustomKey> keys = new ArrayList<>();
// All keys will have same hash for demonstration
for (int i = 0; i < count; i++) {
keys.add(new CustomKey("key-" + i, 12345)); // Same hash for all
}
return keys;
}
private void inspectBucketStructure(ConcurrentHashMap<?, ?> map, Object key)
throws Exception {
// Use reflection to inspect internal structure (educational only).
// On Java 9+ this requires opening the package, e.g.:
//   --add-opens java.base/java.util.concurrent=ALL-UNNAMED
Field tableField = ConcurrentHashMap.class.getDeclaredField("table");
tableField.setAccessible(true);
Object[] table = (Object[]) tableField.get(map);
int hash = key.hashCode();
int index = (table.length - 1) & hash;
Object firstNode = table[index];
if (firstNode == null) {
System.out.println(" Bucket " + index + ": EMPTY");
} else {
String nodeType = firstNode.getClass().getSimpleName();
System.out.println(" Bucket " + index + ": " + nodeType);
// Count elements in bucket
int elementCount = countElementsInBucket(firstNode);
System.out.println(" Elements in bucket: " + elementCount);
}
}
private int countElementsInBucket(Object firstNode) throws Exception {
// Simplified element counting
if (firstNode.getClass().getSimpleName().contains("TreeBin")) {
return 8; // Assume tree for demonstration
} else {
// Count linked list nodes
int count = 0;
Object current = firstNode;
while (current != null) {
count++;
Field nextField = current.getClass().getDeclaredField("next");
nextField.setAccessible(true);
current = nextField.get(current);
}
return count;
}
}
}
// Custom key class that allows controlling hash code
static class CustomKey {
private final String name;
private final int hashCode;
public CustomKey(String name, int hashCode) {
this.name = name;
this.hashCode = hashCode;
}
@Override
public int hashCode() {
return hashCode; // Controlled hash for collision demonstration
}
@Override
public boolean equals(Object obj) {
if (this == obj) return true;
if (obj == null || getClass() != obj.getClass()) return false;
CustomKey that = (CustomKey) obj;
return Objects.equals(name, that.name);
}
@Override
public String toString() {
return name;
}
}
public static void main(String[] args) throws Exception {
TreeificationDemo demo = new TreeificationDemo();
demo.demonstrateTreeification();
}
}
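Treeification is a performance optimization, not a correctness requirement, but it is easy to check that a fully colliding key set still behaves correctly once the bin has become a TreeBin. A minimal sketch (class names hypothetical; the keys are deliberately not Comparable, so the tree falls back to identity-based tie-breaking):

```java
import java.util.concurrent.ConcurrentHashMap;

public class CollisionCorrectness {
    // Key whose hashCode always collides, forcing a single long bin
    // (a TreeBin once it holds more than 8 nodes)
    static final class BadKey {
        final int id;
        BadKey(int id) { this.id = id; }
        @Override public int hashCode() { return 1; } // every key collides
        @Override public boolean equals(Object o) {
            return o instanceof BadKey && ((BadKey) o).id == id;
        }
    }

    public static void main(String[] args) {
        ConcurrentHashMap<BadKey, Integer> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 1000; i++) map.put(new BadKey(i), i);
        boolean allFound = true;
        for (int i = 0; i < 1000; i++) {
            allFound &= Integer.valueOf(i).equals(map.get(new BadKey(i)));
        }
        System.out.println(map.size() + " " + allFound); // 1000 true
    }
}
```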
6. Resizing Mechanism
Progressive Resizing Strategy:
import java.util.concurrent.*; // ConcurrentHashMap, CountDownLatch
import java.util.concurrent.atomic.*;
import java.util.*;
public class ResizingMechanism {
public static class ResizingDemo {
public void demonstrateResizing() {
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>(16, 0.75f);
System.out.println("=== ConcurrentHashMap Resizing Process ===");
// Caveat: unlike HashMap, CHM treats initialCapacity as an estimate of the
// element count and presizes the table to hold it, so this map actually
// starts with a 32-slot table. The narration below uses the simpler
// HashMap-style model for illustration.
System.out.println("Requested capacity: 16");
System.out.println("Load factor: 0.75");
System.out.println("Illustrative resize threshold: 16 * 0.75 = 12 elements");
System.out.println();
// Track when resizing occurs
for (int i = 1; i <= 20; i++) {
map.put("key-" + i, i);
if (i == 12) {
System.out.println("After adding 12 elements (resize threshold):");
System.out.println(" - Table should be resizing or about to resize");
System.out.println(" - New capacity will be: 32");
} else if (i == 13) {
System.out.println("\nAfter adding 13 elements:");
System.out.println(" - Resizing in progress");
System.out.println(" - Some threads may help with transfer");
}
}
System.out.println("\nFinal state:");
System.out.println(" - Table capacity: 32");
System.out.println(" - Next resize at: 32 * 0.75 = 24 elements");
}
public void demonstrateMultiThreadedResizing() throws InterruptedException {
System.out.println("\n=== Multi-threaded Resizing ===");
ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>(8, 0.75f);
int threadCount = 4;
int elementsPerThread = 10;
List<Thread> threads = new ArrayList<>();
CountDownLatch startLatch = new CountDownLatch(1);
CountDownLatch endLatch = new CountDownLatch(threadCount);
for (int i = 0; i < threadCount; i++) {
final int threadId = i;
Thread thread = new Thread(() -> {
try {
startLatch.await(); // Wait for all threads to be ready
for (int j = 0; j < elementsPerThread; j++) {
int key = threadId * 1000 + j;
map.put(key, "Value-" + key);
// Simulate some work
Thread.sleep(10);
}
endLatch.countDown();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
threads.add(thread);
thread.start();
}
// Start all threads simultaneously
startLatch.countDown();
// Monitor resizing
System.out.println("Starting " + threadCount + " threads, each adding " +
elementsPerThread + " elements");
System.out.println("Requested capacity: 8 (illustrative resize threshold: 6)");
// Wait for completion
endLatch.await();
System.out.println("All threads completed");
System.out.println("Final map size: " + map.size());
}
}
// Size counting mechanism
public static class SizeCounting {
public void demonstrateSizeCounting() {
System.out.println("\n=== Size Counting Mechanism ===");
// ConcurrentHashMap uses a distributed counting approach
System.out.println("Base counter + CounterCell[] for concurrent size counting");
System.out.println();
System.out.println("When contention is low:");
System.out.println(" - Use baseCount with CAS operations");
System.out.println();
System.out.println("When contention is high:");
System.out.println(" - Use CounterCell[] to distribute counting");
System.out.println(" - Each thread updates its own counter cell");
System.out.println(" - Total size = baseCount + sum(counterCells)");
System.out.println();
System.out.println("Benefits:");
System.out.println(" - Reduces contention on single counter");
System.out.println(" - Improves scalability under high concurrency");
}
}
public static void main(String[] args) throws InterruptedException {
ResizingDemo resizingDemo = new ResizingDemo();
resizingDemo.demonstrateResizing();
resizingDemo.demonstrateMultiThreadedResizing();
SizeCounting sizeCounting = new SizeCounting();
sizeCounting.demonstrateSizeCounting();
}
}
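The baseCount-plus-CounterCell[] scheme described above is exposed publicly as java.util.concurrent.atomic.LongAdder, which uses the same striped-cell design. A minimal sketch (class name and counts hypothetical):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class StripedCounting {
    public static void main(String[] args) throws InterruptedException {
        // LongAdder uses the same base-plus-cells design CHM uses for size():
        // under contention, each thread updates its own cell, and sum()
        // totals base + all cells.
        LongAdder adder = new LongAdder();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int t = 0; t < 8; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 100_000; i++) adder.increment();
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(adder.sum()); // 800000
    }
}
```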
7. Iteration and Views
Weakly Consistent Iterators:
import java.util.concurrent.ConcurrentHashMap;
import java.util.*;
import java.util.concurrent.*;
public class IterationMechanism {
public static class IteratorDemo {
public void demonstrateWeaklyConsistentIteration() throws InterruptedException {
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
// Populate initial data
for (int i = 0; i < 10; i++) {
map.put("key-" + i, i);
}
System.out.println("=== Weakly Consistent Iterator Demo ===");
// Start iteration in one thread
Thread iterationThread = new Thread(() -> {
System.out.println("Iteration started...");
int count = 0;
for (Map.Entry<String, Integer> entry : map.entrySet()) {
System.out.println("Iterating: " + entry.getKey() + " = " + entry.getValue());
count++;
// Slow down iteration to allow concurrent modification
try {
Thread.sleep(100);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
break;
}
if (count >= 5) {
System.out.println("... (stopping iteration output for brevity)");
break;
}
}
System.out.println("Iteration completed");
});
// Start modification in another thread
Thread modificationThread = new Thread(() -> {
try {
// Wait for iteration to start
Thread.sleep(50);
System.out.println("Modification thread: Adding new entries...");
map.put("key-new-1", 100);
map.put("key-new-2", 200);
Thread.sleep(200);
System.out.println("Modification thread: Removing some entries...");
map.remove("key-2");
map.remove("key-5");
Thread.sleep(200);
System.out.println("Modification thread: Updating entries...");
map.put("key-1", 999);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
});
iterationThread.start();
modificationThread.start();
iterationThread.join();
modificationThread.join();
System.out.println("\nFinal map contents:");
map.forEach((k, v) -> System.out.println(k + " = " + v));
}
public void demonstrateKeySetViews() {
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
for (int i = 0; i < 5; i++) {
map.put("key-" + i, i);
}
System.out.println("\n=== KeySet Views ===");
// Different keySet views
Set<String> keySet = map.keySet();                     // live view backed by the map
Set<String> newKeySet = ConcurrentHashMap.newKeySet(); // Java 8+: independent set (note: static factory)
Set<String> mappedKeySet = map.keySet(100);            // Java 8+: view whose add() maps new keys to 100
System.out.println("Standard keySet: " + keySet);
System.out.println("New keySet (independent): " + newKeySet);
System.out.println("Mapped keySet: " + mappedKeySet);
// Demonstrate view behavior
map.put("key-new", 99);
System.out.println("After adding to map - keySet: " + keySet);
System.out.println("After adding to map - newKeySet: " + newKeySet);
}
}
// Bulk operations
public static class BulkOperations {
public void demonstrateBulkOperations() {
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
for (int i = 0; i < 10; i++) {
map.put("key-" + i, i);
}
System.out.println("\n=== Bulk Operations ===");
// forEach - parallel processing
System.out.println("forEach (sequential):");
map.forEach((k, v) -> System.out.println(k + " = " + v));
System.out.println("\nforEach (parallel, threshold 2):");
map.forEach(2, (k, v) -> System.out.println(Thread.currentThread().getName() + ": " + k));
// search - find first matching element
String foundKey = map.search(2, (k, v) -> v > 5 ? k : null);
System.out.println("\nSearch result (first value > 5): " + foundKey);
// reduce - combine all entries
Integer sum = map.reduce(2, (k, v) -> v, Integer::sum);
System.out.println("Reduce result (sum of values): " + sum);
}
}
public static void main(String[] args) throws InterruptedException {
IteratorDemo iteratorDemo = new IteratorDemo();
iteratorDemo.demonstrateWeaklyConsistentIteration();
iteratorDemo.demonstrateKeySetViews();
BulkOperations bulkOps = new BulkOperations();
bulkOps.demonstrateBulkOperations();
}
}
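The weak-consistency guarantee also holds single-threaded: unlike HashMap, removing entries while iterating a for-each loop never throws ConcurrentModificationException. A minimal sketch (class name hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NoCmeDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 10; i++) map.put("key-" + i, i);
        // Removing during for-each iteration is legal for CHM;
        // the same loop over a HashMap would throw ConcurrentModificationException.
        for (Map.Entry<String, Integer> e : map.entrySet()) {
            if (e.getValue() % 2 == 0) map.remove(e.getKey());
        }
        System.out.println(map.size()); // 5 - only odd-valued entries remain
    }
}
```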
8. Performance Characteristics and Benchmarks
import java.util.concurrent.*;
import java.util.*;
import java.util.concurrent.atomic.*;
public class PerformanceAnalysis {
public static class ConcurrentHashMapBenchmark {
public void benchmarkOperations() throws InterruptedException {
int elementCount = 100000;
int threadCount = 8;
System.out.println("=== ConcurrentHashMap Performance Benchmark ===");
System.out.println("Elements: " + elementCount + ", Threads: " + threadCount);
System.out.println("(Illustrative only: no JIT warmup or isolation; use JMH for real numbers)");
System.out.println();
// Benchmark ConcurrentHashMap
benchmarkMap(new ConcurrentHashMap<>(), "ConcurrentHashMap", elementCount, threadCount);
// Benchmark synchronized HashMap for comparison
Map<String, Integer> synchronizedMap = Collections.synchronizedMap(new HashMap<>());
benchmarkMap(synchronizedMap, "SynchronizedHashMap", elementCount, threadCount);
// Benchmark Hashtable for comparison
benchmarkMap(new Hashtable<>(), "Hashtable", elementCount, threadCount);
}
private void benchmarkMap(Map<String, Integer> map, String mapName,
int elementCount, int threadCount) throws InterruptedException {
System.out.println("Testing: " + mapName);
// Write performance
long writeTime = benchmarkWrites(map, elementCount, threadCount);
System.out.printf(" Write time: %,d ms%n", writeTime);
// Read performance
long readTime = benchmarkReads(map, elementCount, threadCount);
System.out.printf(" Read time: %,d ms%n", readTime);
// Mixed operation performance
long mixedTime = benchmarkMixedOperations(map, elementCount, threadCount);
System.out.printf(" Mixed time: %,d ms%n", mixedTime);
System.out.println();
}
private long benchmarkWrites(Map<String, Integer> map, int elementCount, int threadCount)
throws InterruptedException {
CountDownLatch startLatch = new CountDownLatch(1);
CountDownLatch endLatch = new CountDownLatch(threadCount);
AtomicLong totalTime = new AtomicLong();
for (int i = 0; i < threadCount; i++) {
final int threadId = i;
new Thread(() -> {
try {
startLatch.await();
long startTime = System.currentTimeMillis();
// Each thread writes a portion of elements
int elementsPerThread = elementCount / threadCount;
int start = threadId * elementsPerThread;
int end = (threadId == threadCount - 1) ? elementCount : start + elementsPerThread;
for (int j = start; j < end; j++) {
map.put("key-" + j, j);
}
long endTime = System.currentTimeMillis();
totalTime.addAndGet(endTime - startTime);
endLatch.countDown();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}).start();
}
startLatch.countDown();
endLatch.await();
return totalTime.get() / threadCount; // Average time per thread
}
private long benchmarkReads(Map<String, Integer> map, int elementCount, int threadCount)
throws InterruptedException {
// First populate the map
for (int i = 0; i < elementCount; i++) {
map.put("key-" + i, i);
}
CountDownLatch startLatch = new CountDownLatch(1);
CountDownLatch endLatch = new CountDownLatch(threadCount);
AtomicLong totalTime = new AtomicLong();
for (int i = 0; i < threadCount; i++) {
new Thread(() -> {
try {
startLatch.await();
long startTime = System.currentTimeMillis();
Random random = new Random();
for (int j = 0; j < 10000; j++) {
int keyIndex = random.nextInt(elementCount);
map.get("key-" + keyIndex);
}
long endTime = System.currentTimeMillis();
totalTime.addAndGet(endTime - startTime);
endLatch.countDown();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}).start();
}
startLatch.countDown();
endLatch.await();
return totalTime.get() / threadCount;
}
private long benchmarkMixedOperations(Map<String, Integer> map, int elementCount, int threadCount)
throws InterruptedException {
CountDownLatch startLatch = new CountDownLatch(1);
CountDownLatch endLatch = new CountDownLatch(threadCount);
AtomicLong totalTime = new AtomicLong();
for (int i = 0; i < threadCount; i++) {
final int threadId = i;
new Thread(() -> {
try {
startLatch.await();
long startTime = System.currentTimeMillis();
Random random = new Random();
for (int j = 0; j < 5000; j++) {
int operation = random.nextInt(100);
int keyIndex = random.nextInt(elementCount);
String key = "key-" + keyIndex;
if (operation < 60) { // 60% reads
map.get(key);
} else if (operation < 90) { // 30% writes
map.put(key, threadId);
} else { // 10% removes
map.remove(key);
}
}
long endTime = System.currentTimeMillis();
totalTime.addAndGet(endTime - startTime);
endLatch.countDown();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}).start();
}
startLatch.countDown();
endLatch.await();
return totalTime.get() / threadCount;
}
}
// Memory usage analysis
public static class MemoryAnalysis {
public void analyzeMemoryUsage() {
System.out.println("=== Memory Usage Analysis ===");
Runtime runtime = Runtime.getRuntime();
// Test with different map sizes
for (int size : new int[]{1000, 10000, 100000}) {
System.gc();
long memoryBefore = runtime.totalMemory() - runtime.freeMemory();
ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
for (int i = 0; i < size; i++) {
map.put(i, "Value-" + i);
}
long memoryAfter = runtime.totalMemory() - runtime.freeMemory();
long memoryUsed = memoryAfter - memoryBefore;
System.out.printf("Size: %,6d | Memory used: %,8d bytes | Per entry: %6.2f bytes%n",
size, memoryUsed, (double) memoryUsed / size);
}
}
}
public static void main(String[] args) throws InterruptedException {
ConcurrentHashMapBenchmark benchmark = new ConcurrentHashMapBenchmark();
benchmark.benchmarkOperations();
MemoryAnalysis memoryAnalysis = new MemoryAnalysis();
memoryAnalysis.analyzeMemoryUsage();
}
}
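The wall-clock timings above carry the usual microbenchmark hazards (no JIT warmup, possible dead-code elimination). A slightly more careful single-threaded sketch with warmup passes might look like the following (class name and counts illustrative; JMH is the right tool for real numbers):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QuickTiming {
    static long time(Map<String, Integer> map, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) map.put("k" + i, i);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 100_000;
        // Warmup passes so the JIT compiles the hot path before measuring
        for (int w = 0; w < 3; w++) {
            time(new ConcurrentHashMap<>(), n);
            time(Collections.synchronizedMap(new HashMap<>()), n);
        }
        long chm = time(new ConcurrentHashMap<>(), n);
        long sync = time(Collections.synchronizedMap(new HashMap<>()), n);
        System.out.printf("single-thread put: CHM %,d ns vs syncMap %,d ns%n", chm, sync);
    }
}
```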
9. Best Practices and Common Pitfalls
import java.util.concurrent.ConcurrentHashMap;
import java.util.*;
import java.util.concurrent.atomic.*;
public class BestPractices {
public static class CorrectUsage {
// 1. Proper initialization with expected size
public void properInitialization() {
int expectedSize = 1000;
float loadFactor = 0.75f;
int concurrencyLevel = 16; // Estimated number of concurrently updating threads.
// Since Java 8 there are no segments; this hint only influences initial sizing.
ConcurrentHashMap<String, Integer> properlySizedMap =
new ConcurrentHashMap<>(expectedSize, loadFactor, concurrencyLevel);
System.out.println("Properly initialized map with:");
System.out.println(" - Expected size: " + expectedSize);
System.out.println(" - Load factor: " + loadFactor);
System.out.println(" - Concurrency level: " + concurrencyLevel);
}
// 2. Atomic operations using compute methods
public void atomicOperations() {
ConcurrentHashMap<String, AtomicInteger> counterMap = new ConcurrentHashMap<>();
// Thread-safe counter increment
String key = "request.count";
counterMap.compute(key, (k, current) -> {
if (current == null) {
return new AtomicInteger(1);
}
current.incrementAndGet();
return current;
});
// More concise using computeIfAbsent and then update
AtomicInteger counter = counterMap.computeIfAbsent(key, k -> new AtomicInteger(0));
counter.incrementAndGet();
System.out.println("Counter value: " + counterMap.get(key).get());
}
// 3. Safe compound operations
public void safeCompoundOperations() {
ConcurrentHashMap<String, List<String>> userPreferences = new ConcurrentHashMap<>();
String userId = "user123";
// Safe way to update a collection value
userPreferences.compute(userId, (k, currentList) -> {
List<String> list = (currentList == null) ? new ArrayList<>() : currentList;
list.add("preference-" + System.currentTimeMillis());
return list;
});
// For better performance with frequent updates
List<String> preferences =
userPreferences.computeIfAbsent(userId, k -> Collections.synchronizedList(new ArrayList<>()));
preferences.add("another-preference");
System.out.println("User preferences: " + userPreferences.get(userId));
}
}
public static class CommonPitfalls {
// 1. Non-atomic check-then-act
public void checkThenActProblem() {
ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
String key = "expensive.computation";
// ❌ WRONG - Race condition between containsKey and put
if (!cache.containsKey(key)) {
// Another thread might put here
String value = computeExpensiveValue();
cache.put(key, value); // Might overwrite another thread's value
}
// ✅ CORRECT - Use putIfAbsent or computeIfAbsent
cache.computeIfAbsent(key, k -> computeExpensiveValue());
}
// 2. Assuming size() is exact
public void sizeMethodPitfall() {
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
// ❌ WRONG - size() might be outdated immediately
if (map.size() > 1000) {
// By the time we get here, size might have changed
System.out.println("Map is large: " + map.size());
}
// ✅ CORRECT - treat any count as an estimate, useful for monitoring only.
// Prefer mappingCount() (Java 8+), which returns a long.
long estimate = map.mappingCount();
System.out.println("Approximate size: " + estimate);
}
// 3. Iteration and modification
public void iterationPitfall() {
ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
map.put("a", 1);
map.put("b", 2);
map.put("c", 3);
// ⚠️ LEGAL but subtle - no ConcurrentModificationException is thrown,
// but the weakly consistent iterator may or may not reflect the removal
for (String key : map.keySet()) {
if (key.equals("b")) {
map.remove(key); // safe for CHM (the same loop would break a plain HashMap)
}
}
// ✅ CLEAREST - use iterator.remove() or keySet().removeIf(...)
Iterator<Map.Entry<String, Integer>> iterator = map.entrySet().iterator();
while (iterator.hasNext()) {
Map.Entry<String, Integer> entry = iterator.next();
if (entry.getKey().equals("b")) {
iterator.remove(); // Safe removal during iteration
}
}
}
private String computeExpensiveValue() {
try {
Thread.sleep(100);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
return "computed-value";
}
}
public static class AdvancedPatterns {
// 1. Cache with expiration
public static class ExpiringCache<K, V> {
private final ConcurrentHashMap<K, CacheEntry<V>> cache = new ConcurrentHashMap<>();
private final long defaultTtl;
public ExpiringCache(long defaultTtlMillis) {
this.defaultTtl = defaultTtlMillis;
}
public V get(K key) {
CacheEntry<V> entry = cache.get(key);
if (entry == null) return null;
if (System.currentTimeMillis() > entry.expirationTime) {
cache.remove(key, entry); // Remove only if still the same entry
return null;
}
return entry.value;
}
public void put(K key, V value) {
put(key, value, defaultTtl);
}
public void put(K key, V value, long ttlMillis) {
long expirationTime = System.currentTimeMillis() + ttlMillis;
cache.put(key, new CacheEntry<>(value, expirationTime));
}
// Background cleanup thread
public void startCleanup() {
Thread cleanupThread = new Thread(() -> {
while (!Thread.currentThread().isInterrupted()) {
try {
Thread.sleep(60000); // Cleanup every minute
cleanupExpired();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
}
});
cleanupThread.setDaemon(true);
cleanupThread.start();
}
private void cleanupExpired() {
long now = System.currentTimeMillis();
cache.entrySet().removeIf(entry -> now > entry.getValue().expirationTime);
}
private static class CacheEntry<V> {
final V value;
final long expirationTime;
CacheEntry(V value, long expirationTime) {
this.value = value;
this.expirationTime = expirationTime;
}
}
}
// 2. Rate limiter using compute
public static class RateLimiter {
private final ConcurrentHashMap<String, RequestInfo> requests = new ConcurrentHashMap<>();
private final int maxRequests;
private final long timeWindowMillis;
public RateLimiter(int maxRequests, long timeWindowMillis) {
this.maxRequests = maxRequests;
this.timeWindowMillis = timeWindowMillis;
}
public boolean allowRequest(String clientId) {
long currentTime = System.currentTimeMillis();
RequestInfo info = requests.compute(clientId, (k, existing) -> {
if (existing == null ||
currentTime - existing.windowStart > timeWindowMillis) {
// New time window
return new RequestInfo(1, currentTime);
}
// Count every attempt, including rejected ones; if the count stalled
// at maxRequests, the check below would wrongly keep allowing requests.
existing.count++;
return existing;
});
return info.count <= maxRequests;
}
private static class RequestInfo {
int count;
long windowStart;
RequestInfo(int count, long windowStart) {
this.count = count;
this.windowStart = windowStart;
}
}
}
}
public static void main(String[] args) {
CorrectUsage correctUsage = new CorrectUsage();
correctUsage.properInitialization();
correctUsage.atomicOperations();
correctUsage.safeCompoundOperations();
CommonPitfalls pitfalls = new CommonPitfalls();
pitfalls.checkThenActProblem();
AdvancedPatterns.ExpiringCache<String, String> cache =
new AdvancedPatterns.ExpiringCache<>(300000); // 5 minutes TTL
cache.put("key1", "value1");
System.out.println("Cache get: " + cache.get("key1"));
AdvancedPatterns.RateLimiter rateLimiter = new AdvancedPatterns.RateLimiter(10, 60000); // 10 requests/minute
System.out.println("Rate limit allowed: " + rateLimiter.allowRequest("client1"));
}
}
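For the plain-counter case in atomicOperations(), merge() is often the simplest correct idiom: it creates or combines the mapping atomically, so no AtomicInteger wrapper is needed. A minimal sketch (class name hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;

public class MergeCounter {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();
        // merge() either creates the mapping (key absent) or combines the
        // existing value with 1 using Integer::sum - atomically, under the bin lock.
        for (int i = 0; i < 5; i++) {
            hits.merge("request.count", 1, Integer::sum);
        }
        System.out.println(hits.get("request.count")); // 5
    }
}
```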
Conclusion
Key Design Takeaways:
- Fine-Grained Locking: Synchronizes on individual buckets (bins) rather than the entire map
- CAS Operations: Uses compare-and-swap for lock-free inserts into empty bins; reads are plain volatile reads and take no locks
- Treeification: Converts long linked lists to balanced trees for O(log n) performance
- Progressive Resizing: Allows concurrent reads and some writes during resizing
- Distributed Counting: Uses CounterCell[] to avoid contention on size counter
Performance Characteristics:
| Operation | Average Case | Worst Case | Thread Safety |
|---|---|---|---|
| get() | O(1) | O(log n) | Lock-free (volatile reads) |
| put() | O(1) | O(log n) | CAS + per-bin lock |
| remove() | O(1) | O(log n) | CAS + per-bin lock |
| Iteration | O(n) | O(n) | Weakly consistent |
When to Use ConcurrentHashMap:
- ✅ High-concurrency read-heavy workloads
- ✅ Frequent updates with moderate contention
- ✅ Need for weakly consistent iteration
- ✅ Atomic compute operations required
When to Consider Alternatives:
- ❌ Extreme write contention on a few hot keys (shard the hot value, e.g. with LongAdder-style accumulators)
- ❌ Need for a consistent snapshot during iteration (copy the map, or synchronize externally)
- ❌ Single-threaded or externally synchronized code (a plain HashMap is cheaper)
ConcurrentHashMap's sophisticated internal design makes it the go-to choice for high-performance concurrent mapping in Java, balancing thread safety with exceptional performance through its innovative use of fine-grained locking, CAS operations, and adaptive data structures.