In high-performance applications, caching is essential for reducing latency and database load. While read-through caches are common, write-behind caching goes a step further by optimizing the write path as well. The pattern is a strong fit for applications with heavy write workloads where eventual consistency is acceptable.
What is Write-Behind Caching?
Write-Behind is a caching pattern where the application writes data to the cache immediately, and the cache asynchronously persists these changes to the underlying data store (like a database) after a delay.
Key Characteristics:
- Write to cache first, database later
- Asynchronous database writes
- Batched operations for efficiency
- Eventual consistency between cache and database
How It Differs from Other Write Strategies
| Pattern | Write Order | Consistency | Performance | Risk |
|---|---|---|---|---|
| Write-Through | Cache → DB (sync) | Strong | Good | Higher per-write latency |
| Write-Around | DB → Cache (on read) | Strong in the DB | Good for write-heavy | Cache misses, stale cached entries |
| Write-Behind (Write-Back) | Cache → DB (async) | Eventual | Best write throughput | Data loss on crash |
Architecture & Flow
        [Application]
              │
              ↓ (immediate write)
     [Write-Behind Cache] ←---→ [Background Writer]
              ↑                        │
              │                        ↓ (batched, async)
     [Read Operations]            [Database]
- Write Path: Application writes to cache and returns immediately. The write is queued for persistence.
- Read Path: Application always reads from cache (which should have the latest data).
- Background Process: A separate thread periodically flushes batched writes to the database.
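Before wiring this into a framework, the three paths can be sketched with nothing but the JDK. In the sketch below, `Database` is just a hypothetical in-memory stand-in for a real data store, and `flush()` plays the role of the background writer's periodic task:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal write-behind sketch: cache and "database" are plain maps.
class WriteBehindSketch {
    final Map<String, String> cache = new ConcurrentHashMap<>();
    final Map<String, String> database = new ConcurrentHashMap<>(); // stand-in for a real DB
    final Queue<String> dirtyKeys = new ConcurrentLinkedQueue<>();

    // Write path: update cache, queue the key, return immediately
    void put(String key, String value) {
        cache.put(key, value);
        dirtyKeys.offer(key);
    }

    // Read path: cache first, fall back to the "database"
    String get(String key) {
        return cache.computeIfAbsent(key, database::get);
    }

    // Background path: drain the queue and persist in one batch
    void flush() {
        List<String> batch = new ArrayList<>();
        String key;
        while ((key = dirtyKeys.poll()) != null) {
            batch.add(key);
        }
        for (String k : batch) {
            database.put(k, cache.get(k)); // runs batched and asynchronously in a real system
        }
    }
}
```

After `put`, the value is immediately visible through `get`, while `database` only reflects it once `flush()` has run — exactly the window in which a crash would lose the write.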
Hands-On Implementation: Building a Write-Behind Cache in Java
Let's implement a simple write-behind cache using Caffeine for the in-memory cache and a scheduled executor for the background writer, inside a Spring Boot application.
Step 1: Project Setup
Use Spring Initializr with these dependencies:
- Spring Web
- Spring Data JPA
- H2 Database (for simplicity)
- Caffeine (cache implementation)
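Assuming a Maven build, the corresponding pom.xml dependencies would look roughly like this (versions are managed by the Spring Boot parent/BOM):

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>com.github.ben-manes.caffeine</groupId>
        <artifactId>caffeine</artifactId>
    </dependency>
</dependencies>
```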
Step 2: Domain Model and Repository
Entity Class:
@Entity
@Table(name = "users")
public class User {
@Id
private Long id;
private String name;
private String email;
// Constructors, getters, setters
public User() {}
public User(Long id, String name, String email) {
this.id = id;
this.name = name;
this.email = email;
}
// getters and setters...
}
Repository:
@Repository
public interface UserRepository extends JpaRepository<User, Long> {
}
Step 3: Write-Behind Cache Implementation
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import jakarta.annotation.PreDestroy; // javax.annotation.PreDestroy on Spring Boot 2.x
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
@Service
public class UserWriteBehindCache {
private static final Logger logger = LoggerFactory.getLogger(UserWriteBehindCache.class);
private final Cache<Long, User> cache;
private final UserRepository userRepository;
private final Queue<WriteOperation> writeQueue;
private final ScheduledExecutorService executor;
// Configuration (static final: class-level constants, not per-instance state)
private static final int BATCH_SIZE = 10;
private static final long FLUSH_INTERVAL_MS = 1000; // 1 second
public UserWriteBehindCache(UserRepository userRepository) {
this.userRepository = userRepository;
this.writeQueue = new ConcurrentLinkedQueue<>();
// Initialize Caffeine cache
this.cache = Caffeine.newBuilder()
.maximumSize(1000)
.build();
// Initialize background writer
this.executor = Executors.newSingleThreadScheduledExecutor();
this.executor.scheduleAtFixedRate(this::flushToDatabase,
FLUSH_INTERVAL_MS, FLUSH_INTERVAL_MS, TimeUnit.MILLISECONDS);
}
// Write operation: immediate cache update + async database write
public void save(User user) {
// 1. Update cache immediately
cache.put(user.getId(), user);
// 2. Queue for asynchronous database write
writeQueue.offer(new WriteOperation(OperationType.SAVE, user));
logger.info("User {} cached and queued for persistence", user.getId());
}
// Read operation: always from cache
public Optional<User> findById(Long id) {
User user = cache.get(id, key -> {
// Read-through: if not in cache, load from database.
// Caveat: if an entry with a still-queued write is evicted (maximumSize),
// this re-reads stale data until the queued write has been flushed.
return userRepository.findById(key).orElse(null);
});
return Optional.ofNullable(user);
}
// Delete operation
public void delete(Long id) {
cache.invalidate(id);
User dummyUser = new User(id, null, null);
writeQueue.offer(new WriteOperation(OperationType.DELETE, dummyUser));
logger.info("User {} marked for deletion", id);
}
// Background writer that batches operations
private void flushToDatabase() {
if (writeQueue.isEmpty()) {
return;
}
List<WriteOperation> batch = new ArrayList<>(BATCH_SIZE);
WriteOperation operation;
// Collect a batch of operations. Check the batch size BEFORE polling:
// polling first and then checking the size would silently drop the
// operation that overflows a full batch.
while (batch.size() < BATCH_SIZE && (operation = writeQueue.poll()) != null) {
batch.add(operation);
}
if (!batch.isEmpty()) {
try {
processBatch(batch);
logger.info("Successfully persisted batch of {} operations", batch.size());
} catch (Exception e) {
logger.error("Failed to persist batch, re-queueing operations", e);
// Re-queue failed operations (they rejoin at the tail, so ordering is
// best-effort; see the resilience patterns below for retry/DLQ options)
writeQueue.addAll(batch);
}
}
}
private void processBatch(List<WriteOperation> batch) {
List<User> toSave = new ArrayList<>();
List<Long> toDelete = new ArrayList<>();
// Separate saves and deletes
for (WriteOperation op : batch) {
if (op.type == OperationType.SAVE) {
toSave.add(op.user);
} else if (op.type == OperationType.DELETE) {
toDelete.add(op.user.getId());
}
}
// Batch database operations
if (!toSave.isEmpty()) {
userRepository.saveAll(toSave);
}
if (!toDelete.isEmpty()) {
userRepository.deleteAllById(toDelete);
}
}
// Shutdown hook to flush remaining operations
@PreDestroy
public void shutdown() {
logger.info("Shutting down write-behind cache, flushing remaining operations...");
executor.shutdown();
try {
// Give some time to flush remaining operations
if (!executor.awaitTermination(5, TimeUnit.SECONDS)) {
executor.shutdownNow();
}
} catch (InterruptedException e) {
executor.shutdownNow();
Thread.currentThread().interrupt();
}
// Final flush: drain everything still queued, not just a single batch
while (!writeQueue.isEmpty()) {
flushToDatabase();
}
}
// Helper classes
private static class WriteOperation {
final OperationType type;
final User user;
WriteOperation(OperationType type, User user) {
this.type = type;
this.user = user;
}
}
private enum OperationType {
SAVE, DELETE
}
}
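The batch-collection loop above is easy to get subtly wrong (polling before checking the batch size drops the overflowing operation), so it is worth isolating and testing without Spring or a database. A dependency-free extraction of just that step:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Isolates the batch-collection step of flushToDatabase() for unit testing.
// Checking batch.size() BEFORE polling keeps the (BATCH_SIZE + 1)-th
// operation safely in the queue for the next flush cycle.
class BatchCollector {
    static <T> List<T> drain(Queue<T> queue, int batchSize) {
        List<T> batch = new ArrayList<>(batchSize);
        T item;
        while (batch.size() < batchSize && (item = queue.poll()) != null) {
            batch.add(item);
        }
        return batch;
    }
}
```

With 15 queued operations and a batch size of 10, one drain returns 10 items and leaves exactly 5 in the queue — nothing is lost between flush cycles.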
Step 4: REST Controller
@RestController
@RequestMapping("/api/users")
public class UserController {
private final UserWriteBehindCache userCache;
public UserController(UserWriteBehindCache userCache) {
this.userCache = userCache;
}
@PostMapping
public ResponseEntity<User> createUser(@RequestBody User user) {
userCache.save(user);
return ResponseEntity.accepted().body(user);
}
@GetMapping("/{id}")
public ResponseEntity<User> getUser(@PathVariable Long id) {
return userCache.findById(id)
.map(ResponseEntity::ok)
.orElse(ResponseEntity.notFound().build());
}
@PutMapping("/{id}")
public ResponseEntity<User> updateUser(@PathVariable Long id, @RequestBody User user) {
user.setId(id);
userCache.save(user);
return ResponseEntity.accepted().body(user);
}
@DeleteMapping("/{id}")
public ResponseEntity<Void> deleteUser(@PathVariable Long id) {
userCache.delete(id);
return ResponseEntity.accepted().build();
}
}
Step 5: Configuration
Application Properties:
# H2 Database
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=

# H2 Console (for testing)
spring.h2.console.enabled=true

# JPA
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.format_sql=true
Testing the Implementation
- Start the application
- Create a user:
curl -X POST http://localhost:8080/api/users \
-H "Content-Type: application/json" \
-d '{"id":1, "name":"John Doe", "email":"[email protected]"}'
- Immediately read the user:
curl http://localhost:8080/api/users/1
- Check the H2 console (http://localhost:8080/h2-console) to see the data eventually persisted.
Observe in logs:
- Immediate response from POST/PUT operations
- Batch persistence messages every second
- Data visible in cache immediately, in database after short delay
Production-Grade Considerations
1. Using Redis with Write-Behind
For distributed systems, Redis can act as both the shared cache and a durable write queue (a Redis list here; Redis Streams are a more robust alternative):
@Component
public class RedisWriteBehindCache {
@Autowired
private RedisTemplate<String, Object> redisTemplate;
@Value("${app.cache.write-behind.queue:write-queue}")
private String writeQueueName;
public void save(String key, Object value) {
// 1. Write to Redis
redisTemplate.opsForValue().set(key, value);
// 2. Push to the write queue (a separate consumer pops and persists these;
// WriteOperation must be serializable by the configured RedisSerializer)
redisTemplate.opsForList().rightPush(writeQueueName, new WriteOperation(key, value));
}
}
2. Resilience Patterns
- Retry with exponential backoff for failed database writes
- Dead Letter Queue for persistently failing operations
- Circuit breaker to prevent database overload
- Write-ahead log for crash recovery
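As a sketch of the first bullet, a retry helper with exponential backoff might look like the following. The attempt counts and delays are illustrative only; in production you would typically reach for Resilience4j or Spring Retry rather than hand-rolling this:

```java
import java.util.concurrent.Callable;

// Illustrative retry-with-exponential-backoff wrapper for a flush attempt.
class RetrySupport {
    static <T> T withRetry(Callable<T> action, int maxAttempts, long initialDelayMs) throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // exponential backoff: delay doubles each attempt
                }
            }
        }
        // Exhausted: hand the batch to a dead letter queue instead of retrying forever
        throw last;
    }
}
```

A flush that fails transiently twice and then succeeds would complete on the third attempt without the caller noticing.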
3. Monitoring
- Cache hit/miss ratios
- Queue size and processing latency
- Batch efficiency metrics
- Database write performance
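At minimum, these can be simple in-process counters exposed to your metrics system (Micrometer, in a Spring Boot app). A framework-free sketch of the first and third bullets:

```java
import java.util.concurrent.atomic.AtomicLong;

// Bare-bones counters for write-behind metrics. In a Spring Boot app these
// would typically be Micrometer Counter/Gauge instances instead.
class WriteBehindMetrics {
    final AtomicLong cacheHits = new AtomicLong();
    final AtomicLong cacheMisses = new AtomicLong();
    final AtomicLong batchesFlushed = new AtomicLong();
    final AtomicLong operationsFlushed = new AtomicLong();

    void recordHit() { cacheHits.incrementAndGet(); }
    void recordMiss() { cacheMisses.incrementAndGet(); }
    void recordBatch(int size) {
        batchesFlushed.incrementAndGet();
        operationsFlushed.addAndGet(size);
    }

    // Fraction of reads served from cache
    double hitRatio() {
        long total = cacheHits.get() + cacheMisses.get();
        return total == 0 ? 0.0 : (double) cacheHits.get() / total;
    }

    // Average operations per flushed batch: low values suggest the flush
    // interval is too aggressive for the write rate
    double avgBatchSize() {
        long batches = batchesFlushed.get();
        return batches == 0 ? 0.0 : (double) operationsFlushed.get() / batches;
    }
}
```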
When to Use Write-Behind Caching
✅ Ideal Use Cases:
- High-frequency writes (clickstream, metrics, logging)
- Applications tolerant of eventual consistency
- Systems where write performance is critical
- Bulk data processing pipelines
❌ Avoid When:
- Strong consistency is required (financial transactions)
- Data loss is unacceptable (use write-through instead)
- Simple read-heavy workloads
Conclusion
Write-Behind caching is a powerful pattern that can dramatically improve write performance by batching and deferring database operations. While it introduces complexity around consistency and data safety, the performance benefits for appropriate use cases are substantial.
The key to successful implementation is:
- Proper batching to optimize database usage
- Robust error handling and retry mechanisms
- Comprehensive monitoring to track data consistency
- Clear understanding of consistency requirements
By carefully implementing write-behind caching, you can build Java applications that handle massive write workloads while maintaining responsive user experiences.