Monitoring Resilience: Exporting Resilience4j Metrics to Prometheus in Java

In a distributed system, failures are inevitable. While Resilience4j provides powerful patterns like Circuit Breakers, Rate Limiters, and Retries to make your application resilient, you need visibility into how these patterns are performing. Prometheus, combined with Grafana, offers the perfect observability stack to monitor these resilience patterns in real-time.

This article provides a comprehensive guide to exposing Resilience4j metrics to Prometheus, creating actionable dashboards, and leveraging these metrics for operational excellence.


Why Monitor Resilience4j Patterns?

Each Resilience4j pattern provides crucial metrics that indicate the health of your service interactions:

  • Circuit Breaker: Failure rates, state transitions, call counts
  • Rate Limiter: Available permissions, wait times
  • Retry: Retry attempts, successful/failed retries
  • Bulkhead: Concurrent calls, queue depths
  • Time Limiter: Timeout occurrences, duration metrics

Without proper monitoring, you're operating blind to how your resilience patterns are performing under load.
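
These numbers are also available programmatically. As a minimal sketch using Resilience4j's core Metrics API (standalone, no Spring involved; the instance name backendDemo is arbitrary):

import java.util.concurrent.TimeUnit;

import io.github.resilience4j.circuitbreaker.CircuitBreaker;

public class MetricsPeek {

    public static void main(String[] args) {
        CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("backendDemo");

        // Record one success and one failure so the counters have data
        circuitBreaker.onSuccess(120, TimeUnit.MILLISECONDS);
        circuitBreaker.onError(80, TimeUnit.MILLISECONDS, new RuntimeException("boom"));

        // Failure rate reports -1 until the sliding window has enough calls
        CircuitBreaker.Metrics metrics = circuitBreaker.getMetrics();
        System.out.println("State: " + circuitBreaker.getState());
        System.out.println("Failure rate (%): " + metrics.getFailureRate());
        System.out.println("Successful calls: " + metrics.getNumberOfSuccessfulCalls());
        System.out.println("Failed calls: " + metrics.getNumberOfFailedCalls());
    }
}

These same counters are what back the Prometheus metrics discussed throughout this article.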


Architecture Overview

Java Application → Resilience4j → Micrometer Metrics → Prometheus Registry → /actuator/prometheus
        ↓
Prometheus Server (scrapes the /actuator/prometheus endpoint)
        ↓
Grafana Dashboard (visualizes the metrics)
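
With the Spring Boot starter used below, this entire pipeline is wired automatically. For plain-Java services (or just to see what the starter does for you), the binding looks roughly like this, using the tagged metrics publishers from the resilience4j-micrometer module:

import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;
import io.github.resilience4j.micrometer.tagged.TaggedCircuitBreakerMetrics;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

public class ManualBinding {

    public static void main(String[] args) {
        // The registry all circuit breakers are created from
        CircuitBreakerRegistry circuitBreakers = CircuitBreakerRegistry.ofDefaults();
        circuitBreakers.circuitBreaker("backendService");

        // Micrometer registry that renders the Prometheus text format
        PrometheusMeterRegistry prometheus = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);

        // Bind Resilience4j's circuit breaker metrics into Micrometer
        TaggedCircuitBreakerMetrics.ofCircuitBreakerRegistry(circuitBreakers).bindTo(prometheus);

        // This is the payload a /actuator/prometheus-style endpoint would serve
        System.out.println(prometheus.scrape());
    }
}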

Implementation Guide

1. Dependencies Setup

First, add the necessary dependencies to your pom.xml:

<dependencies>
    <!-- Spring Boot Starter -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <!-- Resilience4j Core -->
    <dependency>
        <groupId>io.github.resilience4j</groupId>
        <artifactId>resilience4j-spring-boot3</artifactId>
        <version>2.1.0</version>
    </dependency>
    <dependency>
        <groupId>io.github.resilience4j</groupId>
        <artifactId>resilience4j-micrometer</artifactId>
        <version>2.1.0</version>
    </dependency>
    <!-- Micrometer Prometheus -->
    <dependency>
        <groupId>io.micrometer</groupId>
        <artifactId>micrometer-registry-prometheus</artifactId>
    </dependency>
    <!-- AspectJ for Resilience4j Annotations -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
    </dependency>
</dependencies>

2. Application Configuration

Configure your application.yml to expose metrics:

spring:
  application:
    name: resilience4j-demo

management:
  endpoints:
    web:
      exposure:
        include: health,info,prometheus,metrics
  endpoint:
    prometheus:
      enabled: true
    health:
      show-details: always
  # Spring Boot 3.x property; on Boot 2.x this was management.metrics.export.prometheus.enabled
  prometheus:
    metrics:
      export:
        enabled: true
  metrics:
    distribution:
      percentiles-histogram:
        http.server.requests: true
    tags:
      application: ${spring.application.name}

resilience4j:
  circuitbreaker:
    instances:
      backendService:
        register-health-indicator: true
        sliding-window-size: 10
        failure-rate-threshold: 50
        wait-duration-in-open-state: 10s
        permitted-number-of-calls-in-half-open-state: 3
  ratelimiter:
    instances:
      pricingService:
        limit-for-period: 5
        limit-refresh-period: 1s
        timeout-duration: 0
  retry:
    instances:
      externalApi:
        max-attempts: 3
        wait-duration: 500ms
  bulkhead:
    instances:
      databaseService:
        max-concurrent-calls: 5
        max-wait-duration: 100ms
  # Matches the @TimeLimiter(name = "backendService") annotation used in the
  # service below; without an explicit instance the 1s default would apply
  timelimiter:
    instances:
      backendService:
        timeout-duration: 3s

logging:
  level:
    io.github.resilience4j: DEBUG
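
A quick way to confirm the YAML was picked up is to log the registered instances at startup. A sketch, assuming the Spring Boot starter has populated the registries:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;

@Component
public class ResilienceConfigLogger implements ApplicationRunner {

    private static final Logger logger = LoggerFactory.getLogger(ResilienceConfigLogger.class);

    private final CircuitBreakerRegistry circuitBreakerRegistry;

    public ResilienceConfigLogger(CircuitBreakerRegistry circuitBreakerRegistry) {
        this.circuitBreakerRegistry = circuitBreakerRegistry;
    }

    @Override
    public void run(ApplicationArguments args) {
        // Log every configured circuit breaker and its failure-rate threshold
        circuitBreakerRegistry.getAllCircuitBreakers().forEach(cb ->
                logger.info("Circuit breaker '{}' -> failureRateThreshold={}%",
                        cb.getName(),
                        cb.getCircuitBreakerConfig().getFailureRateThreshold()));
    }
}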

3. Service Implementation with Resilience4j

Create a service that uses various Resilience4j patterns:

import java.util.Random;
import java.util.concurrent.CompletableFuture;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;

import io.github.resilience4j.bulkhead.annotation.Bulkhead;
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import io.github.resilience4j.ratelimiter.annotation.RateLimiter;
import io.github.resilience4j.retry.annotation.Retry;
import io.github.resilience4j.timelimiter.annotation.TimeLimiter;

@Service
public class BackendService {

    private static final Logger logger = LoggerFactory.getLogger(BackendService.class);
    private final Random random = new Random();

    @CircuitBreaker(name = "backendService", fallbackMethod = "fallback")
    @RateLimiter(name = "pricingService")
    @Bulkhead(name = "databaseService")
    @Retry(name = "externalApi", fallbackMethod = "retryFallback")
    @TimeLimiter(name = "backendService")
    public CompletableFuture<String> processRequest(String input) {
        return CompletableFuture.supplyAsync(() -> {
            // Simulate slow responses, failures, and timeouts
            simulatePotentialFailures();
            logger.info("Successfully processed request: {}", input);
            return "Processed: " + input;
        });
    }

    private void simulatePotentialFailures() {
        int scenario = random.nextInt(100);
        if (scenario < 10) { // 10% slow response
            sleep(2000);
        } else if (scenario < 25) { // 15% failure rate
            throw new RuntimeException("Simulated backend failure");
        } else if (scenario < 30) { // 5% timeout scenario
            sleep(5000);
        }
        // remaining 70%: immediate success
    }

    private void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Fallback for the circuit breaker (also invoked on time-limiter timeouts)
    public CompletableFuture<String> fallback(String input, Exception e) {
        logger.warn("Using fallback for input: {}, due to: {}", input, e.getMessage());
        return CompletableFuture.completedFuture("Fallback response for: " + input);
    }

    // Fallback once all retry attempts are exhausted
    public CompletableFuture<String> retryFallback(String input, Exception e) {
        logger.error("All retries exhausted for input: {}", input);
        return CompletableFuture.completedFuture("Retry exhausted for: " + input);
    }
}
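
Note that when several Resilience4j annotations are stacked on one method, the aspects are applied in a fixed default order documented by Resilience4j: Retry is the outermost decorator, wrapping CircuitBreaker, RateLimiter, TimeLimiter, and finally Bulkhead around the actual call. Keep this in mind when reading the metrics: one logical request may appear as several circuit breaker calls, since each retry attempt passes through the circuit breaker.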

4. REST Controller

Create a controller to expose the service:

import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api")
public class DemoController {

    private final BackendService backendService;

    public DemoController(BackendService backendService) {
        this.backendService = backendService;
    }

    @GetMapping("/process/{input}")
    public CompletableFuture<ResponseEntity<Map<String, String>>> processInput(
            @PathVariable String input) {
        return backendService.processRequest(input)
                .thenApply(result -> ResponseEntity.ok(Map.of(
                        "status", "success",
                        "result", result,
                        "timestamp", Instant.now().toString())))
                .exceptionally(throwable -> ResponseEntity.status(503).body(Map.of(
                        "status", "error",
                        "message", "Service unavailable",
                        "timestamp", Instant.now().toString())));
    }

    @GetMapping("/batch-process")
    public CompletableFuture<ResponseEntity<Map<String, Object>>> batchProcess() {
        List<CompletableFuture<String>> futures = new ArrayList<>();
        // Fire concurrent requests to exercise the bulkhead and rate limiter
        for (int i = 0; i < 10; i++) {
            futures.add(backendService.processRequest("item-" + i));
        }
        return CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
                .thenApply(v -> {
                    List<String> results = futures.stream()
                            .map(CompletableFuture::join)
                            .collect(Collectors.toList());
                    return ResponseEntity.ok(Map.of(
                            "processed", results.size(),
                            "results", results,
                            "timestamp", Instant.now().toString()));
                });
    }
}
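
To see the metrics move, you need traffic. A minimal sketch of a standalone load generator using the JDK's built-in HttpClient (the /api/process path and localhost:8080 port match the demo above; adjust as needed):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LoadGenerator {

    public static void main(String[] args) throws InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        // Fire a steady stream of requests so the resilience metrics have data
        for (int i = 0; i < 500; i++) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/api/process/load-" + i))
                    .GET()
                    .build();
            client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                    .thenAccept(response ->
                            System.out.println(response.statusCode() + " " + response.body()));
            Thread.sleep(100); // roughly 10 requests per second
        }
    }
}

Run it while watching /actuator/prometheus (or a Grafana panel) and you should see the circuit breaker open and close as the simulated failures cluster.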

5. Custom Metrics Configuration (Optional)

For more advanced metric collection, create a custom metrics configuration:

import org.springframework.context.annotation.Configuration;

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import jakarta.annotation.PostConstruct;

@Configuration
public class ResilienceMetricsConfig {

    private final MeterRegistry meterRegistry;

    public ResilienceMetricsConfig(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
    }

    @PostConstruct
    public void init() {
        // Gauge backed by this bean; the function is re-evaluated on every scrape
        Gauge.builder("resilience4j.custom.circuit_breaker.health", this,
                        config -> config.calculateCircuitBreakerHealth())
                .description("Custom circuit breaker health indicator")
                .tag("application", "resilience4j-demo")
                .register(meterRegistry);
    }

    private double calculateCircuitBreakerHealth() {
        // Implement custom health logic, e.g. based on recent error rates
        return 1.0; // Placeholder
    }
}
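
Beyond gauges, you can hook into Resilience4j's event publishers to derive your own counters. A sketch (the metric name resilience4j.custom.state_transitions is made up for illustration) that counts circuit breaker state transitions, which feeds the flapping alert discussed under best practices:

import org.springframework.stereotype.Component;

import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import jakarta.annotation.PostConstruct;

@Component
public class StateTransitionMetrics {

    private final CircuitBreakerRegistry circuitBreakerRegistry;
    private final MeterRegistry meterRegistry;

    public StateTransitionMetrics(CircuitBreakerRegistry circuitBreakerRegistry,
                                  MeterRegistry meterRegistry) {
        this.circuitBreakerRegistry = circuitBreakerRegistry;
        this.meterRegistry = meterRegistry;
    }

    @PostConstruct
    public void init() {
        CircuitBreaker circuitBreaker = circuitBreakerRegistry.circuitBreaker("backendService");
        // Increment a counter for every state transition (e.g. CLOSED_TO_OPEN)
        circuitBreaker.getEventPublisher().onStateTransition(event ->
                Counter.builder("resilience4j.custom.state_transitions")
                        .tag("name", event.getCircuitBreakerName())
                        .tag("transition", event.getStateTransition().toString())
                        .register(meterRegistry)
                        .increment());
    }
}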

Prometheus Metrics Explained

When you access /actuator/prometheus, you'll see metrics like:

Circuit Breaker Metrics

# Circuit Breaker state (one time series per state; the active state has value 1)
resilience4j_circuitbreaker_state{name="backendService",state="closed",} 1.0
resilience4j_circuitbreaker_state{name="backendService",state="open",} 0.0
resilience4j_circuitbreaker_state{name="backendService",state="half_open",} 0.0
# Circuit Breaker calls (published as a timer, broken down by kind)
resilience4j_circuitbreaker_calls_seconds_count{name="backendService",kind="successful",} 42.0
resilience4j_circuitbreaker_calls_seconds_count{name="backendService",kind="failed",} 8.0
resilience4j_circuitbreaker_calls_seconds_count{name="backendService",kind="ignored",} 2.0
# Failure Rate (in percent; reports -1 until the sliding window has enough calls)
resilience4j_circuitbreaker_failure_rate{name="backendService",} 16.0
# Slow Call Rate
resilience4j_circuitbreaker_slow_call_rate{name="backendService",} 5.0

Rate Limiter Metrics

# Available Permissions
resilience4j_ratelimiter_available_permissions{name="pricingService",} 3.0
# Waiting Threads
resilience4j_ratelimiter_waiting_threads{name="pricingService",} 0.0

Bulkhead Metrics

# Available Concurrent Calls
resilience4j_bulkhead_available_concurrent_calls{name="databaseService",} 3.0
# Maximum Allowed Concurrent Calls
resilience4j_bulkhead_max_allowed_concurrent_calls{name="databaseService",} 5.0

Retry Metrics

# Retry Outcomes (counters, broken down by kind)
resilience4j_retry_calls_total{name="externalApi",kind="successful_without_retry",} 35.0
resilience4j_retry_calls_total{name="externalApi",kind="successful_with_retry",} 7.0
resilience4j_retry_calls_total{name="externalApi",kind="failed_without_retry",} 3.0
resilience4j_retry_calls_total{name="externalApi",kind="failed_with_retry",} 2.0

Prometheus Configuration

Create a prometheus.yml to scrape your application:

global:
  scrape_interval: 15s
  evaluation_interval: 15s

rule_files:
  # - "resilience4j_alerts.yml"  # see the alerting rules section below

scrape_configs:
  - job_name: 'resilience4j-demo'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:8080']
        labels:
          application: 'resilience4j-demo'
          environment: 'development'

  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

Grafana Dashboard Queries

Create a comprehensive dashboard with these key queries:

1. Circuit Breaker Status

# Circuit Breaker State (the "open" series is 1 while the breaker is open)
resilience4j_circuitbreaker_state{name="backendService",state="open"}
# Failure Rate
resilience4j_circuitbreaker_failure_rate{name="backendService"}
# Call Volume by outcome
sum by (kind) (rate(resilience4j_circuitbreaker_calls_seconds_count{name="backendService"}[5m]))

2. Rate Limiter Utilization

# Available Permissions Over Time
resilience4j_ratelimiter_available_permissions{name="pricingService"}
# Permission Utilization Rate (limit-for-period is 5 in our config)
(5 - resilience4j_ratelimiter_available_permissions{name="pricingService"}) / 5 * 100

3. Bulkhead Concurrency

# In-Flight Calls (max allowed minus currently available)
resilience4j_bulkhead_max_allowed_concurrent_calls{name="databaseService"} - resilience4j_bulkhead_available_concurrent_calls{name="databaseService"}
# Capacity Utilization (max-concurrent-calls is 5 in our config)
(resilience4j_bulkhead_max_allowed_concurrent_calls{name="databaseService"} - resilience4j_bulkhead_available_concurrent_calls{name="databaseService"}) / 5 * 100

4. Retry Effectiveness

# Retry Success Rate
sum(rate(resilience4j_retry_calls_total{name="externalApi",kind=~"successful.*"}[5m]))
/
sum(rate(resilience4j_retry_calls_total{name="externalApi"}[5m]))
* 100

Alerting Rules

Create Prometheus alerting rules for critical scenarios (for example in a resilience4j_alerts.yml file referenced from rule_files in prometheus.yml):

groups:
  - name: resilience4j_alerts
    rules:
      - alert: CircuitBreakerOpen
        expr: resilience4j_circuitbreaker_state{name="backendService",state="open"} == 1
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Circuit breaker for {{ $labels.name }} is OPEN"
          description: "The circuit breaker has been open for more than 1 minute. Service degradation detected."
      - alert: HighFailureRate
        expr: resilience4j_circuitbreaker_failure_rate{name="backendService"} > 30
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "High failure rate for {{ $labels.name }}"
          description: "Failure rate is {{ $value }}%, approaching the circuit breaker threshold."
      - alert: RateLimiterExhausted
        expr: resilience4j_ratelimiter_available_permissions{name="pricingService"} == 0
        for: 30s
        labels:
          severity: warning
        annotations:
          summary: "Rate limiter exhausted for {{ $labels.name }}"
          description: "No available permissions in the rate limiter."

Best Practices

  1. Use Meaningful Names: Choose descriptive circuit breaker and rate limiter names
  2. Set Appropriate Thresholds: Align failure-rate thresholds with your SLOs
  3. Monitor State Transitions: Alert on frequent OPEN/CLOSED transitions (flapping)
  4. Correlate with Business Metrics: Link resilience metrics to business KPIs
  5. Test Resilience: Use chaos engineering to validate your monitoring (a starter sketch follows this list)
  6. Set Up Dashboards: Create team-specific dashboards for different services
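
As a starting point for practice 5, here is a minimal, hypothetical chaos toggle (the FailureToggle class and /chaos endpoint are inventions for illustration, not part of Resilience4j). Flip it on in a test environment and confirm that the CircuitBreakerOpen alert actually fires:

import java.util.concurrent.atomic.AtomicBoolean;

import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical helper: forces failures so you can verify dashboards and alerts
@Component
class FailureToggle {

    private final AtomicBoolean forceFailures = new AtomicBoolean(false);

    public void set(boolean enabled) {
        forceFailures.set(enabled);
    }

    // Call this at the top of simulatePotentialFailures() in BackendService
    public void failIfEnabled() {
        if (forceFailures.get()) {
            throw new RuntimeException("Chaos-injected failure");
        }
    }
}

@RestController
class ChaosController {

    private final FailureToggle toggle;

    ChaosController(FailureToggle toggle) {
        this.toggle = toggle;
    }

    @PostMapping("/chaos")
    public String setChaos(@RequestParam boolean enabled) {
        toggle.set(enabled);
        return "Chaos mode: " + (enabled ? "ON" : "OFF");
    }
}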

Conclusion

Integrating Resilience4j metrics with Prometheus provides crucial visibility into your application's resilience patterns. By monitoring circuit breaker states, rate limiter utilization, retry effectiveness, and bulkhead concurrency, you can:

  • Detect service degradation before it affects users
  • Validate your resilience configuration matches real-world conditions
  • Make data-driven decisions about timeout and retry configurations
  • Create actionable alerts for operational teams

The combination of Resilience4j's powerful patterns and Prometheus's robust monitoring creates a foundation for building truly observable and resilient microservices. With proper dashboards and alerting, you can confidently deploy circuit breakers and other resilience patterns, knowing you'll be alerted when they're needed most.
