PromQL for Java Metrics: Complete Implementation Guide

Introduction

PromQL (Prometheus Query Language) is a powerful functional query language that enables rich analytics and alerting on time series data. When combined with Java application metrics, it provides deep insights into application performance, business metrics, and system health.
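As a taste of what this looks like in practice, the query below turns the raw request counter that Spring Boot's actuator exposes (the metric name is Micrometer's default for HTTP server requests) into a per-endpoint 5xx error ratio over a five-minute window:

```promql
# Fraction of server errors per URI over the last 5 minutes
sum by (uri) (rate(http_server_requests_seconds_count{status=~"5.."}[5m]))
/
sum by (uri) (rate(http_server_requests_seconds_count[5m]))
```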


Architecture Overview

[Java Application] → [Micrometer Metrics] → [Prometheus] → [PromQL Queries] → [Visualization]
        ↓                     ↓                   ↓               ↓                  ↓
  Spring Boot             Registry         Time series DB   Query language    Grafana dashboards
  Custom metrics          Exporters        Storage          Alerting          Alerts
  Business metrics        Formatting       HTTP API         Analytics         Analysis
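Prometheus pulls metrics rather than receiving them, so the middle of this pipeline is a scrape job pointed at the actuator endpoint. A minimal sketch (the host, port, and interval are assumptions for a local run):

```yaml
# prometheus.yml — scrape the Spring Boot actuator endpoint
scrape_configs:
  - job_name: "order-service"
    metrics_path: /actuator/prometheus
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:8080"]
```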

Step 1: Project Dependencies and Configuration

Maven Configuration

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
<modelVersion>4.0.0</modelVersion>
<groupId>com.example</groupId>
<artifactId>promql-java-metrics</artifactId>
<version>1.0.0</version>
<packaging>jar</packaging>
<properties>
<maven.compiler.source>17</maven.compiler.source>
<maven.compiler.target>17</maven.compiler.target>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<spring-boot.version>3.2.0</spring-boot.version>
<micrometer.version>1.12.0</micrometer.version>
<prometheus.version>0.16.0</prometheus.version>
</properties>
<dependencies>
<!-- Spring Boot -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<version>${spring-boot.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
<version>${spring-boot.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-jpa</artifactId>
<version>${spring-boot.version}</version>
</dependency>
<!-- Metrics -->
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-core</artifactId>
<version>${micrometer.version}</version>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
<version>${micrometer.version}</version>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-observation</artifactId>
<version>${micrometer.version}</version>
</dependency>
<!-- Prometheus Client -->
<dependency>
<groupId>io.prometheus</groupId>
<artifactId>simpleclient</artifactId>
<version>${prometheus.version}</version>
</dependency>
<dependency>
<groupId>io.prometheus</groupId>
<artifactId>simpleclient_hotspot</artifactId>
<version>${prometheus.version}</version>
</dependency>
<dependency>
<groupId>io.prometheus</groupId>
<artifactId>simpleclient_httpserver</artifactId>
<version>${prometheus.version}</version>
</dependency>
<!-- Database -->
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<scope>runtime</scope>
</dependency>
<!-- Cache -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
<version>${spring-boot.version}</version>
</dependency>
<dependency>
<groupId>com.github.ben-manes.caffeine</groupId>
<artifactId>caffeine</artifactId>
<version>3.1.8</version>
</dependency>
<!-- JSON -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.15.0</version>
</dependency>
</dependencies>
</project>

Application Configuration

application.yml

spring:
  application:
    name: order-service
  datasource:
    url: jdbc:postgresql://localhost:5432/orders
    username: postgres
    password: password
  jpa:
    hibernate:
      ddl-auto: validate
    show-sql: false
  cache:
    type: caffeine
    caffeine:
      spec: maximumSize=1000,expireAfterWrite=300s

management:
  endpoints:
    enabled-by-default: true
    web:
      exposure:
        include: health,metrics,prometheus,info
  endpoint:
    health:
      show-details: always
      show-components: always
    metrics:
      enabled: true
    prometheus:
      enabled: true
  # Since Spring Boot 3.x the Prometheus export keys live under management.prometheus
  prometheus:
    metrics:
      export:
        enabled: true
        step: 30s
  metrics:
    distribution:
      percentiles-histogram:
        http.server.requests: true
      percentiles:
        http.server.requests: 0.5, 0.95, 0.99
      # "sla" was renamed to "slo" in current Micrometer/Spring Boot versions
      slo:
        http.server.requests: 100ms, 500ms, 1s
    tags:
      application: ${spring.application.name}
      environment: ${ENVIRONMENT:development}
      region: ${REGION:us-east-1}

server:
  port: 8080

logging:
  level:
    com.example.metrics: DEBUG
  pattern:
    level: "%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]"

# Custom metrics configuration
app:
  metrics:
    enabled: true
    business:
      enabled: true
    cache:
      enabled: true
    database:
      enabled: true

Step 2: Comprehensive Metrics Configuration

Metrics Configuration Class

MetricsConfig.java

package com.example.metrics.config;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.config.MeterFilter;
import io.micrometer.core.instrument.distribution.DistributionStatisticConfig;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import io.prometheus.client.CollectorRegistry;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import java.time.Duration;
@Configuration
public class MetricsConfig {
@Bean
public MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
return registry -> registry.config()
.commonTags(
"application", "order-service",
"environment", System.getenv().getOrDefault("ENVIRONMENT", "development"),
"region", System.getenv().getOrDefault("REGION", "us-east-1"),
"version", "1.0.0"
);
}
@Bean
public MeterFilter enableHistograms() {
// Micrometer has no MeterFilter.enableHistograms(...) factory method; turn on
// percentile histograms for selected meters by merging a DistributionStatisticConfig.
return new MeterFilter() {
@Override
public DistributionStatisticConfig configure(Meter.Id id, DistributionStatisticConfig config) {
if (id.getName().equals("http.server.requests") || id.getName().equals("order.processing.time")) {
return DistributionStatisticConfig.builder().percentilesHistogram(true).build().merge(config);
}
return config;
}
};
}
@Bean
public MeterFilter renameTags() {
// Renames the "status" tag to "outcome" on meters whose name starts with "order"
return MeterFilter.renameTag("order", "status", "outcome");
}
@Bean
public PrometheusMeterRegistry prometheusMeterRegistry(PrometheusConfig config, 
CollectorRegistry collectorRegistry) {
return new PrometheusMeterRegistry(config, collectorRegistry, io.micrometer.core.instrument.Clock.SYSTEM);
}
@Bean
public MeterFilter distributionStatisticConfig() {
// A bare DistributionStatisticConfig bean is never consulted by Micrometer;
// the config has to be applied to meters through a MeterFilter.
return new MeterFilter() {
@Override
public DistributionStatisticConfig configure(Meter.Id id, DistributionStatisticConfig config) {
if (!id.getName().startsWith("order.")) {
return config;
}
return DistributionStatisticConfig.builder()
.percentilesHistogram(true)
.percentiles(0.5, 0.75, 0.95, 0.99)
.serviceLevelObjectives(
Duration.ofMillis(100).toNanos(),
Duration.ofMillis(500).toNanos(),
Duration.ofSeconds(1).toNanos()
)
.minimumExpectedValue((double) Duration.ofMillis(1).toNanos())
.maximumExpectedValue((double) Duration.ofSeconds(30).toNanos())
.expiry(Duration.ofMinutes(5))
.bufferLength(3)
.build()
.merge(config);
}
};
}
}

Custom Metrics Registry

CustomMetricsRegistry.java

package com.example.metrics.registry;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.DistributionSummary;
import io.micrometer.core.instrument.LongTaskTimer;
import org.springframework.stereotype.Component;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
@Component
public class CustomMetricsRegistry {
private final MeterRegistry meterRegistry;
private final ConcurrentHashMap<String, Counter> counters;
private final ConcurrentHashMap<String, Gauge> gauges;
private final ConcurrentHashMap<String, Timer> timers;
public CustomMetricsRegistry(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
this.counters = new ConcurrentHashMap<>();
this.gauges = new ConcurrentHashMap<>();
this.timers = new ConcurrentHashMap<>();
initializeBaseMetrics();
}
private void initializeBaseMetrics() {
// Application info gauge: a constant 1 whose tags carry the metadata
Gauge.builder("app.info", () -> 1)
.description("Application information")
.tag("version", "1.0.0")
.register(meterRegistry);
}
// Counter methods
public void incrementCounter(String name, String... tags) {
Counter counter = counters.computeIfAbsent(name, k -> 
Counter.builder(name)
.description("Counter for " + name)
.tags(tags)
.register(meterRegistry)
);
counter.increment();
}
public void incrementCounter(String name, double amount, String... tags) {
Counter counter = counters.computeIfAbsent(name, k -> 
Counter.builder(name)
.description("Counter for " + name)
.tags(tags)
.register(meterRegistry)
);
counter.increment(amount);
}
// Gauge methods
private final ConcurrentHashMap<String, AtomicLong> gaugeValues = new ConcurrentHashMap<>();
public void setGauge(String name, double value, String... tags) {
// Register the backing AtomicLong once, then update it on later calls; creating a
// fresh AtomicLong per call would leave the gauge frozen at its first value.
AtomicLong gaugeValue = gaugeValues.computeIfAbsent(name, k -> {
AtomicLong holder = new AtomicLong();
Gauge.builder(name, holder, AtomicLong::doubleValue)
.description("Gauge for " + name)
.tags(tags)
.register(meterRegistry);
return holder;
});
gaugeValue.set((long) value);
}
public <T> void registerGauge(String name, T obj, java.util.function.ToDoubleFunction<T> valueFunction, String... tags) {
Gauge.builder(name)
.description("Gauge for " + name)
.tags(tags)
.register(meterRegistry, obj, valueFunction);
}
// Timer methods. Meters are cached by name only, so the tags passed on the first
// call for a given name win; later calls with other tag values reuse that meter.
public Timer.Sample startTimer(String name, String... tags) {
timers.computeIfAbsent(name, k -> 
Timer.builder(name)
.description("Timer for " + name)
.tags(tags)
.publishPercentiles(0.5, 0.95, 0.99)
.publishPercentileHistogram()
.register(meterRegistry)
);
return Timer.start(meterRegistry);
}
public void stopTimer(Timer.Sample sample, String name, String... tags) {
Timer timer = timers.computeIfAbsent(name, k -> 
Timer.builder(name)
.description("Timer for " + name)
.tags(tags)
.publishPercentiles(0.5, 0.95, 0.99)
.publishPercentileHistogram()
.register(meterRegistry)
);
sample.stop(timer);
}
public void recordTimer(String name, long amount, TimeUnit unit, String... tags) {
Timer timer = timers.computeIfAbsent(name, k -> 
Timer.builder(name)
.description("Timer for " + name)
.tags(tags)
.publishPercentiles(0.5, 0.95, 0.99)
.publishPercentileHistogram()
.register(meterRegistry)
);
// Timer.record(long, TimeUnit) returns void, so this method does too
timer.record(amount, unit);
}
// Distribution Summary methods
public void recordDistribution(String name, double amount, String... tags) {
DistributionSummary summary = DistributionSummary.builder(name)
.description("Distribution summary for " + name)
.tags(tags)
.publishPercentiles(0.5, 0.95, 0.99)
.register(meterRegistry);
summary.record(amount);
}
// Long Task Timer methods
public LongTaskTimer.Sample startLongTimer(String name, String... tags) {
LongTaskTimer timer = LongTaskTimer.builder(name)
.description("Long task timer for " + name)
.tags(tags)
.register(meterRegistry);
return timer.start();
}
}
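One caveat worth spelling out: every lookup in this registry keys the ConcurrentHashMap by metric name only, so the tag varargs are honored only on the first call for a given name — `computeIfAbsent` never re-runs its mapping function once an entry exists. A stdlib-only sketch of that behavior:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ComputeIfAbsentPitfall {
    public static void main(String[] args) {
        Map<String, String> meters = new ConcurrentHashMap<>();
        // First call wins: the mapping function runs and stores "type=web"
        meters.computeIfAbsent("orders.total", k -> k + "{type=web}");
        // Mapping already present, so this lambda never runs; "type=mobile" is lost
        meters.computeIfAbsent("orders.total", k -> k + "{type=mobile}");
        System.out.println(meters.get("orders.total")); // orders.total{type=web}
    }
}
```

In the registry above this means a caller passing different tag values for the same metric name silently reuses the first meter; include the tag values in the map key if distinct tag sets must produce distinct time series.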

Step 3: Business Metrics Implementation

Order Service with Metrics

OrderService.java

package com.example.metrics.service;
import com.example.metrics.registry.CustomMetricsRegistry;
import io.micrometer.core.instrument.Timer;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import java.util.Random;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;
@Service
public class OrderService {
private static final Logger logger = LoggerFactory.getLogger(OrderService.class);
private final CustomMetricsRegistry metricsRegistry;
private final Random random = new Random();
// Simulated database or external service calls
private final AtomicInteger activeOrders = new AtomicInteger(0);
private final AtomicLong totalRevenue = new AtomicLong(0);
public OrderService(CustomMetricsRegistry metricsRegistry) {
this.metricsRegistry = metricsRegistry;
// Register custom gauges
metricsRegistry.registerGauge("orders.active", activeOrders, AtomicInteger::get);
metricsRegistry.registerGauge("orders.revenue.total", totalRevenue, AtomicLong::get);
}
public Order processOrder(OrderRequest request) {
Timer.Sample timer = metricsRegistry.startTimer("order.processing.time", 
"order_type", request.getType(),
"customer_tier", request.getCustomerTier());
try {
// Increment active orders
activeOrders.incrementAndGet();
metricsRegistry.incrementCounter("orders.processing.total", 
"order_type", request.getType());
// Validate order
if (!validateOrder(request)) {
metricsRegistry.incrementCounter("orders.validation.failed",
"order_type", request.getType(),
"reason", "validation_error");
throw new OrderValidationException("Order validation failed");
}
// Process payment
PaymentResult payment = processPayment(request);
if (!payment.isSuccess()) {
metricsRegistry.incrementCounter("orders.payment.failed",
"order_type", request.getType(),
"payment_gateway", payment.getGateway(),
"error_code", payment.getErrorCode());
throw new PaymentProcessingException("Payment processing failed");
}
// Record successes too, so payment success-rate queries have both series to work with
metricsRegistry.incrementCounter("orders.payment.success",
"order_type", request.getType(),
"payment_gateway", payment.getGateway());
// Update inventory
updateInventory(request);
// Calculate and record revenue
double revenue = calculateRevenue(request);
totalRevenue.addAndGet((long) revenue);
metricsRegistry.recordDistribution("orders.revenue.amount", revenue,
"order_type", request.getType(),
"customer_tier", request.getCustomerTier());
// Create order
Order order = createOrder(request, payment);
// Record successful order
metricsRegistry.incrementCounter("orders.completed.success",
"order_type", request.getType(),
"customer_tier", request.getCustomerTier());
logger.info("Order processed successfully: {}", order.getId());
return order;
} catch (Exception e) {
metricsRegistry.incrementCounter("orders.completed.failed",
"order_type", request.getType(),
"error_type", e.getClass().getSimpleName());
throw e;
} finally {
// Decrement active orders and record processing time
activeOrders.decrementAndGet();
metricsRegistry.stopTimer(timer, "order.processing.time",
"order_type", request.getType(),
"customer_tier", request.getCustomerTier());
}
}
public CompletableFuture<Order> processOrderAsync(OrderRequest request) {
return CompletableFuture.supplyAsync(() -> {
Timer.Sample timer = metricsRegistry.startTimer("order.processing.async.time",
"order_type", request.getType());
try {
return processOrder(request);
} finally {
metricsRegistry.stopTimer(timer, "order.processing.async.time",
"order_type", request.getType());
}
});
}
public OrderStatus getOrderStatus(String orderId) {
Timer.Sample timer = metricsRegistry.startTimer("order.status.query.time");
try {
// Simulate database query
simulateProcessing(50, 150);
OrderStatus status = queryOrderStatus(orderId);
metricsRegistry.incrementCounter("order.status.queries",
"status", status.name());
return status;
} finally {
metricsRegistry.stopTimer(timer, "order.status.query.time");
}
}
public void cancelOrder(String orderId, String reason) {
Timer.Sample timer = metricsRegistry.startTimer("order.cancellation.time");
try {
simulateProcessing(20, 100);
// Business logic for cancellation
boolean success = performCancellation(orderId, reason);
if (success) {
metricsRegistry.incrementCounter("orders.cancelled",
"reason", reason);
// Update revenue (negative amount for cancellation)
double refundAmount = calculateRefundAmount(orderId);
totalRevenue.addAndGet((long) -refundAmount);
} else {
metricsRegistry.incrementCounter("orders.cancellation.failed",
"reason", reason);
}
} finally {
metricsRegistry.stopTimer(timer, "order.cancellation.time");
}
}
// Helper methods with simulated implementations
private boolean validateOrder(OrderRequest request) {
simulateProcessing(5, 20);
return random.nextDouble() > 0.05; // 5% failure rate
}
private PaymentResult processPayment(OrderRequest request) {
simulateProcessing(100, 500);
boolean success = random.nextDouble() > 0.02; // 2% failure rate
return new PaymentResult(success, "stripe", success ? null : "card_declined");
}
private void updateInventory(OrderRequest request) {
simulateProcessing(10, 50);
}
private double calculateRevenue(OrderRequest request) {
return request.getAmount() * (1 - request.getDiscount());
}
private Order createOrder(OrderRequest request, PaymentResult payment) {
simulateProcessing(20, 100);
return new Order(
"ORD-" + System.currentTimeMillis(),
request,
payment,
OrderStatus.COMPLETED
);
}
private OrderStatus queryOrderStatus(String orderId) {
OrderStatus[] statuses = OrderStatus.values();
return statuses[random.nextInt(statuses.length)];
}
private boolean performCancellation(String orderId, String reason) {
return random.nextDouble() > 0.1; // 10% failure rate
}
private double calculateRefundAmount(String orderId) {
return random.nextDouble() * 1000;
}
private void simulateProcessing(int minMs, int maxMs) {
try {
int delay = minMs + random.nextInt(maxMs - minMs);
Thread.sleep(delay);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new RuntimeException("Processing interrupted", e);
}
}
// Data classes
public static class OrderRequest {
private String type;
private String customerTier;
private double amount;
private double discount;
// Constructors, getters, setters
public OrderRequest(String type, String customerTier, double amount, double discount) {
this.type = type;
this.customerTier = customerTier;
this.amount = amount;
this.discount = discount;
}
public String getType() { return type; }
public String getCustomerTier() { return customerTier; }
public double getAmount() { return amount; }
public double getDiscount() { return discount; }
}
public static class Order {
private String id;
private OrderRequest request;
private PaymentResult payment;
private OrderStatus status;
public Order(String id, OrderRequest request, PaymentResult payment, OrderStatus status) {
this.id = id;
this.request = request;
this.payment = payment;
this.status = status;
}
public String getId() { return id; }
}
public static class PaymentResult {
private boolean success;
private String gateway;
private String errorCode;
public PaymentResult(boolean success, String gateway, String errorCode) {
this.success = success;
this.gateway = gateway;
this.errorCode = errorCode;
}
public boolean isSuccess() { return success; }
public String getGateway() { return gateway; }
public String getErrorCode() { return errorCode; }
}
public enum OrderStatus {
PENDING, PROCESSING, COMPLETED, CANCELLED, FAILED
}
public static class OrderValidationException extends RuntimeException {
public OrderValidationException(String message) { super(message); }
}
public static class PaymentProcessingException extends RuntimeException {
public PaymentProcessingException(String message) { super(message); }
}
}

Database Metrics Interceptor

DatabaseMetricsInterceptor.java

package com.example.metrics.interceptor;
import com.example.metrics.registry.CustomMetricsRegistry;
import org.hibernate.resource.jdbc.spi.StatementInspector;
import org.springframework.stereotype.Component;
import java.util.regex.Pattern;
@Component
public class DatabaseMetricsInterceptor implements StatementInspector {
private final CustomMetricsRegistry metricsRegistry;
private final Pattern selectPattern = Pattern.compile("^SELECT", Pattern.CASE_INSENSITIVE);
private final Pattern insertPattern = Pattern.compile("^INSERT", Pattern.CASE_INSENSITIVE);
private final Pattern updatePattern = Pattern.compile("^UPDATE", Pattern.CASE_INSENSITIVE);
private final Pattern deletePattern = Pattern.compile("^DELETE", Pattern.CASE_INSENSITIVE);
public DatabaseMetricsInterceptor(CustomMetricsRegistry metricsRegistry) {
this.metricsRegistry = metricsRegistry;
}
@Override
public String inspect(String sql) {
if (sql == null || sql.trim().isEmpty()) {
return sql;
}
String operation = "OTHER";
if (selectPattern.matcher(sql).find()) {
operation = "SELECT";
} else if (insertPattern.matcher(sql).find()) {
operation = "INSERT";
} else if (updatePattern.matcher(sql).find()) {
operation = "UPDATE";
} else if (deletePattern.matcher(sql).find()) {
operation = "DELETE";
}
// Record database operation metrics
metricsRegistry.incrementCounter("database.operations",
"operation", operation,
"table", extractTableName(sql));
return sql;
}
private String extractTableName(String sql) {
// Simple table name extraction (for demonstration)
// In production, use a proper SQL parser
String[] words = sql.split("\\s+");
if (words.length >= 2) {
for (int i = 0; i < words.length - 1; i++) {
if (words[i].equalsIgnoreCase("FROM") || 
words[i].equalsIgnoreCase("INTO") ||
words[i].equalsIgnoreCase("UPDATE")) {
return words[i + 1].replaceAll("[;`'\"]", "");
}
}
}
return "unknown";
}
}
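As written, nothing registers this inspector with Hibernate. The standard Hibernate property for it is `hibernate.session_factory.statement_inspector`; a minimal sketch via application properties is below. Note that this route makes Hibernate instantiate the class reflectively, which requires a no-arg constructor — to hand Hibernate the Spring-managed bean (which this class needs, given its constructor dependency), expose a `HibernatePropertiesCustomizer` bean that puts the bean instance under the same key instead.

```yaml
spring:
  jpa:
    properties:
      hibernate.session_factory.statement_inspector: com.example.metrics.interceptor.DatabaseMetricsInterceptor
```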

Step 4: Cache Metrics

Cache Metrics Configuration

CacheMetricsConfig.java

package com.example.metrics.cache;
import com.example.metrics.registry.CustomMetricsRegistry;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.RemovalCause;
import com.github.benmanes.caffeine.cache.stats.CacheStats;
import com.github.benmanes.caffeine.cache.stats.StatsCounter;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import java.time.Duration;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;
@Configuration
@EnableCaching
public class CacheMetricsConfig {
private final CustomMetricsRegistry metricsRegistry;
private final ConcurrentHashMap<String, CacheStats> cacheStatsMap = new ConcurrentHashMap<>();
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
public CacheMetricsConfig(CustomMetricsRegistry metricsRegistry) {
this.metricsRegistry = metricsRegistry;
startCacheMetricsCollection();
}
@Bean
@Primary
public CacheManager cacheManager() {
CaffeineCacheManager cacheManager = new CaffeineCacheManager();
// CaffeineCacheManager has no setCacheBuilder(...); configure the builder through
// setCaffeine and attach our StatsCounter so cache activity reaches Micrometer.
// setCaffeine overrides setCacheSpecification, so size/expiry live here as well.
cacheManager.setCaffeine(Caffeine.newBuilder()
.maximumSize(1000)
.expireAfterWrite(Duration.ofSeconds(300))
.recordStats(MetricsStatsCounter::new));
cacheManager.setAllowNullValues(false);
return cacheManager;
}
private void startCacheMetricsCollection() {
scheduler.scheduleAtFixedRate(this::collectCacheMetrics, 30, 30, TimeUnit.SECONDS);
}
private void collectCacheMetrics() {
cacheStatsMap.forEach((cacheName, stats) -> {
String[] tags = {"cache", cacheName};
// Hit rate metrics
double hitRate = stats.hitRate();
metricsRegistry.setGauge("cache.hit.rate", hitRate, tags);
// Request metrics
metricsRegistry.setGauge("cache.requests.total", stats.requestCount(), tags);
metricsRegistry.setGauge("cache.hits.total", stats.hitCount(), tags);
metricsRegistry.setGauge("cache.misses.total", stats.missCount(), tags);
// Load metrics
metricsRegistry.setGauge("cache.loads.success", stats.loadSuccessCount(), tags);
metricsRegistry.setGauge("cache.loads.failure", stats.loadFailureCount(), tags);
metricsRegistry.recordDistribution("cache.load.duration", stats.totalLoadTime(), tags);
// Eviction metrics
metricsRegistry.setGauge("cache.evictions.total", stats.evictionCount(), tags);
metricsRegistry.setGauge("cache.evictions.weight", stats.evictionWeight(), tags);
});
}
// Caffeine's StatsCounter is an interface (the original "extends" would not compile),
// and since Caffeine 3.x evictions are reported as recordEviction(weight, cause).
private class MetricsStatsCounter implements StatsCounter {
private final String cacheName;
private final LongAdder hitCount = new LongAdder();
private final LongAdder missCount = new LongAdder();
private final LongAdder loadSuccessCount = new LongAdder();
private final LongAdder loadFailureCount = new LongAdder();
private final LongAdder totalLoadTime = new LongAdder();
private final LongAdder evictionCount = new LongAdder();
private final LongAdder evictionWeight = new LongAdder();
public MetricsStatsCounter() {
this.cacheName = "default"; // Would be set based on cache name
}
@Override
public void recordHits(int count) {
hitCount.add(count);
metricsRegistry.incrementCounter("cache.operation", count, "cache", cacheName, "operation", "hit");
updateStats();
}
@Override
public void recordMisses(int count) {
missCount.add(count);
metricsRegistry.incrementCounter("cache.operation", count, "cache", cacheName, "operation", "miss");
updateStats();
}
@Override
public void recordLoadSuccess(long loadTime) {
loadSuccessCount.increment();
totalLoadTime.add(loadTime);
metricsRegistry.incrementCounter("cache.operation", 1, "cache", cacheName, "operation", "load_success");
metricsRegistry.recordDistribution("cache.load.time", loadTime, "cache", cacheName, "operation", "load_success");
updateStats();
}
@Override
public void recordLoadFailure(long loadTime) {
loadFailureCount.increment();
totalLoadTime.add(loadTime);
metricsRegistry.incrementCounter("cache.operation", 1, "cache", cacheName, "operation", "load_failure");
metricsRegistry.recordDistribution("cache.load.time", loadTime, "cache", cacheName, "operation", "load_failure");
updateStats();
}
@Override
public void recordEviction(int weight, RemovalCause cause) {
evictionCount.increment();
evictionWeight.add(weight);
metricsRegistry.incrementCounter("cache.operation", 1, "cache", cacheName, "operation", "eviction", "cause", cause.name());
metricsRegistry.recordDistribution("cache.eviction.weight", weight, "cache", cacheName, "operation", "eviction");
updateStats();
}
private void updateStats() {
// Update the stats map for periodic collection
cacheStatsMap.put(cacheName, snapshot());
}
@Override
public CacheStats snapshot() {
// CacheStats constructors are private in Caffeine 3; use the static factory
return CacheStats.of(
hitCount.sum(), missCount.sum(), loadSuccessCount.sum(), loadFailureCount.sum(),
totalLoadTime.sum(), evictionCount.sum(), evictionWeight.sum()
);
}
}
}

Step 5: Comprehensive PromQL Queries

JVM and System Metrics Queries

JvmPromqlQueries.java

package com.example.metrics.promql;
public class JvmPromqlQueries {
// Memory Usage
public static final String JVM_MEMORY_USED = 
"jvm_memory_used_bytes{area=\"heap\"}";
public static final String JVM_MEMORY_MAX = 
"jvm_memory_max_bytes{area=\"heap\"}";
public static final String JVM_MEMORY_USED_PERCENTAGE = 
"jvm_memory_used_bytes{area=\"heap\"} / jvm_memory_max_bytes{area=\"heap\"} * 100";
public static final String JVM_MEMORY_COMMITTED = 
"jvm_memory_committed_bytes{area=\"heap\"}";
// Garbage Collection
public static final String GC_PAUSE_TIME = 
"rate(jvm_gc_pause_seconds_sum[5m]) / rate(jvm_gc_pause_seconds_count[5m])";
public static final String GC_RATE = 
"rate(jvm_gc_pause_seconds_count[5m])";
public static final String GC_TOTAL_TIME = 
"jvm_gc_pause_seconds_sum";
// Threads
public static final String THREADS_ACTIVE = 
"jvm_threads_live_threads";
public static final String THREADS_DAEMON = 
"jvm_threads_daemon_threads";
public static final String THREADS_STATES = 
"jvm_threads_states_threads";
// CPU Usage (process_cpu_usage and system_cpu_usage are 0-1 gauges; rate() only
// applies to counters, so scale the gauge directly or smooth it with avg_over_time)
public static final String PROCESS_CPU_USAGE = 
"avg_over_time(process_cpu_usage[5m]) * 100";
public static final String SYSTEM_CPU_USAGE = 
"avg_over_time(system_cpu_usage[5m]) * 100";
// Class Loading
public static final String CLASSES_LOADED = 
"jvm_classes_loaded_classes";
public static final String CLASSES_UNLOADED = 
"jvm_classes_unloaded_classes_total";
// File Descriptors
public static final String FILE_DESCRIPTORS_USAGE = 
"process_open_fds / process_max_fds * 100";
public static final String FILE_DESCRIPTORS_OPEN = 
"process_open_fds";
// Logging
public static final String LOG_EVENTS = 
"rate(logback_events_total[5m])";
// Buffer Pools
public static final String BUFFER_POOL_USAGE = 
"jvm_buffer_memory_used_bytes";
public static final String BUFFER_POOL_COUNT = 
"jvm_buffer_count_buffers";
}
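The GC pause expressions above use the `_sum`/`_count` counter-pair pattern: both series are monotonic counters, and dividing their rates yields the average pause over the window. A stdlib-only sketch of that arithmetic, with made-up sample values:

```java
public class CounterPairAverage {
    // rate() of a counter over a window = (last - first) / windowSeconds
    static double rate(double first, double last, double windowSeconds) {
        return (last - first) / windowSeconds;
    }
    public static void main(String[] args) {
        double window = 300; // a 5m range
        // jvm_gc_pause_seconds_sum went 12.0 -> 13.5; _count went 100 -> 130 (made-up samples)
        double avgPause = rate(12.0, 13.5, window) / rate(100, 130, window);
        System.out.println(avgPause); // ≈ 0.05, i.e. 50ms average pause per collection
    }
}
```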

Application Business Metrics Queries

BusinessPromqlQueries.java

package com.example.metrics.promql;
public class BusinessPromqlQueries {
// Order Processing Metrics (Micrometer's Prometheus registry suffixes counters with _total)
public static final String ORDERS_PROCESSING_TOTAL = 
"rate(orders_processing_total[5m])";
public static final String ORDERS_COMPLETED_SUCCESS = 
"rate(orders_completed_success_total[5m])";
public static final String ORDERS_COMPLETED_FAILED = 
"rate(orders_completed_failed_total[5m])";
public static final String ORDER_SUCCESS_RATE = 
"rate(orders_completed_success_total[5m]) / (rate(orders_completed_success_total[5m]) + rate(orders_completed_failed_total[5m])) * 100";
public static final String ORDER_PROCESSING_DURATION_95P = 
"histogram_quantile(0.95, rate(order_processing_time_seconds_bucket[5m]))";
public static final String ORDER_PROCESSING_DURATION_AVG = 
"rate(order_processing_time_seconds_sum[5m]) / rate(order_processing_time_seconds_count[5m])";
// Revenue Metrics
public static final String REVENUE_TOTAL = 
"orders_revenue_total";
public static final String REVENUE_RATE = 
"rate(orders_revenue_amount_sum[5m])";
public static final String REVENUE_AVG_ORDER = 
"orders_revenue_amount_sum / orders_revenue_amount_count";
public static final String REVENUE_BY_CUSTOMER_TIER = 
"sum by (customer_tier) (rate(orders_revenue_amount_sum[5m]))";
// Payment Metrics
public static final String PAYMENT_SUCCESS_RATE = 
"rate(orders_payment_success_total[5m]) / (rate(orders_payment_success_total[5m]) + rate(orders_payment_failed_total[5m])) * 100";
public static final String PAYMENT_FAILURE_BY_GATEWAY = 
"sum by (payment_gateway, error_code) (rate(orders_payment_failed_total[5m]))";
// Customer Metrics
public static final String ACTIVE_CUSTOMERS = 
"customers_active_total";
public static final String NEW_CUSTOMERS_RATE = 
"rate(customers_registered_total[5m])";
public static final String CUSTOMER_CHURN_RATE = 
"rate(customers_churned_total[5m])";
// Inventory Metrics
public static final String INVENTORY_LEVELS = 
"inventory_items_available";
public static final String INVENTORY_TURNOVER = 
"rate(inventory_items_sold[5m]) / inventory_items_available";
public static final String LOW_STOCK_ALERTS = 
"inventory_items_available < 10";
// Cache Performance
public static final String CACHE_HIT_RATE = 
"cache_hits_total / (cache_hits_total + cache_misses_total) * 100";
public static final String CACHE_EVICTION_RATE = 
"rate(cache_evictions_total[5m])";
public static final String CACHE_LOAD_TIME_95P = 
"histogram_quantile(0.95, rate(cache_load_duration_seconds_bucket[5m]))";
// Database Metrics
public static final String DB_CONNECTIONS_ACTIVE = 
"database_connections_active";
public static final String DB_CONNECTIONS_IDLE = 
"database_connections_idle";
public static final String DB_QUERY_RATE = 
"rate(database_queries_total[5m])";
public static final String DB_QUERY_DURATION_95P = 
"histogram_quantile(0.95, rate(database_query_duration_seconds_bucket[5m]))";
public static final String DB_ERROR_RATE = 
"rate(database_errors_total[5m])";
}
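Several of these expressions rely on `histogram_quantile()`, which estimates a quantile by linear interpolation inside the cumulative `le` buckets a Micrometer histogram publishes. A simplified, stdlib-only re-implementation (ignoring Prometheus edge cases such as NaN buckets and empty histograms) shows the arithmetic:

```java
public class HistogramQuantileDemo {
    // le: bucket upper bounds (last entry is +Inf); counts: cumulative counts,
    // mirroring a *_seconds_bucket series such as order_processing_time_seconds_bucket
    static double histogramQuantile(double q, double[] le, double[] counts) {
        double rank = q * counts[counts.length - 1];
        int i = 0;
        while (counts[i] < rank) i++;
        if (Double.isInfinite(le[i])) {
            return le[i - 1]; // quantile falls in the +Inf bucket: return last finite bound
        }
        double lower = (i == 0) ? 0 : le[i - 1];
        double countBelow = (i == 0) ? 0 : counts[i - 1];
        double inBucket = counts[i] - countBelow;
        // Linear interpolation within the bucket, as Prometheus does
        return lower + (le[i] - lower) * (rank - countBelow) / inBucket;
    }
    public static void main(String[] args) {
        double[] le = {0.1, 0.5, 1.0, Double.POSITIVE_INFINITY};
        double[] counts = {60, 90, 99, 100}; // 100 observations total
        // The 95th percentile lands in the (0.5, 1.0] bucket: 0.5 + 0.5 * 5/9 ≈ 0.778
        System.out.println(histogramQuantile(0.95, le, counts));
    }
}
```

This is also why the p95 can only ever be as precise as the bucket layout: widen the buckets and the interpolation error grows.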

Advanced Analytics Queries

AnalyticalPromqlQueries.java

package com.example.metrics.promql;
public class AnalyticalPromqlQueries {
// SLO and Error Budget Queries
public static final String AVAILABILITY_SLO = 
"avg_over_time(((rate(http_server_requests_seconds_count{status!~\"5..\"}[5m]) or vector(0)) / (rate(http_server_requests_seconds_count[5m]) or vector(1)))[30d:]) * 100";
// Error budget remaining against an example 99.9% availability target (0.1% budget)
public static final String ERROR_BUDGET_REMAINING = 
"(1 - ((rate(http_server_requests_seconds_count{status=~\"5..\"}[5m]) or vector(0)) / (rate(http_server_requests_seconds_count[5m]) or vector(1))) / 0.001) * 100";
public static final String LATENCY_SLO = 
"histogram_quantile(0.95, rate(http_server_requests_seconds_bucket[5m])) <= 0.5";
// Business Health Score
public static final String BUSINESS_HEALTH_SCORE = 
"(" +
"(" + BusinessPromqlQueries.ORDER_SUCCESS_RATE + " * 0.4) + " +
"(" + BusinessPromqlQueries.PAYMENT_SUCCESS_RATE + " * 0.3) + " +
"(avg_over_time(cache_hit_rate[5m]) * 0.2) + " +
"((1 - rate(database_errors_total[5m]) / rate(database_queries_total[5m])) * 0.1)" +
")";
// Predictive Scaling
public static final String PREDICTED_LOAD = 
"predict_linear(rate(http_server_requests_seconds_count[1h])[1h:], 3600)";
public static final String RESOURCE_UTILIZATION_FORECAST = 
"predict_linear(container_memory_usage_bytes[1h], 3600)";
// Anomaly Detection
public static final String RESPONSE_TIME_ANOMALY = 
"abs(histogram_quantile(0.95, rate(http_server_requests_seconds_bucket[5m])) - avg_over_time(histogram_quantile(0.95, rate(http_server_requests_seconds_bucket[5m]))[1h:5m])) / stddev_over_time(histogram_quantile(0.95, rate(http_server_requests_seconds_bucket[5m]))[1h:5m]) > 2";
public static final String ERROR_RATE_SPIKE = 
"rate(http_server_requests_seconds_count{status=~\"5..\"}[2m]) / rate(http_server_requests_seconds_count[2m]) > 0.1";
// Capacity Planning
public static final String MEMORY_GROWTH_RATE = 
"rate(container_memory_usage_bytes[24h])";
public static final String CPU_GROWTH_RATE = 
"rate(container_cpu_usage_seconds_total[24h])";
// Cost Optimization (the per-byte and per-CPU-second prices are placeholder
// coefficients; the cost terms are parenthesized so the division covers both)
public static final String COST_PER_REQUEST = 
"(container_memory_usage_bytes * 0.0000000005 + rate(container_cpu_usage_seconds_total[5m]) * 0.000001) / rate(http_server_requests_seconds_count[5m])";
public static final String UNDERUTILIZED_RESOURCES = 
"container_memory_usage_bytes / container_spec_memory_limit_bytes < 0.3";
// User Experience Metrics
public static final String API_RESPONSE_TIME_PERCENTILES = 
"histogram_quantile(0.5, rate(http_server_requests_seconds_bucket[5m])), histogram_quantile(0.95, rate(http_server_requests_seconds_bucket[5m])), histogram_quantile(0.99, rate(http_server_requests_seconds_bucket[5m]))";
public static final String USER_SATISFACTION_SCORE = 
"100 - (histogram_quantile(0.95, rate(http_server_requests_seconds_bucket[5m])) * 20 + rate(http_server_requests_seconds_count{status=~\"5..\"}[5m]) / rate(http_server_requests_seconds_count[5m]) * 100)";
}
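
These query constants can be executed against Prometheus's HTTP API (`GET /api/v1/query` with a `query` parameter). The sketch below, using only the JDK, shows how a query string is URL-encoded into the request URI; the base URL `http://localhost:9090` is an assumption for a local Prometheus.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PrometheusQueryClient {

    // Assumed base URL of a local Prometheus server; adjust for your environment.
    private static final String PROMETHEUS_URL = "http://localhost:9090";

    // Builds the instant-query URI: the PromQL expression goes in the
    // URL-encoded "query" parameter of /api/v1/query.
    static URI instantQueryUri(String promql) {
        String encoded = URLEncoder.encode(promql, StandardCharsets.UTF_8);
        return URI.create(PROMETHEUS_URL + "/api/v1/query?query=" + encoded);
    }

    public static void main(String[] args) {
        URI uri = instantQueryUri("rate(http_server_requests_seconds_count[5m])");
        System.out.println(uri);
        // To actually issue the request, pass this URI to java.net.http.HttpClient
        // and parse the JSON body ({"status":"success","data":{...}}).
    }
}
```

Range queries work the same way against `/api/v1/query_range`, with additional `start`, `end`, and `step` parameters.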

Step 6: Alerting Rules

Prometheus Alerting Rules

alerts.yml

groups:
  - name: java_application
    rules:
      # Memory Alerts
      - alert: HighMemoryUsage
        expr: jvm_memory_used_bytes{area="heap"} / jvm_memory_max_bytes{area="heap"} > 0.8
        for: 5m
        labels:
          severity: warning
          team: platform
        annotations:
          summary: "High memory usage in {{ $labels.instance }}"
          description: "JVM heap memory usage has been above 80% for 5 minutes. Current value: {{ $value | humanizePercentage }}"

      - alert: OutOfMemoryImminent
        expr: jvm_memory_used_bytes{area="heap"} / jvm_memory_max_bytes{area="heap"} > 0.9
        for: 2m
        labels:
          severity: critical
          team: platform
        annotations:
          summary: "Critical memory usage in {{ $labels.instance }}"
          description: "JVM heap memory usage has been above 90% for 2 minutes. Current value: {{ $value | humanizePercentage }}"

      # GC Alerts
      - alert: HighGCPauseTime
        expr: rate(jvm_gc_pause_seconds_sum[5m]) / rate(jvm_gc_pause_seconds_count[5m]) > 1
        for: 5m
        labels:
          severity: warning
          team: platform
        annotations:
          summary: "High GC pause time in {{ $labels.instance }}"
          description: "Average GC pause time is above 1 second. Current value: {{ $value }}s"

      # Thread Alerts
      - alert: HighThreadCount
        expr: jvm_threads_live_threads > 1000
        for: 5m
        labels:
          severity: warning
          team: platform
        annotations:
          summary: "High thread count in {{ $labels.instance }}"
          description: "Thread count is above 1000. Current value: {{ $value }}"

      # Business Alerts
      - alert: HighOrderFailureRate
        expr: rate(orders_completed_failed[5m]) / (rate(orders_completed_success[5m]) + rate(orders_completed_failed[5m])) > 0.05
        for: 5m
        labels:
          severity: critical
          team: business
        annotations:
          summary: "High order failure rate in {{ $labels.instance }}"
          description: "Order failure rate is above 5%. Current value: {{ $value | humanizePercentage }}"

      - alert: LowOrderSuccessRate
        expr: rate(orders_completed_success[5m]) / (rate(orders_completed_success[5m]) + rate(orders_completed_failed[5m])) < 0.95
        for: 10m
        labels:
          severity: warning
          team: business
        annotations:
          summary: "Low order success rate in {{ $labels.instance }}"
          description: "Order success rate is below 95%. Current value: {{ $value | humanizePercentage }}"

      - alert: HighOrderProcessingTime
        expr: histogram_quantile(0.95, rate(order_processing_time_seconds_bucket[5m])) > 10
        for: 5m
        labels:
          severity: warning
          team: business
        annotations:
          summary: "High order processing time in {{ $labels.instance }}"
          description: "95th percentile order processing time is above 10 seconds. Current value: {{ $value }}s"

      # Revenue Alerts
      - alert: RevenueDrop
        expr: rate(orders_revenue_amount_sum[15m]) < rate(orders_revenue_amount_sum[15m] offset 1h) * 0.7
        for: 10m
        labels:
          severity: critical
          team: business
        annotations:
          summary: "Significant revenue drop detected"
          description: "Revenue has dropped by more than 30% compared to an hour ago"

      # Payment Alerts
      - alert: PaymentGatewayIssues
        expr: rate(orders_payment_failed[5m]) > 10
        for: 2m
        labels:
          severity: critical
          team: payments
        annotations:
          summary: "Payment gateway issues detected"
          description: "Payment failure rate is high. Current failures per second: {{ $value }}"

      # Cache Alerts
      - alert: LowCacheHitRate
        expr: rate(cache_hits_total[5m]) / (rate(cache_hits_total[5m]) + rate(cache_misses_total[5m])) < 0.8
        for: 10m
        labels:
          severity: warning
          team: platform
        annotations:
          summary: "Low cache hit rate in {{ $labels.instance }}"
          description: "Cache hit rate is below 80%. Current value: {{ $value | humanizePercentage }}"

      - alert: HighCacheEvictionRate
        expr: rate(cache_evictions_total[5m]) > 100
        for: 5m
        labels:
          severity: warning
          team: platform
        annotations:
          summary: "High cache eviction rate in {{ $labels.instance }}"
          description: "Cache eviction rate is above 100 per second. Current value: {{ $value }}"

      # Database Alerts
      - alert: HighDatabaseErrorRate
        expr: rate(database_errors_total[5m]) / rate(database_queries_total[5m]) > 0.01
        for: 5m
        labels:
          severity: warning
          team: platform
        annotations:
          summary: "High database error rate in {{ $labels.instance }}"
          description: "Database error rate is above 1%. Current value: {{ $value | humanizePercentage }}"

      - alert: SlowDatabaseQueries
        expr: histogram_quantile(0.95, rate(database_query_duration_seconds_bucket[5m])) > 5
        for: 5m
        labels:
          severity: warning
          team: platform
        annotations:
          summary: "Slow database queries in {{ $labels.instance }}"
          description: "95th percentile database query time is above 5 seconds. Current value: {{ $value }}s"

      # Application Health Alerts
      - alert: ApplicationDown
        expr: up{job="order-service"} == 0
        for: 1m
        labels:
          severity: critical
          team: platform
        annotations:
          summary: "Application {{ $labels.instance }} is down"
          description: "The application has been down for more than 1 minute"

      - alert: HighErrorRate
        expr: rate(http_server_requests_seconds_count{status=~"5.."}[5m]) / rate(http_server_requests_seconds_count[5m]) > 0.05
        for: 5m
        labels:
          severity: critical
          team: platform
        annotations:
          summary: "High HTTP error rate in {{ $labels.instance }}"
          description: "HTTP 5xx error rate is above 5%. Current value: {{ $value | humanizePercentage }}"

      - alert: HighResponseTime
        expr: histogram_quantile(0.95, rate(http_server_requests_seconds_bucket[5m])) > 2
        for: 5m
        labels:
          severity: warning
          team: platform
        annotations:
          summary: "High response time in {{ $labels.instance }}"
          description: "95th percentile response time is above 2 seconds. Current value: {{ $value }}s"

Step 7: Grafana Dashboard Queries

Comprehensive Dashboard Queries

GrafanaDashboardQueries.java

package com.example.metrics.grafana;

public class GrafanaDashboardQueries {

    // Overview Panel Queries
    public static class Overview {
        public static final String REQUESTS_PER_SECOND =
            "sum(rate(http_server_requests_seconds_count[1m]))";
        public static final String ERROR_RATE =
            "sum(rate(http_server_requests_seconds_count{status=~\"5..\"}[1m])) / sum(rate(http_server_requests_seconds_count[1m])) * 100";
        public static final String SUCCESS_RATE =
            "100 - (sum(rate(http_server_requests_seconds_count{status=~\"5..\"}[1m])) / sum(rate(http_server_requests_seconds_count[1m])) * 100)";
        public static final String AVG_RESPONSE_TIME =
            "sum(rate(http_server_requests_seconds_sum[1m])) / sum(rate(http_server_requests_seconds_count[1m]))";
        public static final String ACTIVE_ORDERS =
            "orders_active";
        public static final String REVENUE_RATE =
            "rate(orders_revenue_amount_sum[1m])";
    }

    // JVM Performance Panel
    public static class JvmPerformance {
        public static final String HEAP_MEMORY_USAGE =
            "jvm_memory_used_bytes{area=\"heap\"}";
        public static final String HEAP_MEMORY_MAX =
            "jvm_memory_max_bytes{area=\"heap\"}";
        public static final String NON_HEAP_MEMORY_USAGE =
            "jvm_memory_used_bytes{area=\"nonheap\"}";
        public static final String GC_PAUSE_TIME =
            "rate(jvm_gc_pause_seconds_sum[1m])";
        public static final String THREAD_COUNT =
            "jvm_threads_live_threads";
        // process_cpu_usage is a gauge in [0, 1], so scale it directly instead of applying rate()
        public static final String CPU_USAGE =
            "process_cpu_usage * 100";
    }

    // Business Metrics Panel
    public static class BusinessMetrics {
        // rate() is per second, so scale by 60 for a per-minute figure
        public static final String ORDERS_PER_MINUTE =
            "sum(rate(orders_processing_total[1m])) * 60";
        public static final String ORDER_SUCCESS_RATE =
            "sum(rate(orders_completed_success[1m])) / (sum(rate(orders_completed_success[1m])) + sum(rate(orders_completed_failed[1m]))) * 100";
        public static final String REVENUE_PER_MINUTE =
            "sum(rate(orders_revenue_amount_sum[1m])) * 60";
        public static final String AVERAGE_ORDER_VALUE =
            "sum(rate(orders_revenue_amount_sum[1m])) / sum(rate(orders_revenue_amount_count[1m]))";
        public static final String PAYMENT_SUCCESS_RATE =
            "sum(rate(orders_payment_success[1m])) / (sum(rate(orders_payment_success[1m])) + sum(rate(orders_payment_failed[1m]))) * 100";
    }

    // Cache Performance Panel
    public static class CachePerformance {
        public static final String HIT_RATE =
            "sum(rate(cache_hits_total[1m])) / (sum(rate(cache_hits_total[1m])) + sum(rate(cache_misses_total[1m]))) * 100";
        public static final String REQUESTS_PER_SECOND =
            "sum(rate(cache_operations_total[1m]))";
        public static final String EVICTION_RATE =
            "sum(rate(cache_evictions_total[1m]))";
        public static final String LOAD_TIME_95P =
            "histogram_quantile(0.95, sum(rate(cache_load_duration_seconds_bucket[1m])) by (le))";
    }

    // Database Performance Panel
    public static class DatabasePerformance {
        public static final String QUERIES_PER_SECOND =
            "sum(rate(database_operations_total[1m]))";
        public static final String ERROR_RATE =
            "sum(rate(database_errors_total[1m])) / sum(rate(database_operations_total[1m])) * 100";
        public static final String QUERY_DURATION_95P =
            "histogram_quantile(0.95, sum(rate(database_query_duration_seconds_bucket[1m])) by (le))";
        public static final String ACTIVE_CONNECTIONS =
            "database_connections_active";
    }

    // Custom Business Analytics
    public static class BusinessAnalytics {
        public static final String HOURLY_REVENUE_TREND =
            "sum by (hour) (rate(orders_revenue_amount_sum[1h]))";
        public static final String CUSTOMER_TIER_REVENUE =
            "sum by (customer_tier) (rate(orders_revenue_amount_sum[1h]))";
        public static final String ORDER_TYPE_DISTRIBUTION =
            "sum by (order_type) (rate(orders_processing_total[1h]))";
        // Three separate panel queries, one per funnel stage
        public static final String CONVERSION_FUNNEL =
            "rate(orders_processing_total[1h]), rate(orders_completed_success[1h]), rate(orders_delivered_total[1h])";
    }
}
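
These constants drop into Grafana as panel `targets`. The fragment below is a sketch of a minimal time-series panel using the Overview throughput query; the datasource `uid` of `prometheus` is an assumption and should match your provisioned Prometheus datasource.

```json
{
  "title": "Requests per Second",
  "type": "timeseries",
  "datasource": { "type": "prometheus", "uid": "prometheus" },
  "targets": [
    {
      "expr": "sum(rate(http_server_requests_seconds_count[1m]))",
      "legendFormat": "req/s",
      "refId": "A"
    }
  ]
}
```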

Step 8: Metrics Export and Testing

Metrics Controller for Testing

MetricsTestController.java

package com.example.metrics.controller;

import com.example.metrics.registry.CustomMetricsRegistry;
import com.example.metrics.service.OrderService;
import org.springframework.web.bind.annotation.*;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

@RestController
@RequestMapping("/api/test/metrics")
public class MetricsTestController {

    private final CustomMetricsRegistry metricsRegistry;
    private final OrderService orderService;

    public MetricsTestController(CustomMetricsRegistry metricsRegistry, OrderService orderService) {
        this.metricsRegistry = metricsRegistry;
        this.orderService = orderService;
    }

    @PostMapping("/orders/generate")
    public Map<String, Object> generateTestOrders(@RequestParam(defaultValue = "10") int count) {
        Map<String, Object> result = new HashMap<>();
        for (int i = 0; i < count; i++) {
            try {
                OrderService.OrderRequest request = new OrderService.OrderRequest(
                    i % 2 == 0 ? "standard" : "premium",
                    i % 3 == 0 ? "basic" : i % 3 == 1 ? "premium" : "enterprise",
                    100 + (i * 10),
                    i % 5 == 0 ? 0.1 : 0.0
                );
                if (i % 4 == 0) {
                    // Process some orders asynchronously
                    CompletableFuture<OrderService.Order> future = orderService.processOrderAsync(request);
                    future.thenAccept(order ->
                        metricsRegistry.incrementCounter("test.orders.async.completed"));
                } else {
                    OrderService.Order order = orderService.processOrder(request);
                    metricsRegistry.incrementCounter("test.orders.sync.completed");
                }
                // Simulate some failures
                if (i % 7 == 0) {
                    throw new RuntimeException("Simulated failure");
                }
            } catch (Exception e) {
                metricsRegistry.incrementCounter("test.orders.failed",
                    "error_type", e.getClass().getSimpleName());
            }
        }
        result.put("generated", count);
        result.put("message", "Test orders generated successfully");
        return result;
    }

    @PostMapping("/cache/operations")
    public Map<String, Object> generateCacheOperations(@RequestParam(defaultValue = "100") int operations) {
        for (int i = 0; i < operations; i++) {
            String operation = i % 4 == 0 ? "hit" : i % 4 == 1 ? "miss" : i % 4 == 2 ? "eviction" : "load";
            metricsRegistry.incrementCounter("test.cache.operations",
                "operation", operation);
        }
        return Map.of(
            "operations", operations,
            "message", "Cache operations simulated"
        );
    }

    @PostMapping("/database/operations")
    public Map<String, Object> generateDatabaseOperations(@RequestParam(defaultValue = "50") int operations) {
        for (int i = 0; i < operations; i++) {
            String operation = i % 4 == 0 ? "SELECT" : i % 4 == 1 ? "INSERT" : i % 4 == 2 ? "UPDATE" : "DELETE";
            metricsRegistry.incrementCounter("test.database.operations",
                "operation", operation,
                "table", i % 3 == 0 ? "users" : i % 3 == 1 ? "orders" : "products");
            // Simulate some errors
            if (i % 10 == 0) {
                metricsRegistry.incrementCounter("test.database.errors",
                    "operation", operation,
                    "error_code", "SIMULATED_ERROR");
            }
        }
        return Map.of(
            "operations", operations,
            "message", "Database operations simulated"
        );
    }

    @GetMapping("/custom/gauge/{value}")
    public Map<String, Object> setCustomGauge(@PathVariable double value) {
        metricsRegistry.setGauge("test.custom.gauge", value);
        return Map.of(
            "value", value,
            "message", "Custom gauge set successfully"
        );
    }

    @PostMapping("/distribution/{values}")
    public Map<String, Object> recordDistribution(@PathVariable String values) {
        String[] valueArray = values.split(",");
        for (String valueStr : valueArray) {
            try {
                double value = Double.parseDouble(valueStr.trim());
                metricsRegistry.recordDistribution("test.distribution.values", value);
            } catch (NumberFormatException e) {
                // Ignore invalid values
            }
        }
        return Map.of(
            "values_recorded", valueArray.length,
            "message", "Distribution values recorded"
        );
    }
}

Best Practices

1. Metric Design

  • Use consistent naming conventions (snake_case)
  • Include meaningful labels for dimensionality
  • Avoid high cardinality labels
  • Document metrics with descriptions
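
These conventions can be enforced mechanically. The helper below is an illustrative sketch (not part of Micrometer): it checks snake_case naming and rejects a hypothetical blocklist of label keys known to explode cardinality.

```java
import java.util.Set;
import java.util.regex.Pattern;

public class MetricNamingGuard {

    // snake_case: lowercase words separated by single underscores.
    private static final Pattern SNAKE_CASE = Pattern.compile("[a-z][a-z0-9]*(_[a-z0-9]+)*");

    // Illustrative blocklist of label keys that typically have unbounded value sets.
    private static final Set<String> HIGH_CARDINALITY =
        Set.of("user_id", "session_id", "request_id", "email");

    static boolean isValidName(String name) {
        return SNAKE_CASE.matcher(name).matches();
    }

    static boolean isSafeLabel(String labelKey) {
        return isValidName(labelKey) && !HIGH_CARDINALITY.contains(labelKey);
    }

    public static void main(String[] args) {
        System.out.println(isValidName("orders_completed_success")); // true
        System.out.println(isValidName("OrdersCompleted"));          // false
        System.out.println(isSafeLabel("customer_tier"));            // true
        System.out.println(isSafeLabel("user_id"));                  // false
    }
}
```

A check like this can run in a unit test over every metric name the application registers, catching naming drift before it reaches Prometheus.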

2. Query Optimization

  • Use rate() with appropriate time windows
  • Leverage recording rules for expensive queries
  • Use subqueries for historical analysis
  • Optimize label matching with =~ and !~
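
Recording rules precompute expensive expressions on a schedule so dashboards and alerts query the cheap result instead. A sketch, following Prometheus's `level:metric:operations` naming convention (file name and group name are assumptions):

```yaml
# recording_rules.yml
groups:
  - name: java_application_recording
    interval: 30s
    rules:
      - record: job:http_server_requests:rate5m
        expr: sum by (job) (rate(http_server_requests_seconds_count[5m]))
      - record: job:http_server_requests_errors:ratio_rate5m
        expr: >
          sum by (job) (rate(http_server_requests_seconds_count{status=~"5.."}[5m]))
          /
          sum by (job) (rate(http_server_requests_seconds_count[5m]))
```

Dashboards can then plot `job:http_server_requests_errors:ratio_rate5m` directly rather than re-evaluating the full ratio on every refresh.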

3. Alerting Strategy

  • Set appropriate thresholds and durations
  • Use multi-level severity (warning, critical)
  • Include meaningful annotations
  • Test alert rules thoroughly
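
Alert rules can be unit-tested offline with `promtool test rules`. The sketch below feeds synthetic series into the `HighMemoryUsage` rule defined above (series values and file names are assumptions):

```yaml
# alerts_test.yml — run with: promtool test rules alerts_test.yml
rule_files:
  - alerts.yml

evaluation_interval: 1m

tests:
  - interval: 1m
    input_series:
      # 850 / 1000 = 85% heap usage, held constant for 10 samples
      - series: 'jvm_memory_used_bytes{area="heap", instance="app:8080"}'
        values: '850+0x10'
      - series: 'jvm_memory_max_bytes{area="heap", instance="app:8080"}'
        values: '1000+0x10'
    alert_rule_test:
      - eval_time: 6m
        alertname: HighMemoryUsage
        exp_alerts:
          - exp_labels:
              severity: warning
              team: platform
              area: heap
              instance: app:8080
            exp_annotations:
              summary: "High memory usage in app:8080"
```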

4. Performance Considerations

  • Monitor metric cardinality
  • Use histogram for latency measurements
  • Implement metric aggregation where appropriate
  • Regularly review and optimize queries
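
Cardinality is worth estimating before a label ships: the series count for a metric is (at worst) the product of the distinct value counts of its labels. A back-of-the-envelope estimator, with made-up label counts for illustration:

```java
import java.util.Map;

public class CardinalityEstimator {

    // Worst-case series count = product of distinct value counts per label.
    static long estimateSeries(Map<String, Integer> distinctValuesPerLabel) {
        long total = 1;
        for (int distinct : distinctValuesPerLabel.values()) {
            total = Math.multiplyExact(total, distinct);
        }
        return total;
    }

    public static void main(String[] args) {
        // 10 endpoints x 5 statuses x 3 instances = 150 series: manageable.
        System.out.println(estimateSeries(Map.of("uri", 10, "status", 5, "instance", 3)));
        // Adding a user_id label with 100k values multiplies that by 100,000.
        System.out.println(estimateSeries(Map.of("uri", 10, "status", 5, "instance", 3, "user_id", 100_000)));
    }
}
```

The jump from hundreds to millions of series is why identifier-like labels belong in logs or traces, not metrics.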

Conclusion

This comprehensive PromQL implementation for Java metrics provides:

  • Complete Metrics Collection: JVM, business, cache, and database metrics
  • Advanced PromQL Queries: Performance, business analytics, and alerting
  • Production-Ready Alerting: Comprehensive alert rules for all critical scenarios
  • Grafana Integration: Ready-to-use dashboard queries
  • Best Practices: Optimized queries and metric design patterns

Key Benefits:

  1. Deep Observability: Full-stack monitoring from JVM to business metrics
  2. Proactive Alerting: Early detection of issues before they impact users
  3. Business Insights: Correlate technical metrics with business outcomes
  4. Performance Optimization: Identify bottlenecks and optimize resource usage
  5. Cost Management: Monitor and optimize infrastructure costs

By implementing this solution, you can achieve comprehensive observability of your Java applications, enabling data-driven decisions and proactive issue resolution.
