Grafana Mimir in Java: Complete Integration Guide


Grafana Mimir is a horizontally scalable, highly available, multi-tenant Prometheus-compatible long-term storage solution. While Mimir is written in Go, Java applications can integrate with it through various APIs and protocols. This guide covers everything from basic metric ingestion to advanced querying and management.

Mimir Architecture Overview

Mimir provides several integration points for Java applications:

  • Prometheus Remote Write API - For pushing metrics
  • Prometheus HTTP API - For querying metrics (PromQL)
  • Mimir Admin API - For tenant management
  • Grafana HTTP API - For dashboard management

Project Setup and Dependencies

Maven Dependencies:

<properties>
<prometheus.version>1.0.0</prometheus.version>
<micrometer.version>1.11.5</micrometer.version>
</properties>
<dependencies>
<!-- Prometheus Java client (1.x metrics model used by the remote write client below) -->
<dependency>
<groupId>io.prometheus</groupId>
<artifactId>prometheus-metrics-core</artifactId>
<version>${prometheus.version}</version>
</dependency>
<!-- Micrometer Prometheus Registry -->
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
<version>${micrometer.version}</version>
</dependency>
<!-- Snappy compression, required by the remote write protocol -->
<dependency>
<groupId>org.xerial.snappy</groupId>
<artifactId>snappy-java</artifactId>
<version>1.1.10.5</version>
</dependency>
<!-- HTTP Client -->
<dependency>
<groupId>org.apache.httpcomponents.client5</groupId>
<artifactId>httpclient5</artifactId>
<version>5.2.1</version>
</dependency>
<!-- JSON Processing -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.15.2</version>
</dependency>
<!-- For Async Operations -->
<dependency>
<groupId>org.springframework</groupId>
<artifactId>spring-web</artifactId>
<version>6.0.11</version>
<optional>true</optional>
</dependency>
</dependencies>

1. Remote Write to Mimir

This is the primary method for sending metrics from Java applications to Mimir.

Basic Remote Write Client:

package com.example.mimir.client;
import io.prometheus.metrics.model.snapshots.CounterSnapshot;
import io.prometheus.metrics.model.snapshots.GaugeSnapshot;
import io.prometheus.metrics.model.snapshots.MetricSnapshot;
import io.prometheus.metrics.model.snapshots.MetricSnapshots;
import org.apache.hc.client5.http.classic.methods.HttpPost;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.core5.http.ContentType;
import org.apache.hc.core5.http.io.entity.ByteArrayEntity;
import org.xerial.snappy.Snappy;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
public class MimirRemoteWriteClient {
private final String mimirWriteUrl;
private final String tenantId;
private final CloseableHttpClient httpClient;
public MimirRemoteWriteClient(String mimirUrl, String tenantId) {
this.mimirWriteUrl = mimirUrl + "/api/v1/push";
this.tenantId = tenantId;
this.httpClient = HttpClients.createDefault();
}
public void pushMetrics(MetricSnapshots snapshots) throws IOException {
byte[] protobufData = convertSnapshotsToProtobuf(snapshots);
byte[] compressedData = Snappy.compress(protobufData);
HttpPost httpPost = new HttpPost(mimirWriteUrl);
httpPost.setEntity(new ByteArrayEntity(compressedData, 
ContentType.APPLICATION_OCTET_STREAM));
httpPost.setHeader("Content-Encoding", "snappy");
httpPost.setHeader("X-Prometheus-Remote-Write-Version", "0.1.0");
httpPost.setHeader("X-Scope-OrgID", tenantId); // Multi-tenancy header
try (CloseableHttpResponse response = httpClient.execute(httpPost)) {
int statusCode = response.getCode();
if (statusCode < 200 || statusCode >= 300) {
throw new IOException("Mimir remote write failed with status: " + statusCode);
}
System.out.println("Successfully pushed metrics to Mimir");
}
}
private byte[] convertSnapshotsToProtobuf(MetricSnapshots snapshots) throws IOException {
// Convert MetricSnapshots to a Prometheus WriteRequest protobuf payload.
// Simplified placeholder: a real implementation serializes the WriteRequest
// message generated from Prometheus' remote-write .proto definitions.
ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
return outputStream.toByteArray();
}
public void close() throws IOException {
httpClient.close();
}
}
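To make the placeholder above concrete, here is an illustrative, dependency-free sketch of what the remote-write payload looks like on the wire. The class name `MiniRemoteWriteEncoder` is hypothetical, and a real sender should use protoc-generated `WriteRequest` classes (and must still Snappy-compress the result); this hand-encodes one time series with one sample purely to show the message structure:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

// Remote-write message layout:
// WriteRequest { repeated TimeSeries timeseries = 1; }
// TimeSeries   { repeated Label labels = 1; repeated Sample samples = 2; }
// Label        { string name = 1; string value = 2; }
// Sample       { double value = 1; int64 timestamp = 2; }
public class MiniRemoteWriteEncoder {

    public static byte[] encode(String metricName, Map<String, String> labels,
                                double value, long timestampMs) {
        ByteArrayOutputStream ts = new ByteArrayOutputStream();
        // The metric name travels as the __name__ label. Note: production
        // senders must emit labels sorted lexicographically by name.
        Map<String, String> all = new LinkedHashMap<>();
        all.put("__name__", metricName);
        all.putAll(labels);
        for (Map.Entry<String, String> e : all.entrySet()) {
            writeLabel(ts, e.getKey(), e.getValue());
        }
        // Sample submessage: field 1 is a fixed64 double, field 2 a varint
        ByteArrayOutputStream sample = new ByteArrayOutputStream();
        sample.write(0x09); // field 1, wire type 1 (fixed64)
        sample.writeBytes(ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN)
                .putDouble(value).array());
        sample.write(0x10); // field 2, wire type 0 (varint)
        writeVarint(sample, timestampMs);
        byte[] sampleBytes = sample.toByteArray();
        ts.write(0x12); // TimeSeries.samples
        writeVarint(ts, sampleBytes.length);
        ts.writeBytes(sampleBytes);

        byte[] tsBytes = ts.toByteArray();
        ByteArrayOutputStream req = new ByteArrayOutputStream();
        req.write(0x0A); // WriteRequest.timeseries
        writeVarint(req, tsBytes.length);
        req.writeBytes(tsBytes);
        return req.toByteArray();
    }

    private static void writeLabel(ByteArrayOutputStream out, String name, String value) {
        byte[] n = name.getBytes(StandardCharsets.UTF_8);
        byte[] v = value.getBytes(StandardCharsets.UTF_8);
        ByteArrayOutputStream label = new ByteArrayOutputStream();
        label.write(0x0A); writeVarint(label, n.length); label.writeBytes(n);
        label.write(0x12); writeVarint(label, v.length); label.writeBytes(v);
        byte[] l = label.toByteArray();
        out.write(0x0A); // TimeSeries.labels
        writeVarint(out, l.length);
        out.writeBytes(l);
    }

    private static void writeVarint(ByteArrayOutputStream out, long v) {
        while ((v & ~0x7FL) != 0) {
            out.write((int) ((v & 0x7F) | 0x80));
            v >>>= 7;
        }
        out.write((int) v);
    }
}
```

The byte prefixes (0x0A, 0x12, ...) are the protobuf field tags; everything else is length-delimited, which is why generated classes are the sensible choice in real code.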

2. Micrometer Integration with Mimir

The recommended approach for most Java applications is using Micrometer with Prometheus registry.

Micrometer Mimir Configuration:

package com.example.mimir.config;
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.config.MeterFilter;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;
import io.prometheus.client.CollectorRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class MicrometerMimirConfig {
@Bean
public PrometheusMeterRegistry prometheusMeterRegistry() {
PrometheusMeterRegistry registry = new PrometheusMeterRegistry(
PrometheusConfig.DEFAULT, 
CollectorRegistry.defaultRegistry, 
Clock.SYSTEM
);
// Common configuration for Mimir
registry.config()
.meterFilter(MeterFilter.ignoreTags("too.many.cardinality"))
.meterFilter(MeterFilter.maximumAllowableTags("http.request", "uri", 100, MeterFilter.deny()))
.commonTags("application", "my-java-app", "environment", "production");
return registry;
}
@Bean 
public MimirMetricsPublisher mimirMetricsPublisher(PrometheusMeterRegistry registry) {
return new MimirMetricsPublisher(registry);
}
}

Mimir Metrics Publisher:

package com.example.mimir.service;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.Timer;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
@Service
public class MimirMetricsPublisher {
private final MeterRegistry meterRegistry;
private final Counter httpRequests;
private final Timer httpRequestDuration;
private final AtomicInteger activeUsers;
public MimirMetricsPublisher(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
// Define metrics
this.httpRequests = Counter.builder("http_requests_total")
.description("Total HTTP requests")
.tag("application", "my-java-app")
.register(meterRegistry);
this.httpRequestDuration = Timer.builder("http_request_duration_seconds")
.description("HTTP request duration")
.publishPercentiles(0.5, 0.95, 0.99) // client-side percentiles; use publishPercentileHistogram() for histograms Mimir can aggregate
.register(meterRegistry);
this.activeUsers = new AtomicInteger(0);
Gauge.builder("active_users", activeUsers, AtomicInteger::get)
.description("Number of active users")
.register(meterRegistry);
}
public void recordHttpRequest(String method, String path, int status, long durationMs) {
httpRequests.increment();
httpRequestDuration.record(durationMs, TimeUnit.MILLISECONDS);
// You can also create ad-hoc counters with labels
Counter.builder("http_requests_detailed")
.tag("method", method)
.tag("path", path)
.tag("status", String.valueOf(status))
.register(meterRegistry)
.increment();
}
@Scheduled(fixedRate = 30000) // Every 30 seconds
public void pushMetricsToMimir() {
// In a real implementation, this would trigger remote write
// or you'd use the Prometheus scrape endpoint
updateBusinessMetrics();
}
private void updateBusinessMetrics() {
// Update business-specific metrics
activeUsers.set(getCurrentActiveUsersFromDatabase());
// Gauges sample their value through a supplier; Gauge has no set() method.
// Registering an already-registered meter returns the existing one, so this
// is safe to call repeatedly.
Gauge.builder("database_connections", this, p -> p.getDatabaseConnectionCount())
.description("Active database connections")
.register(meterRegistry);
}
// Mock implementations
private int getCurrentActiveUsersFromDatabase() { return 42; }
private int getDatabaseConnectionCount() { return 10; }
}
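The ad-hoc `http_requests_detailed` counter above tags each request with its raw path, which can explode series cardinality when URLs embed IDs. A small, hypothetical normalizer (the name `PathNormalizer` and its regexes are illustrative, not part of any library) shows one way to bound it before tagging:

```java
import java.util.regex.Pattern;

// Collapse high-cardinality path segments before using a request path as a
// metric tag, so per-tenant series limits in Mimir are not exhausted by IDs.
public class PathNormalizer {
    private static final Pattern NUMERIC = Pattern.compile("\\d+");
    private static final Pattern HEX_ID = Pattern.compile("(?i)[0-9a-f]{8,}");

    public static String normalize(String path) {
        String[] segments = path.split("/");
        StringBuilder sb = new StringBuilder();
        for (String s : segments) {
            if (s.isEmpty()) continue;
            sb.append('/');
            if (NUMERIC.matcher(s).matches() || HEX_ID.matcher(s).matches()) {
                sb.append("{id}"); // replace identifier-like segments with a placeholder
            } else {
                sb.append(s);
            }
        }
        return sb.length() == 0 ? "/" : sb.toString();
    }
}
```

With this in place, `recordHttpRequest` would tag `PathNormalizer.normalize(path)` instead of the raw path, so `/users/12345` and `/users/67890` share one series.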

3. Querying Mimir from Java

Mimir supports the full Prometheus HTTP API for querying metrics.

Mimir Query Client:

package com.example.mimir.client;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.core5.http.io.entity.EntityUtils;
import java.io.IOException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
// Created as a @Bean in MimirConfig rather than component-scanned, since its
// constructor takes plain String arguments Spring cannot autowire
public class MimirQueryClient {
private final String mimirQueryUrl;
private final String tenantId;
private final CloseableHttpClient httpClient;
private final ObjectMapper objectMapper;
public MimirQueryClient(String mimirUrl, String tenantId) {
this.mimirQueryUrl = mimirUrl + "/prometheus/api/v1";
this.tenantId = tenantId;
this.httpClient = HttpClients.createDefault();
this.objectMapper = new ObjectMapper();
}
public JsonNode instantQuery(String query) throws IOException {
String encodedQuery = URLEncoder.encode(query, StandardCharsets.UTF_8);
String url = mimirQueryUrl + "/query?query=" + encodedQuery;
HttpGet httpGet = new HttpGet(url);
httpGet.setHeader("X-Scope-OrgID", tenantId);
try (CloseableHttpResponse response = httpClient.execute(httpGet)) {
String responseBody = EntityUtils.toString(response.getEntity());
return objectMapper.readTree(responseBody);
}
}
public JsonNode rangeQuery(String query, Instant start, Instant end, String step) throws IOException {
String encodedQuery = URLEncoder.encode(query, StandardCharsets.UTF_8);
String url = String.format("%s/query_range?query=%s&start=%s&end=%s&step=%s",
mimirQueryUrl, encodedQuery, start.toString(), end.toString(), step);
HttpGet httpGet = new HttpGet(url);
httpGet.setHeader("X-Scope-OrgID", tenantId);
try (CloseableHttpResponse response = httpClient.execute(httpGet)) {
String responseBody = EntityUtils.toString(response.getEntity());
return objectMapper.readTree(responseBody);
}
}
public List<String> getSeries(String match) throws IOException {
String encodedMatch = URLEncoder.encode(match, StandardCharsets.UTF_8);
String url = mimirQueryUrl + "/series?match[]=" + encodedMatch;
HttpGet httpGet = new HttpGet(url);
httpGet.setHeader("X-Scope-OrgID", tenantId);
try (CloseableHttpResponse response = httpClient.execute(httpGet)) {
String responseBody = EntityUtils.toString(response.getEntity());
JsonNode jsonResponse = objectMapper.readTree(responseBody);
List<String> series = new ArrayList<>();
JsonNode data = jsonResponse.get("data");
if (data != null && data.isArray()) {
for (JsonNode item : data) {
series.add(item.toString());
}
}
return series;
}
}
public void close() throws IOException {
httpClient.close();
}
}
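When calling `rangeQuery` from dashboards or polling jobs, it can help to align `start` and `end` to the step so repeated queries ask for identical time buckets (which is friendlier to query-frontend result caching). A small illustrative helper (the `RangeAligner` name is hypothetical; it assumes Prometheus-style step shorthand such as `15s` or `1m`):

```java
import java.time.Duration;
import java.time.Instant;

// Align query_range boundaries to the step interval.
public class RangeAligner {
    // Parse a Prometheus-style step string ("15s", "1m", "2h", "1d").
    public static Duration parseStep(String step) {
        long n = Long.parseLong(step.substring(0, step.length() - 1));
        char unit = step.charAt(step.length() - 1);
        switch (unit) {
            case 's': return Duration.ofSeconds(n);
            case 'm': return Duration.ofMinutes(n);
            case 'h': return Duration.ofHours(n);
            case 'd': return Duration.ofDays(n);
            default: throw new IllegalArgumentException("Unsupported step unit: " + unit);
        }
    }

    // Round an instant down to the nearest step boundary.
    public static Instant alignDown(Instant t, Duration step) {
        long stepMs = step.toMillis();
        return Instant.ofEpochMilli((t.toEpochMilli() / stepMs) * stepMs);
    }
}
```

A caller would pass `alignDown(start, parseStep(step))` and the equivalent for `end` into `rangeQuery` before building the URL.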

4. Advanced Mimir Operations

Note that open-source Mimir creates tenants implicitly on the first write carrying a new X-Scope-OrgID; the tenant-management and stats endpoints below assume a deployment (for example, a gateway or an enterprise distribution) that exposes such an admin surface, so adjust the paths to match your setup.

Mimir Admin Operations:

package com.example.mimir.admin;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.hc.client5.http.classic.methods.HttpGet;
import org.apache.hc.client5.http.classic.methods.HttpPost;
import org.apache.hc.client5.http.impl.classic.CloseableHttpClient;
import org.apache.hc.client5.http.impl.classic.CloseableHttpResponse;
import org.apache.hc.client5.http.impl.classic.HttpClients;
import org.apache.hc.core5.http.io.entity.EntityUtils;
import org.apache.hc.core5.http.io.entity.StringEntity;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
public class MimirAdminClient {
private final String mimirAdminUrl;
private final String tenantId;
private final CloseableHttpClient httpClient;
private final ObjectMapper objectMapper;
public MimirAdminClient(String mimirUrl, String tenantId) {
this.mimirAdminUrl = mimirUrl + "/config/v1";
this.tenantId = tenantId;
this.httpClient = HttpClients.createDefault();
this.objectMapper = new ObjectMapper();
}
public void createTenant(String tenantId, Map<String, Object> config) throws IOException {
String url = mimirAdminUrl.replace("/config/v1", "") + "/multitenant/tenants";
Map<String, Object> requestBody = new HashMap<>();
requestBody.put("id", tenantId);
requestBody.put("config", config);
HttpPost httpPost = new HttpPost(url);
httpPost.setHeader("Content-Type", "application/json");
httpPost.setEntity(new StringEntity(
objectMapper.writeValueAsString(requestBody)
));
try (CloseableHttpResponse response = httpClient.execute(httpPost)) {
int statusCode = response.getCode();
if (statusCode < 200 || statusCode >= 300) {
throw new IOException("Failed to create tenant: " + statusCode);
}
}
}
public JsonNode getTenantStats() throws IOException {
String url = mimirAdminUrl + "/tenant_stats";
HttpGet httpGet = new HttpGet(url);
httpGet.setHeader("X-Scope-OrgID", tenantId);
try (CloseableHttpResponse response = httpClient.execute(httpGet)) {
String responseBody = EntityUtils.toString(response.getEntity());
return objectMapper.readTree(responseBody);
}
}
public void deleteTenantSeries(String[] matchers) throws IOException {
String url = mimirAdminUrl + "/admin/tsdb/delete_series";
Map<String, Object> requestBody = new HashMap<>();
requestBody.put("matchers", matchers);
HttpPost httpPost = new HttpPost(url);
httpPost.setHeader("X-Scope-OrgID", tenantId);
httpPost.setHeader("Content-Type", "application/json");
httpPost.setEntity(new StringEntity(
objectMapper.writeValueAsString(requestBody)
));
try (CloseableHttpResponse response = httpClient.execute(httpPost)) {
if (response.getCode() != 204) {
throw new IOException("Failed to delete series: " + response.getCode());
}
}
}
public void close() throws IOException {
httpClient.close();
}
}
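Both `getSeries` and `deleteTenantSeries` take PromQL-style series matchers such as `{__name__="http_requests_total",job="api"}`. Building these by string concatenation is error-prone once label values contain quotes, so a small helper (the `SeriesMatcher` name is illustrative) that escapes values the way PromQL string literals require:

```java
import java.util.Map;
import java.util.StringJoiner;

// Build a PromQL series matcher from a metric name and label pairs.
public class SeriesMatcher {
    public static String of(String metricName, Map<String, String> labels) {
        StringJoiner joiner = new StringJoiner(",", "{", "}");
        joiner.add("__name__=\"" + escape(metricName) + "\"");
        for (Map.Entry<String, String> e : labels.entrySet()) {
            joiner.add(e.getKey() + "=\"" + escape(e.getValue()) + "\"");
        }
        return joiner.toString();
    }

    private static String escape(String v) {
        // PromQL string literals escape backslash and double quote
        return v.replace("\\", "\\\\").replace("\"", "\\\"");
    }
}
```

The result can be URL-encoded and passed as `match[]` to the series endpoint, or placed in the `matchers` array sent to delete_series.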

5. Spring Boot Integration

Complete Spring Boot Configuration:

package com.example.mimir.config;
import com.example.mimir.admin.MimirAdminClient;
import com.example.mimir.client.MimirQueryClient;
import com.example.mimir.client.MimirRemoteWriteClient;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
@Configuration
@EnableScheduling
public class MimirConfig {
@Value("${mimir.url:http://localhost:9009}")
private String mimirUrl;
@Value("${mimir.tenant-id:default-tenant}")
private String tenantId;
@Bean
public MimirRemoteWriteClient mimirRemoteWriteClient() {
return new MimirRemoteWriteClient(mimirUrl, tenantId);
}
@Bean
public MimirQueryClient mimirQueryClient() {
return new MimirQueryClient(mimirUrl, tenantId);
}
@Bean(destroyMethod = "close")
public MimirAdminClient mimirAdminClient() {
return new MimirAdminClient(mimirUrl, tenantId);
}
}

REST Controller for Mimir Queries:

package com.example.mimir.controller;
import com.example.mimir.client.MimirQueryClient;
import com.fasterxml.jackson.databind.JsonNode;
import org.springframework.format.annotation.DateTimeFormat;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import java.io.IOException;
import java.time.Instant;
@RestController
@RequestMapping("/api/mimir")
public class MimirQueryController {
private final MimirQueryClient queryClient;
public MimirQueryController(MimirQueryClient queryClient) {
this.queryClient = queryClient;
}
@GetMapping("/query")
public ResponseEntity<JsonNode> instantQuery(@RequestParam String query) {
try {
JsonNode result = queryClient.instantQuery(query);
return ResponseEntity.ok(result);
} catch (IOException e) {
return ResponseEntity.internalServerError().build();
}
}
@GetMapping("/query-range")
public ResponseEntity<JsonNode> rangeQuery(
@RequestParam String query,
@RequestParam @DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME) Instant start,
@RequestParam @DateTimeFormat(iso = DateTimeFormat.ISO.DATE_TIME) Instant end,
@RequestParam(defaultValue = "15s") String step) {
try {
JsonNode result = queryClient.rangeQuery(query, start, end, step);
return ResponseEntity.ok(result);
} catch (IOException e) {
return ResponseEntity.internalServerError().build();
}
}
@GetMapping("/alerts")
public ResponseEntity<JsonNode> getAlerts() {
try {
// Query for alert states
JsonNode result = queryClient.instantQuery("ALERTS");
return ResponseEntity.ok(result);
} catch (IOException e) {
return ResponseEntity.internalServerError().build();
}
}
}

6. Best Practices for Mimir with Java

Configuration Properties:

# application.properties
mimir.url=http://mimir-gateway:9009
mimir.tenant-id=my-java-app
mimir.remote-write.enabled=true
mimir.remote-write.batch-size=1000
mimir.remote-write.timeout=30s
# Micrometer configuration (Spring Boot 3.x property names)
management.endpoints.web.exposure.include=prometheus,metrics,health
management.prometheus.metrics.export.step=1m
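The `mimir.remote-write.batch-size` property above implies chunking outgoing samples so no single push exceeds Mimir's per-request limits. A minimal generic splitter sketch (the `BatchSplitter` name is hypothetical) shows the mechanic:

```java
import java.util.ArrayList;
import java.util.List;

// Split a list of items into fixed-size batches, e.g. samples to remote-write.
public class BatchSplitter {
    public static <T> List<List<T>> split(List<T> items, int batchSize) {
        if (batchSize <= 0) throw new IllegalArgumentException("batchSize must be positive");
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            // Copy the sublist so each batch is independent of the source list
            batches.add(new ArrayList<>(items.subList(i, Math.min(i + batchSize, items.size()))));
        }
        return batches;
    }
}
```

A publisher would loop over `split(samples, 1000)` and issue one `pushMetrics` call per batch.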

Error Handling and Resilience:

package com.example.mimir.service;
import com.example.mimir.client.MimirRemoteWriteClient;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.prometheus.metrics.model.snapshots.MetricSnapshots;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
@Component
public class MimirMetricsService {
private final MimirRemoteWriteClient writeClient;
private final Counter failedWrites;
public MimirMetricsService(MimirRemoteWriteClient writeClient, MeterRegistry meterRegistry) {
this.writeClient = writeClient;
this.failedWrites = Counter.builder("mimir_write_failures")
.description("Failed remote writes to Mimir")
.register(meterRegistry);
}
// Throw the IOException directly so Spring Retry can intercept it; wrapping
// the call in CompletableFuture.runAsync would hide the exception from
// @Retryable. (retryFor is the Spring Retry 2.x name; 1.x uses value.)
@Async
@Retryable(retryFor = IOException.class, maxAttempts = 3, backoff = @Backoff(delay = 1000))
public CompletableFuture<Void> pushMetricsAsync(MetricSnapshots snapshots) throws IOException {
try {
writeClient.pushMetrics(snapshots);
return CompletableFuture.completedFuture(null);
} catch (IOException e) {
failedWrites.increment();
throw e;
}
}
}
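If Spring Retry is not on the classpath, the same retry-with-exponential-backoff behavior can be written in plain Java. A framework-free sketch (the `RetryingPusher` name is hypothetical):

```java
import java.util.concurrent.Callable;

// Retry an action up to maxAttempts times, doubling the delay between tries.
public class RetryingPusher {
    public static <T> T withRetry(Callable<T> action, int maxAttempts, long initialDelayMs)
            throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // exponential backoff between attempts
                }
            }
        }
        throw last; // all attempts exhausted; surface the final failure
    }
}
```

Usage would wrap the push, e.g. `RetryingPusher.withRetry(() -> { writeClient.pushMetrics(snapshots); return null; }, 3, 1000)`, incrementing the failure counter in the catch around that call.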

Monitoring Your Mimir Integration

Create alerts for your Java application's Mimir integration:

# mimir-alerts.yaml
groups:
  - name: java-mimir-integration
    rules:
      - alert: MimirWriteFailures
        expr: rate(mimir_write_failures_total[5m]) > 0
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Mimir remote write failures detected"
          description: "Java application is failing to write metrics to Mimir at {{ $value }} failures per second"

Conclusion

Integrating Java applications with Grafana Mimir provides a robust, scalable solution for long-term metric storage and analysis. Key takeaways:

  1. Use Micrometer for application-level metrics instrumentation
  2. Leverage Remote Write for pushing metrics from ephemeral workloads
  3. Utilize Mimir's Query API for building custom dashboards and alerts
  4. Implement proper error handling and retry mechanisms
  5. Monitor your Mimir integration itself with appropriate alerts

Mimir's Prometheus compatibility makes it an excellent choice for Java applications already using Prometheus metrics, while providing the scalability and reliability needed for enterprise deployments.

