Vector for Observability in Java

Introduction

Vector is a high-performance observability data pipeline that enables collecting, transforming, and routing logs, metrics, and traces. When integrated with Java applications, Vector provides a robust solution for managing observability data at scale.

Architecture Overview

1. Vector Pipeline Architecture

Java Application  →  Vector Agent  →  Backends (Loki, Prometheus, Jaeger, etc.)
       ↓                  ↓
Log4j2/Logback      Metrics/Traces
       ↓                  ↓
 Files/Stdout         OTLP/HTTP

Setup and Dependencies

1. Maven Dependencies

<properties>
  <opentelemetry.version>1.31.0</opentelemetry.version>
  <micrometer.version>1.11.5</micrometer.version>
  <log4j2.version>2.20.0</log4j2.version>
</properties>

<dependencies>
  <!-- OpenTelemetry for Traces and Metrics -->
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-api</artifactId>
    <version>${opentelemetry.version}</version>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-sdk</artifactId>
    <version>${opentelemetry.version}</version>
  </dependency>
  <dependency>
    <groupId>io.opentelemetry</groupId>
    <artifactId>opentelemetry-exporter-otlp</artifactId>
    <version>${opentelemetry.version}</version>
  </dependency>
  <!-- Micrometer for Application Metrics -->
  <dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-core</artifactId>
    <version>${micrometer.version}</version>
  </dependency>
  <dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
    <version>${micrometer.version}</version>
  </dependency>
  <!-- Log4j2 with JSON Layout -->
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-core</artifactId>
    <version>${log4j2.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-api</artifactId>
    <version>${log4j2.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-layout-template-json</artifactId>
    <version>${log4j2.version}</version>
  </dependency>
  <!-- Note: Vector runs as a separate agent process and ingests files, OTLP,
       and Prometheus scrapes, so no Vector-specific client library is needed. -->
</dependencies>

2. Vector Configuration

# vector.toml

[api]
enabled = true
address = "127.0.0.1:8686"

[sources.java_app_logs]
type = "file"
include = ["/var/log/java-app/*.log"]
read_from = "beginning"

[sources.java_app_metrics]
type = "prometheus_scrape"
endpoints = ["http://localhost:8080/actuator/prometheus"]
scrape_interval_secs = 15

# OTLP ingestion is handled by Vector's "opentelemetry" source
[sources.java_app_traces]
type = "opentelemetry"
grpc.address = "0.0.0.0:4317"
http.address = "0.0.0.0:4318"

[transforms.java_logs_parser]
type = "remap"
inputs = ["java_app_logs"]
source = '''
parsed = parse_json!(string!(.message))
. = merge!(., parsed)
.service_name = "java-application"
.environment = "production"
'''

[transforms.java_metrics_enrich]
type = "remap"
inputs = ["java_app_metrics"]
source = '''
.tags.environment = "production"
.tags.service = "java-application"
'''

# The opentelemetry source exposes named outputs (logs/metrics/traces)
[transforms.java_traces_enrich]
type = "remap"
inputs = ["java_app_traces.traces"]
source = '''
.resource.attributes."service.name" = "java-application"
.resource.attributes."deployment.environment" = "production"
'''

[sinks.loki_logs]
type = "loki"
inputs = ["java_logs_parser"]
endpoint = "http://loki:3100"
encoding.codec = "json"
labels = { "job" = "java-app", "environment" = "{{ environment }}", "level" = "{{ level }}" }

[sinks.prometheus_metrics]
type = "prometheus_remote_write"
inputs = ["java_metrics_enrich"]
endpoint = "http://prometheus:9090/api/v1/write"

# Vector has no dedicated Jaeger sink; recent Jaeger versions accept OTLP,
# so forward traces over OTLP/HTTP instead
[sinks.jaeger_traces]
type = "http"
inputs = ["java_traces_enrich"]
uri = "http://jaeger:4318/v1/traces"
encoding.codec = "json"

# For debugging
[sinks.console]
type = "console"
inputs = ["java_logs_parser", "java_metrics_enrich"]
encoding.codec = "json"
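Transforms can be verified before deployment with Vector's built-in unit tests (run via `vector test vector.toml`). Assuming the java_logs_parser transform above, a sketch of such a test might look like:

```toml
[[tests]]
name = "java_logs_parser adds service metadata"

[[tests.inputs]]
insert_at = "java_logs_parser"
type = "log"
log_fields.message = '{"level":"INFO","message":"Application started"}'

[[tests.outputs]]
extract_from = "java_logs_parser"

[[tests.outputs.conditions]]
type = "vrl"
source = '.service_name == "java-application" && .environment == "production"'
```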

Logging Integration

1. Log4j2 Configuration for Vector

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <!-- Console Appender for local development -->
    <Console name="Console" target="SYSTEM_OUT">
      <JsonTemplateLayout eventTemplateUri="classpath:LogstashJsonEventLayoutV1.json"/>
    </Console>
    <!-- File Appender for Vector consumption -->
    <File name="VectorFile" fileName="/var/log/java-app/app.log">
      <JsonTemplateLayout eventTemplateUri="classpath:VectorJsonLayout.json"/>
    </File>
    <!-- Async Appender for performance -->
    <Async name="AsyncVector">
      <AppenderRef ref="VectorFile"/>
    </Async>
  </Appenders>
  <Loggers>
    <Logger name="com.example.app" level="DEBUG" additivity="false">
      <AppenderRef ref="AsyncVector"/>
      <AppenderRef ref="Console"/>
    </Logger>
    <Root level="INFO">
      <AppenderRef ref="AsyncVector"/>
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>

2. Custom JSON Layout for Vector

Save the template as src/main/resources/VectorJsonLayout.json. JsonTemplateLayout substitutes `$resolver` directives at runtime; trace and span IDs are read from the MDC here, so they must be placed there (for example by OpenTelemetry's Log4j context-data instrumentation):

{
  "@timestamp": { "$resolver": "timestamp" },
  "level": { "$resolver": "level", "field": "name" },
  "logger": { "$resolver": "logger", "field": "name" },
  "message": { "$resolver": "message", "stringified": true },
  "thread": { "$resolver": "thread", "field": "name" },
  "service": "java-application",
  "environment": "${sys:ENVIRONMENT:-development}",
  "version": "${bundle:application:version:-unknown}",
  "trace_id": { "$resolver": "mdc", "key": "trace_id" },
  "span_id": { "$resolver": "mdc", "key": "span_id" },
  "exception": {
    "class": { "$resolver": "exception", "field": "className" },
    "message": { "$resolver": "exception", "field": "message" },
    "stack_trace": { "$resolver": "exception", "field": "stackTrace", "stackTrace": { "stringified": true } }
  },
  "custom_fields": { "$resolver": "mdc" }
}
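For a quick sanity check of what Vector will ingest, the essential fields of such an event can be mocked up with plain JDK string formatting (illustrative only; real code should use a JSON library so that values are properly escaped):

```java
import java.time.Instant;

public class LogLineDemo {
    // Minimal JSON log line covering the layout's core fields.
    // Illustrative only: no escaping is performed on the inputs.
    public static String logLine(String level, String message, String service) {
        return String.format(
            "{\"@timestamp\":\"%s\",\"level\":\"%s\",\"message\":\"%s\",\"service\":\"%s\"}",
            Instant.now(), level, message, service);
    }

    public static void main(String[] args) {
        System.out.println(logLine("INFO", "Application started", "java-application"));
    }
}
```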

3. Structured Logging Service

@Service
@Slf4j
public class VectorLoggingService {
private final ThreadLocal<Map<String, String>> context = new ThreadLocal<>();
private final ObjectMapper objectMapper;
public VectorLoggingService(ObjectMapper objectMapper) {
this.objectMapper = objectMapper;
}
public void setContext(String key, String value) {
Map<String, String> ctx = context.get();
if (ctx == null) {
ctx = new HashMap<>();
context.set(ctx);
}
ctx.put(key, value);
}
public void clearContext() {
context.remove();
}
public void info(String message, Map<String, Object> fields) {
log.info("{}", createLogEvent("INFO", message, fields));
}
public void error(String message, Throwable throwable, Map<String, Object> fields) {
log.error("{}", createLogEvent("ERROR", message, fields, throwable));
}
public void debug(String message, Map<String, Object> fields) {
log.debug("{}", createLogEvent("DEBUG", message, fields));
}
public void warn(String message, Map<String, Object> fields) {
log.warn("{}", createLogEvent("WARN", message, fields));
}
private String createLogEvent(String level, String message, Map<String, Object> fields) {
return createLogEvent(level, message, fields, null);
}
private String createLogEvent(String level, String message, Map<String, Object> fields, Throwable throwable) {
try {
LogEvent logEvent = LogEvent.builder()
.timestamp(Instant.now().toString())
.level(level)
.logger(this.getClass().getName())
.message(message)
.thread(Thread.currentThread().getName())
.service("java-application")
.environment(System.getenv().getOrDefault("ENVIRONMENT", "development"))
.traceId(getTraceId())
.spanId(getSpanId())
.customFields(mergeFields(fields))
.exception(throwable != null ? ExceptionInfo.fromThrowable(throwable) : null)
.build();
return objectMapper.writeValueAsString(logEvent);
} catch (Exception e) {
// Fallback to simple logging if JSON serialization fails
log.warn("Failed to create structured log event: {}", e.getMessage());
return message;
}
}
private Map<String, Object> mergeFields(Map<String, Object> additionalFields) {
Map<String, Object> allFields = new HashMap<>();
// Add thread context
Map<String, String> ctx = context.get();
if (ctx != null) {
allFields.putAll(ctx);
}
// Add additional fields
if (additionalFields != null) {
allFields.putAll(additionalFields);
}
return allFields;
}
private String getTraceId() {
// Get trace ID from the current OpenTelemetry span; when no span is active
// the SpanContext is invalid (all zeros), so return null instead
SpanContext spanContext = Span.current().getSpanContext();
return spanContext.isValid() ? spanContext.getTraceId() : null;
}
private String getSpanId() {
SpanContext spanContext = Span.current().getSpanContext();
return spanContext.isValid() ? spanContext.getSpanId() : null;
}
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public static class LogEvent {
private String timestamp;
private String level;
private String logger;
private String message;
private String thread;
private String service;
private String environment;
private String traceId;
private String spanId;
private Map<String, Object> customFields;
private ExceptionInfo exception;
}
@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public static class ExceptionInfo {
private String className;
private String message;
private String stackTrace;
public static ExceptionInfo fromThrowable(Throwable throwable) {
StringWriter sw = new StringWriter();
throwable.printStackTrace(new PrintWriter(sw));
return ExceptionInfo.builder()
.className(throwable.getClass().getName())
.message(throwable.getMessage())
.stackTrace(sw.toString())
.build();
}
}
}
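The ThreadLocal-backed context used above keeps request fields isolated per thread, which is what makes it safe under concurrent requests. A stripped-down sketch (hypothetical RequestContextDemo class, JDK only) shows the behavior:

```java
import java.util.HashMap;
import java.util.Map;

public class RequestContextDemo {
    // Minimal stand-in for the service's ThreadLocal context
    static final ThreadLocal<Map<String, String>> CTX = ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { CTX.get().put(key, value); }
    public static String get(String key) { return CTX.get().get(key); }
    public static void clear() { CTX.remove(); } // always clear on pooled threads to avoid leaks

    public static void main(String[] args) throws InterruptedException {
        put("request_id", "abc-123");
        // Another thread sees its own empty context, not this thread's values
        final String[] other = new String[1];
        Thread t = new Thread(() -> other[0] = get("request_id"));
        t.start();
        t.join();
        System.out.println("main: " + get("request_id") + ", worker: " + other[0]);
        // prints "main: abc-123, worker: null"
        clear();
    }
}
```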

Metrics Integration

1. Micrometer Configuration for Vector

@Configuration
public class MetricsConfig {
@Bean
public MeterRegistry meterRegistry() {
PrometheusMeterRegistry prometheusRegistry = new PrometheusMeterRegistry(
PrometheusConfig.DEFAULT);
// Add common tags
prometheusRegistry.config().commonTags(
"application", "java-app",
"environment", System.getenv().getOrDefault("ENVIRONMENT", "development"),
"version", System.getenv().getOrDefault("APP_VERSION", "unknown")
);
return prometheusRegistry;
}
@Bean
public TimedAspect timedAspect(MeterRegistry registry) {
return new TimedAspect(registry);
}
@Bean
public CountedAspect countedAspect(MeterRegistry registry) {
return new CountedAspect(registry);
}
}
@Service
@Slf4j
public class VectorMetricsService {
private final MeterRegistry meterRegistry;
private final Map<String, Counter> counters = new ConcurrentHashMap<>();
private final Map<String, Timer> timers = new ConcurrentHashMap<>();
// Gauges sample a value supplier at scrape time, so each gauge is registered
// once against a mutable holder; setGauge() then only updates the holder
private final Map<String, AtomicReference<Double>> gaugeValues = new ConcurrentHashMap<>();
private final long startTime = System.currentTimeMillis();
public VectorMetricsService(MeterRegistry meterRegistry) {
this.meterRegistry = meterRegistry;
initializeDefaultMetrics();
}
private void initializeDefaultMetrics() {
// JVM heap metrics (used = total - free; totalMemory alone is not "used")
meterRegistry.gauge("jvm_memory_used",
Tags.of("area", "heap"),
Runtime.getRuntime(),
r -> r.totalMemory() - r.freeMemory());
meterRegistry.gauge("jvm_memory_max",
Tags.of("area", "heap"),
Runtime.getRuntime(),
Runtime::maxMemory);
// Custom application metrics
Gauge.builder("application_uptime", this,
s -> (System.currentTimeMillis() - s.startTime) / 1000.0)
.description("Application uptime in seconds")
.register(meterRegistry);
}
public void incrementCounter(String name, String... tags) {
incrementCounter(name, 1.0, tags);
}
public void incrementCounter(String name, double amount, String... tags) {
// Key the cache by name + tags so differently tagged series stay distinct
Counter counter = counters.computeIfAbsent(name + Arrays.toString(tags), k ->
Counter.builder(name)
.description("Counter for " + name)
.tags(tags)
.register(meterRegistry));
counter.increment(amount);
}
public void recordTimer(String name, long durationMs, String... tags) {
Timer timer = timers.computeIfAbsent(name + Arrays.toString(tags), k ->
Timer.builder(name)
.description("Timer for " + name)
.tags(tags)
.register(meterRegistry));
timer.record(durationMs, TimeUnit.MILLISECONDS);
}
public Timer.Sample startTimer() {
return Timer.start(meterRegistry);
}
public void stopTimer(Timer.Sample sample, String timerName, String... tags) {
sample.stop(timers.computeIfAbsent(timerName + Arrays.toString(tags), k ->
Timer.builder(timerName)
.description("Execution time for " + timerName)
.tags(tags)
.register(meterRegistry)));
}
public void setGauge(String name, double value, String... tags) {
List<Tag> tagList = parseTags(tags);
gaugeValues.computeIfAbsent(name + tagList, k -> {
AtomicReference<Double> holder = new AtomicReference<>(value);
Gauge.builder(name, holder, AtomicReference::get)
.description("Gauge for " + name)
.tags(tagList)
.register(meterRegistry);
return holder;
}).set(value);
}
public void recordHistogram(String name, double value, String... tags) {
DistributionSummary summary = DistributionSummary.builder(name)
.description("Histogram for " + name)
.tags(tags)
.register(meterRegistry);
summary.record(value);
}
private List<Tag> parseTags(String... tags) {
List<Tag> tagList = new ArrayList<>();
for (int i = 0; i < tags.length; i += 2) {
if (i + 1 < tags.length) {
tagList.add(Tag.of(tags[i], tags[i + 1]));
}
}
return tagList;
}
}
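Micrometer gauges are pull-based: the registry samples a value supplier at scrape time, so a dynamically updated gauge is best registered once against a mutable holder that later writes update. The pattern in isolation (hypothetical GaugeBoard standing in for a meter registry, JDK only):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.DoubleSupplier;

// Hypothetical stand-in for a meter registry: one sampled supplier per gauge name
public class GaugeBoard {
    private final Map<String, DoubleSupplier> registered = new ConcurrentHashMap<>();
    private final Map<String, AtomicReference<Double>> holders = new ConcurrentHashMap<>();

    // Register the gauge once; afterwards only the backing holder is updated
    public void set(String name, double value) {
        holders.computeIfAbsent(name, k -> {
            AtomicReference<Double> holder = new AtomicReference<>(value);
            registered.put(k, holder::get); // "scrapes" read the holder lazily
            return holder;
        }).set(value);
    }

    public double sample(String name) {
        return registered.get(name).getAsDouble();
    }

    public static void main(String[] args) {
        GaugeBoard board = new GaugeBoard();
        board.set("active_users", 12);
        board.set("active_users", 42); // an update, not a re-registration
        System.out.println(board.sample("active_users")); // prints 42.0
    }
}
```

Re-registering on every update (or passing a one-off value, as `meterRegistry.gauge(name, tags, value)` does) leaves the gauge frozen at its first sample, which is the pitfall this pattern avoids.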

2. Business Metrics Service

@Service
@Slf4j
public class BusinessMetricsService {
private final VectorMetricsService metricsService;
private final Counter userRegistrations;
private final Counter orderCreations;
private final Timer orderProcessingTime;
// Backing value for the active-users gauge; the gauge samples this holder
private final AtomicInteger activeUsers = new AtomicInteger();
public BusinessMetricsService(VectorMetricsService metricsService,
MeterRegistry meterRegistry) {
this.metricsService = metricsService;
// Pre-defined counters
this.userRegistrations = Counter.builder("user_registrations_total")
.description("Total number of user registrations")
.tag("type", "registration")
.register(meterRegistry);
this.orderCreations = Counter.builder("order_creations_total")
.description("Total number of orders created")
.tag("type", "creation")
.register(meterRegistry);
this.orderProcessingTime = Timer.builder("order_processing_duration")
.description("Time taken to process orders")
.register(meterRegistry);
Gauge.builder("active_users_count", activeUsers, AtomicInteger::get)
.description("Number of currently active users")
.register(meterRegistry);
}
public void recordUserRegistration(String source) {
userRegistrations.increment();
metricsService.incrementCounter("business_events",
"event_type", "user_registration", "source", source);
}
public void recordOrderCreation(String customerTier, double orderAmount) {
orderCreations.increment();
metricsService.incrementCounter("business_events",
"event_type", "order_creation", "customer_tier", customerTier);
metricsService.recordHistogram("order_amounts", orderAmount,
"customer_tier", customerTier);
}
public void recordOrderProcessingTime(long durationMs, String orderType) {
orderProcessingTime.record(durationMs, TimeUnit.MILLISECONDS);
metricsService.recordTimer("business_operations", durationMs,
"operation", "order_processing", "order_type", orderType);
}
public void updateActiveUsers(int count) {
// Update the holder backing the registered gauge
activeUsers.set(count);
metricsService.setGauge("active_users", count);
}
@Scheduled(fixedRate = 30000) // Every 30 seconds
public void collectSystemMetrics() {
Runtime runtime = Runtime.getRuntime();
long usedMemory = runtime.totalMemory() - runtime.freeMemory();
long maxMemory = runtime.maxMemory();
metricsService.setGauge("jvm_memory_used_bytes", usedMemory);
metricsService.setGauge("jvm_memory_max_bytes", maxMemory);
metricsService.setGauge("jvm_threads_active", 
Thread.activeCount());
}
}

Tracing Integration

1. OpenTelemetry Configuration

@Configuration
public class TracingConfig {
@Bean
public OpenTelemetry openTelemetry() {
String applicationName = System.getenv().getOrDefault("APP_NAME", "java-application");
Resource resource = Resource.getDefault()
.merge(Resource.create(Attributes.of(
ResourceAttributes.SERVICE_NAME, applicationName,
ResourceAttributes.DEPLOYMENT_ENVIRONMENT, 
System.getenv().getOrDefault("ENVIRONMENT", "development"),
ResourceAttributes.SERVICE_VERSION,
System.getenv().getOrDefault("APP_VERSION", "unknown")
)));
SdkTracerProvider tracerProvider = SdkTracerProvider.builder()
.addSpanProcessor(BatchSpanProcessor.builder(
OtlpGrpcSpanExporter.builder()
.setEndpoint("http://localhost:4317")
.build()
).build())
.setResource(resource)
.build();
SdkMeterProvider meterProvider = SdkMeterProvider.builder()
.setResource(resource)
.build();
return OpenTelemetrySdk.builder()
.setTracerProvider(tracerProvider)
.setMeterProvider(meterProvider)
.setPropagators(ContextPropagators.create(
W3CTraceContextPropagator.getInstance()
))
.build();
}
@Bean
public Tracer tracer(OpenTelemetry openTelemetry) {
return openTelemetry.getTracer("java-application");
}
}
@Service
@Slf4j
public class VectorTracingService {
private final Tracer tracer;
private final VectorMetricsService metricsService;
public VectorTracingService(Tracer tracer, VectorMetricsService metricsService) {
this.tracer = tracer;
this.metricsService = metricsService;
}
public <T> T trace(String operationName, Supplier<T> operation) {
return trace(operationName, Map.of(), operation);
}
public <T> T trace(String operationName, Map<String, Object> attributes, 
Supplier<T> operation) {
Span span = tracer.spanBuilder(operationName).startSpan();
try (Scope scope = span.makeCurrent()) {
// Set span attributes
attributes.forEach((key, value) -> 
span.setAttribute(key, value.toString()));
// Execute the operation
Timer.Sample timer = metricsService.startTimer();
T result = operation.get();
// Record metrics
metricsService.stopTimer(timer, operationName + "_duration");
metricsService.incrementCounter(operationName + "_total", "status", "success");
span.setStatus(StatusCode.OK);
return result;
} catch (Exception e) {
span.recordException(e);
span.setStatus(StatusCode.ERROR, e.getMessage());
metricsService.incrementCounter(operationName + "_total", "status", "error");
throw e;
} finally {
span.end();
}
}
public void trace(String operationName, Runnable operation) {
trace(operationName, Map.of(), operation);
}
public void trace(String operationName, Map<String, Object> attributes, 
Runnable operation) {
trace(operationName, attributes, () -> {
operation.run();
return null;
});
}
public Span startSpan(String operationName) {
return tracer.spanBuilder(operationName).startSpan();
}
public void endSpan(Span span) {
span.end();
}
public void addEvent(Span span, String eventName, Map<String, Object> attributes) {
AttributesBuilder attributesBuilder = Attributes.builder();
attributes.forEach((key, value) -> 
attributesBuilder.put(key, value.toString()));
span.addEvent(eventName, attributesBuilder.build());
}
public Context extractContext(Map<String, String> headers) {
TextMapGetter<Map<String, String>> getter = new TextMapGetter<Map<String, String>>() {
@Override
public String get(Map<String, String> carrier, String key) {
return carrier.get(key);
}
@Override
public Iterable<String> keys(Map<String, String> carrier) {
return carrier.keySet();
}
};
return W3CTraceContextPropagator.getInstance().extract(Context.current(), headers, getter);
}
public void injectContext(Map<String, String> headers) {
TextMapSetter<Map<String, String>> setter = Map::put;
W3CTraceContextPropagator.getInstance().inject(Context.current(), headers, setter);
}
}
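The W3C trace context handled by injectContext/extractContext travels in a single traceparent header: version 00, a 32-hex-digit trace ID, a 16-hex-digit parent span ID, and trace flags. The propagator builds and parses this for you; a JDK-only sketch makes the wire format concrete:

```java
public class TraceparentDemo {
    // traceparent = "00" (version) - 32 hex trace-id - 16 hex span-id - 2 hex flags
    public static String build(String traceId, String spanId, boolean sampled) {
        return "00-" + traceId + "-" + spanId + "-" + (sampled ? "01" : "00");
    }

    public static String traceIdOf(String traceparent) {
        return traceparent.split("-")[1];
    }

    public static void main(String[] args) {
        String header = build("4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7", true);
        System.out.println(header);
        // prints 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01
    }
}
```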

Spring Boot Integration

1. Spring Boot Actuator Configuration

# application.yml
management:
  endpoints:
    web:
      exposure:
        include: health,info,metrics,prometheus,loggers
      base-path: /actuator
  endpoint:
    health:
      show-details: always
      show-components: always
    metrics:
      enabled: true
    prometheus:
      enabled: true
  metrics:
    export:
      prometheus:
        enabled: true
    distribution:
      percentiles-histogram:
        http.server.requests: true
    tags:
      application: java-app
      environment: ${ENVIRONMENT:development}
      version: ${APP_VERSION:unknown}
  tracing:
    sampling:
      probability: 1.0
logging:
  level:
    com.example.app: DEBUG
  file:
    name: /var/log/java-app/app.log
  pattern:
    file: "%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"

2. REST Controller with Observability

@RestController
@RequestMapping("/api")
@Slf4j
public class UserController {
private final UserService userService;
private final VectorLoggingService loggingService;
private final VectorMetricsService metricsService;
private final VectorTracingService tracingService;
private final BusinessMetricsService businessMetrics;
public UserController(UserService userService,
VectorLoggingService loggingService,
VectorMetricsService metricsService,
VectorTracingService tracingService,
BusinessMetricsService businessMetrics) {
this.userService = userService;
this.loggingService = loggingService;
this.metricsService = metricsService;
this.tracingService = tracingService;
this.businessMetrics = businessMetrics;
}
@PostMapping("/users")
public ResponseEntity<User> createUser(@RequestBody CreateUserRequest request,
@RequestHeader Map<String, String> headers) {
return tracingService.trace("user.create", Map.of(
"user.email", request.getEmail(),
"operation.type", "create"
), () -> {
try {
// Extract and set tracing context
Context context = tracingService.extractContext(headers);
try (Scope scope = context.makeCurrent()) {
// Set logging context
String requestId = headers.getOrDefault("X-Request-ID", 
UUID.randomUUID().toString());
loggingService.setContext("request_id", requestId);
loggingService.setContext("user_email", request.getEmail());
// Log the request
loggingService.info("Creating user", Map.of(
"email", request.getEmail(),
"source", "api"
));
// Business logic
User user = userService.createUser(request);
// Record business metrics
businessMetrics.recordUserRegistration("api");
// Log success
loggingService.info("User created successfully", Map.of(
"user_id", user.getId(),
"email", user.getEmail()
));
return ResponseEntity.status(HttpStatus.CREATED).body(user);
} finally {
loggingService.clearContext();
}
} catch (Exception e) {
loggingService.error("Failed to create user", e, Map.of(
"email", request.getEmail(),
"error_type", e.getClass().getSimpleName()
));
throw e;
}
});
}
@GetMapping("/users/{id}")
public ResponseEntity<User> getUser(@PathVariable String id,
@RequestHeader Map<String, String> headers) {
return tracingService.trace("user.get", Map.of(
"user.id", id,
"operation.type", "read"
), () -> {
try {
Context context = tracingService.extractContext(headers);
try (Scope scope = context.makeCurrent()) {
String requestId = headers.getOrDefault("X-Request-ID", 
UUID.randomUUID().toString());
loggingService.setContext("request_id", requestId);
loggingService.setContext("user_id", id);
loggingService.debug("Fetching user", Map.of("user_id", id));
User user = userService.getUser(id);
// Keep user IDs out of metric tags: unbounded tag values explode series cardinality
metricsService.incrementCounter("user_fetches", "status", "success");
return ResponseEntity.ok(user);
} finally {
loggingService.clearContext();
}
} catch (UserNotFoundException e) {
loggingService.warn("User not found", Map.of("user_id", id));
metricsService.incrementCounter("user_fetches", "status", "not_found");
throw e;
} catch (Exception e) {
loggingService.error("Failed to fetch user", e, Map.of("user_id", id));
metricsService.incrementCounter("user_fetches", "status", "error");
throw e;
}
});
}
}
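The controller's X-Request-ID fallback (reuse the caller's ID so logs correlate across services, otherwise mint a fresh one) is a small piece of logic worth isolating; a JDK-only sketch:

```java
import java.util.Map;
import java.util.UUID;

public class RequestIds {
    // Reuse the caller-supplied ID when present; otherwise mint one for this request
    public static String requestId(Map<String, String> headers) {
        String supplied = headers.get("X-Request-ID");
        return (supplied != null && !supplied.isEmpty()) ? supplied : UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        System.out.println(requestId(Map.of("X-Request-ID", "req-42"))); // prints req-42
        System.out.println(requestId(Map.of())); // prints a freshly generated UUID
    }
}
```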

Advanced Vector Configuration

1. Multi-Environment Vector Setup

# vector.dev.toml - Development configuration

[api]
enabled = true
address = "127.0.0.1:8686"

[sources.java_app_logs]
type = "file"
include = ["/var/log/java-app/*.log"]

[sinks.console]
type = "console"
inputs = ["java_app_logs"]
encoding.codec = "json"

[sinks.local_file]
type = "file"
inputs = ["java_app_logs"]
path = "/tmp/vector-logs/application-%Y-%m-%d.log"
encoding.codec = "json"

# vector.prod.toml - Production configuration

[api]
enabled = true
address = "0.0.0.0:8686"

[sources.java_app_logs]
type = "file"
include = ["/var/log/java-app/*.log"]

[transforms.logs_processing]
type = "remap"
inputs = ["java_app_logs"]
source = '''
# Parse JSON logs
. = parse_json!(string!(.message))
# Add processing metadata
.vector_processed_at = now()
.vector_host = get_hostname!()
# Redact sensitive values (filters take regexes or named filters)
. = redact(., filters: [r'password', r'credit_card', r'api_key'])
'''

[sinks.loki_prod]
type = "loki"
inputs = ["logs_processing"]
endpoint = "http://loki-prod:3100"
encoding.codec = "json"
labels = { "job" = "java-app-prod", "environment" = "production", "level" = "{{ level }}", "service" = "{{ service }}" }

[sinks.s3_archive]
type = "aws_s3"
inputs = ["logs_processing"]
bucket = "my-app-logs-archive"
key_prefix = "logs/%Y/%m/%d/"
encoding.codec = "json"
compression = "gzip"
batch.max_events = 1000

2. Custom Vector Transformations

# Custom transformations for Java application logs

[transforms.java_logs_enrich]
type = "remap"
inputs = ["java_app_logs"]
source = '''
# Parse the JSON message
parsed = parse_json!(string!(.message))
. = merge!(., parsed)
# Extract trace context for correlation
if exists(.trace_id) && exists(.span_id) {
.vector_trace_id = .trace_id
.vector_span_id = .span_id
}
# Add business context
if exists(.custom_fields) {
.user_id = .custom_fields.user_id
.session_id = .custom_fields.session_id
.request_id = .custom_fields.request_id
}
# Classify log levels for routing
.log_level_class = if .level == "ERROR" || .level == "FATAL" {
"critical"
} else if .level == "WARN" {
"warning"
} else {
"info"
}
# Track log size for metrics
.log_size_bytes = length!(.message)
'''

[transforms.java_logs_route]
type = "route"
inputs = ["java_logs_enrich"]
route.app_errors = '.level == "ERROR" || .level == "FATAL"'
route.user_activity = '.logger == "com.example.app.UserService"'
route.performance = 'contains(string!(.message), "duration") || contains(string!(.message), "performance")'

# Different sinks for different log types

# Vector has no dedicated Slack sink; post to the webhook with the generic
# http sink (a remap would be needed to shape the payload Slack expects)
[sinks.critical_logs]
type = "http"
inputs = ["java_logs_route.app_errors"]
uri = "${SLACK_WEBHOOK_URL}"
encoding.codec = "json"

[sinks.user_activity_loki]
type = "loki"
inputs = ["java_logs_route.user_activity"]
endpoint = "http://loki:3100"
encoding.codec = "json"
labels = { "job" = "user-activity", "environment" = "production" }

# prometheus_remote_write expects metric events, so convert matching logs first
[transforms.performance_to_metric]
type = "log_to_metric"
inputs = ["java_logs_route.performance"]

[[transforms.performance_to_metric.metrics]]
type = "counter"
field = "log_size_bytes"
name = "performance_log_events_total"

[sinks.performance_metrics]
type = "prometheus_remote_write"
inputs = ["performance_to_metric"]
endpoint = "http://prometheus:9090/api/v1/write"
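The level-classification branch in the java_logs_enrich transform above maps cleanly to plain Java; a JDK-only equivalent of that VRL expression:

```java
public class LogLevelClass {
    // Mirrors the VRL routing classification: ERROR/FATAL → critical, WARN → warning, else info
    public static String classify(String level) {
        switch (level) {
            case "ERROR":
            case "FATAL":
                return "critical";
            case "WARN":
                return "warning";
            default:
                return "info";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify("ERROR")); // prints critical
        System.out.println(classify("INFO"));  // prints info
    }
}
```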

Monitoring and Alerting

1. Custom Health Checks

@Component
@Slf4j
public class VectorHealthIndicator implements HealthIndicator {
private final VectorLoggingService loggingService;
private final VectorMetricsService metricsService;
public VectorHealthIndicator(VectorLoggingService loggingService,
VectorMetricsService metricsService) {
this.loggingService = loggingService;
this.metricsService = metricsService;
}
@Override
public Health health() {
Health.Builder status = Health.up();
try {
// Test logging functionality
testLogging();
status.withDetail("logging", "OPERATIONAL");
// Test metrics functionality  
testMetrics();
status.withDetail("metrics", "OPERATIONAL");
// Check system resources
checkSystemResources(status);
} catch (Exception e) {
status.down()
.withDetail("error", e.getMessage())
.withException(e);
}
return status.build();
}
private void testLogging() {
String testId = UUID.randomUUID().toString();
loggingService.info("Health check test message", Map.of(
"health_check_id", testId,
"component", "vector_health_check"
));
}
private void testMetrics() {
metricsService.incrementCounter("health_check_requests");
metricsService.recordHistogram("health_check_duration", 1.0);
}
private void checkSystemResources(Health.Builder status) {
Runtime runtime = Runtime.getRuntime();
long usedMemory = runtime.totalMemory() - runtime.freeMemory();
long maxMemory = runtime.maxMemory();
double memoryUsage = (double) usedMemory / maxMemory * 100;
status.withDetail("memory_used_mb", usedMemory / (1024 * 1024))
.withDetail("memory_max_mb", maxMemory / (1024 * 1024))
.withDetail("memory_usage_percent", Math.round(memoryUsage * 100.0) / 100.0);
if (memoryUsage > 90) {
status.withDetail("memory_warning", "High memory usage detected");
}
}
}
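The memory computation in checkSystemResources can be exercised on its own; a JDK-only sketch of the same arithmetic (used heap = committed total minus free, expressed as a percentage of the maximum the JVM may grow to):

```java
public class HeapUsage {
    // Same calculation as the health indicator's checkSystemResources
    public static double heapUsagePercent() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (double) used / rt.maxMemory() * 100.0;
    }

    public static void main(String[] args) {
        System.out.printf("heap usage: %.2f%%%n", heapUsagePercent());
    }
}
```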

Conclusion

Vector provides a comprehensive observability solution for Java applications with:

  1. Unified Data Pipeline - Single tool for logs, metrics, and traces
  2. High Performance - Built in Rust for optimal resource usage
  3. Flexible Routing - Route data to multiple backends based on content
  4. Rich Transformations - Powerful data enrichment and filtering capabilities
  5. Java Integration - Seamless integration with Spring Boot, Micrometer, and OpenTelemetry

By implementing Vector with the patterns shown above, Java applications can achieve comprehensive observability with minimal performance overhead and maximum flexibility in data routing and processing.
