Logstash Logback Encoder provides structured JSON logging that integrates seamlessly with the ELK stack (Elasticsearch, Logstash, Kibana). This guide walks through a comprehensive implementation of structured logging in Java applications.
Why Use Logstash Logback Encoder?
- Structured Logging: JSON format for easy parsing and analysis
- ELK Stack Integration: Direct compatibility with Elasticsearch
- Rich Context: Include MDC, markers, and custom fields
- Performance: Efficient logging with minimal overhead
- Centralized Logging: Perfect for microservices and distributed systems
Prerequisites
- Java 8+ with SLF4J/Logback
- Maven/Gradle for dependency management
- ELK Stack (optional, for log analysis)
Step 1: Project Dependencies
Maven (pom.xml):
<dependencies>
<!-- Logstash Logback Encoder -->
<dependency>
<groupId>net.logstash.logback</groupId>
<artifactId>logstash-logback-encoder</artifactId>
<version>7.3</version>
</dependency>
<!-- Spring Boot Starter Logging -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-logging</artifactId>
<version>2.7.0</version>
</dependency>
<!-- SLF4J API (the 1.7.x line matches the Logback 1.2 shipped with Spring Boot 2.7; SLF4J 2.0.x would not bind) -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-api</artifactId>
<version>1.7.36</version>
</dependency>
<!-- Jackson for JSON -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>2.15.2</version>
</dependency>
<!-- Micrometer for metrics -->
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-core</artifactId>
<version>1.10.5</version>
</dependency>
<!-- Spring Boot Web (optional) -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<version>2.7.0</version>
</dependency>
</dependencies>
Step 2: Logback Configuration
logback-spring.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<include resource="org/springframework/boot/logging/logback/defaults.xml"/>
<include resource="org/springframework/boot/logging/logback/console-appender.xml"/>
<!-- Custom JSON Fields -->
<springProperty scope="context" name="app_name" source="spring.application.name" defaultValue="unknown-app"/>
<springProperty scope="context" name="app_version" source="app.version" defaultValue="1.0.0"/>
<springProperty scope="context" name="environment" source="spring.profiles.active" defaultValue="local"/>
<!-- Console Appender with Logstash Encoder -->
<appender name="CONSOLE_JSON" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<!-- Timestamp -->
<timestamp>
<timeZone>UTC</timeZone>
<fieldName>timestamp</fieldName>
</timestamp>
<!-- Log Level -->
<logLevel>
<fieldName>level</fieldName>
</logLevel>
<!-- Logger Name -->
<loggerName>
<fieldName>logger</fieldName>
<shortenedLoggerNameLength>36</shortenedLoggerNameLength>
</loggerName>
<!-- Message -->
<message>
<fieldName>message</fieldName>
</message>
<!-- Thread Name -->
<threadName>
<fieldName>thread</fieldName>
</threadName>
<!-- MDC Fields -->
<mdc>
<includeMdcKeyName>correlationId</includeMdcKeyName>
<includeMdcKeyName>userId</includeMdcKeyName>
<includeMdcKeyName>sessionId</includeMdcKeyName>
<includeMdcKeyName>requestId</includeMdcKeyName>
<includeMdcKeyName>serviceName</includeMdcKeyName>
<includeMdcKeyName>operation</includeMdcKeyName>
</mdc>
<!-- Stack Trace -->
<stackTrace>
<fieldName>stack_trace</fieldName>
</stackTrace>
<!-- Log Context -->
<context/>
<!-- Custom Fields -->
<arguments>
<includeNonStructuredArguments>false</includeNonStructuredArguments>
</arguments>
<!-- Pattern -->
<pattern>
<pattern>
{
"app": "${app_name}",
"version": "${app_version}",
"environment": "${environment}",
"host": "${HOSTNAME:-unknown}"
}
</pattern>
</pattern>
</providers>
</encoder>
</appender>
<!-- File Appender with Logstash Encoder -->
<appender name="FILE_JSON" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>logs/${app_name}.json</file>
<rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
<fileNamePattern>logs/${app_name}.%d{yyyy-MM-dd}.json.gz</fileNamePattern>
<maxHistory>30</maxHistory>
<totalSizeCap>3GB</totalSizeCap>
</rollingPolicy>
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<timestamp>
<timeZone>UTC</timeZone>
<fieldName>@timestamp</fieldName>
</timestamp>
<logLevel>
<fieldName>level</fieldName>
</logLevel>
<loggerName>
<fieldName>logger</fieldName>
</loggerName>
<message>
<fieldName>message</fieldName>
</message>
<threadName>
<fieldName>thread</fieldName>
</threadName>
<mdc/>
<stackTrace>
<fieldName>stack_trace</fieldName>
</stackTrace>
<pattern>
<pattern>
{
"app": "${app_name}",
"version": "${app_version}",
"environment": "${environment}",
"host": "${HOSTNAME:-unknown}",
"type": "application"
}
</pattern>
</pattern>
</providers>
</encoder>
</appender>
<!-- Logstash TCP Appender for direct Logstash integration -->
<appender name="LOGSTASH_TCP" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
<destination>localhost:5000</destination>
<reconnectionDelay>10000</reconnectionDelay>
<keepAliveDuration>5 minutes</keepAliveDuration>
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<timestamp>
<timeZone>UTC</timeZone>
</timestamp>
<version/>
<logLevel/>
<loggerName/>
<message/>
<threadName/>
<mdc/>
<stackTrace/>
<pattern>
<pattern>
{
"app": "${app_name}",
"version": "${app_version}",
"environment": "${environment}",
"host": "${HOSTNAME:-unknown}"
}
</pattern>
</pattern>
</providers>
</encoder>
</appender>
<!-- Async Appender for better performance -->
<appender name="ASYNC_LOGSTASH" class="ch.qos.logback.classic.AsyncAppender">
<appender-ref ref="LOGSTASH_TCP"/>
<queueSize>8192</queueSize>
<discardingThreshold>0</discardingThreshold>
<!-- Caller data (class/method/line) is expensive to compute; keep it off in async appenders -->
<includeCallerData>false</includeCallerData>
<neverBlock>true</neverBlock>
</appender>
<!-- Root Logger Configuration -->
<root level="INFO">
<!-- Use CONSOLE for development -->
<appender-ref ref="CONSOLE_JSON"/>
<!-- Use FILE_JSON for production file logging -->
<appender-ref ref="FILE_JSON"/>
<!-- Use LOGSTASH_TCP for direct ELK integration -->
<!-- <appender-ref ref="ASYNC_LOGSTASH"/> -->
</root>
<!-- Application-specific loggers -->
<logger name="com.yourcompany" level="DEBUG" additivity="false">
<appender-ref ref="CONSOLE_JSON"/>
<appender-ref ref="FILE_JSON"/>
</logger>
<!-- Reduce noise from framework logs -->
<logger name="org.springframework" level="INFO"/>
<logger name="org.hibernate" level="WARN"/>
<logger name="org.apache" level="WARN"/>
</configuration>
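With this configuration, each console line is a single JSON object. The plain-JDK sketch below is an illustration, not the encoder itself: it assembles the same top-level fields the CONSOLE_JSON providers emit (field names mirror the configuration above) and renders them as one JSON line.

```java
import java.time.Instant;
import java.util.LinkedHashMap;
import java.util.Map;

public class JsonLogLineSketch {

    // Minimal JSON string escaping; the real encoder delegates this to Jackson.
    static String esc(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    // Render one event the way the composite encoder's providers would:
    // timestamp, level, logger, message, thread, plus the static pattern fields.
    public static String render(String level, String logger, String message, String thread,
                                Map<String, String> staticFields) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("timestamp", Instant.now().toString());
        fields.put("level", level);
        fields.put("logger", logger);
        fields.put("message", message);
        fields.put("thread", thread);
        fields.putAll(staticFields);
        StringBuilder sb = new StringBuilder("{");
        fields.forEach((k, v) -> {
            if (sb.length() > 1) sb.append(',');
            sb.append('"').append(esc(k)).append("\":\"").append(esc(v)).append('"');
        });
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        System.out.println(render("INFO", "c.e.OrderService", "Order processed", "main",
                Map.of("app", "demo-app")));
    }
}
```

One JSON object per line is exactly what Logstash's `json` codec and Filebeat's JSON parsing expect, which is why the file appender above writes `.json` files in the same shape.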
Step 3: Structured Logger Utility
@Component
@Slf4j
public class StructuredLogger {
private final ObjectMapper objectMapper;
private final String applicationName;
private final String environment;
public StructuredLogger(ObjectMapper objectMapper,
@Value("${spring.application.name:unknown}") String applicationName,
@Value("${spring.profiles.active:local}") String environment) {
this.objectMapper = objectMapper;
this.applicationName = applicationName;
this.environment = environment;
}
// Basic structured logging methods
public void info(String message, Map<String, Object> fields) {
log.info(createStructuredMessage(message, fields));
}
public void warn(String message, Map<String, Object> fields) {
log.warn(createStructuredMessage(message, fields));
}
public void error(String message, Map<String, Object> fields) {
log.error(createStructuredMessage(message, fields));
}
public void error(String message, Throwable throwable, Map<String, Object> fields) {
log.error(createStructuredMessage(message, fields), throwable);
}
public void debug(String message, Map<String, Object> fields) {
log.debug(createStructuredMessage(message, fields));
}
// Business event logging
public void logBusinessEvent(String eventType, String eventId, Map<String, Object> eventData) {
Map<String, Object> fields = new HashMap<>();
fields.put("event_type", eventType);
fields.put("event_id", eventId);
fields.put("event_timestamp", Instant.now().toString());
fields.put("event_data", eventData);
log.info("Business event: {}", createStructuredMessage(eventType, fields));
}
// Performance logging
public void logPerformance(String operation, long durationMs, Map<String, Object> context) {
Map<String, Object> fields = new HashMap<>();
fields.put("operation", operation);
fields.put("duration_ms", durationMs);
fields.put("performance_type", "operation_timing");
fields.putAll(context);
log.info("Performance measurement: {}", createStructuredMessage(operation, fields));
}
// Audit logging
public void logAuditEvent(String action, String resource, String userId, Map<String, Object> details) {
Map<String, Object> fields = new HashMap<>();
fields.put("audit_action", action);
fields.put("audit_resource", resource);
fields.put("audit_user_id", userId);
fields.put("audit_timestamp", Instant.now().toString());
fields.put("audit_details", details);
log.info("Audit event: {}", createStructuredMessage(action, fields));
}
// HTTP request logging
public void logHttpRequest(String method, String path, int status, long durationMs,
String correlationId, Map<String, Object> additionalFields) {
Map<String, Object> fields = new HashMap<>();
fields.put("http_method", method);
fields.put("http_path", path);
fields.put("http_status", status);
fields.put("http_duration_ms", durationMs);
fields.put("correlation_id", correlationId);
fields.put("log_type", "http_request");
fields.putAll(additionalFields);
String message = String.format("%s %s %d %dms", method, path, status, durationMs);
log.info("HTTP request: {}", createStructuredMessage(message, fields));
}
// Database query logging
public void logDatabaseQuery(String queryType, String table, long durationMs,
boolean success, Map<String, Object> queryContext) {
Map<String, Object> fields = new HashMap<>();
fields.put("db_operation", queryType);
fields.put("db_table", table);
fields.put("db_duration_ms", durationMs);
fields.put("db_success", success);
fields.put("log_type", "database_query");
fields.putAll(queryContext);
String message = String.format("DB %s on %s took %dms", queryType, table, durationMs);
log.info("Database operation: {}", createStructuredMessage(message, fields));
}
// Custom marker for important logs
public void logWithMarker(String markerName, String message, Map<String, Object> fields) {
Marker marker = MarkerFactory.getMarker(markerName);
log.info(marker, createStructuredMessage(message, fields));
}
private String createStructuredMessage(String message, Map<String, Object> fields) {
try {
Map<String, Object> logEntry = new HashMap<>();
logEntry.put("message", message);
logEntry.putAll(fields);
// Add common fields
logEntry.put("application", applicationName);
logEntry.put("environment", environment);
logEntry.put("timestamp", Instant.now().toString());
return objectMapper.writeValueAsString(logEntry);
} catch (JsonProcessingException e) {
// Fallback to simple message if JSON serialization fails
log.warn("Failed to create structured log message: {}", e.getMessage());
return message + " [fields: " + fields + "]";
}
}
// MDC-based logging with context
// MDC-based logging with context (saves and restores any pre-existing values,
// so nested withContext calls do not clobber the outer context)
public void withContext(Map<String, String> context, Runnable loggingOperation) {
Map<String, String> previous = new HashMap<>();
context.forEach((key, value) -> {
previous.put(key, MDC.get(key));
MDC.put(key, value);
});
try {
loggingOperation.run();
} finally {
previous.forEach((key, value) -> {
if (value == null) MDC.remove(key); else MDC.put(key, value);
});
}
}
public <T> T withContext(Map<String, String> context, Supplier<T> loggingOperation) {
Map<String, String> previous = new HashMap<>();
context.forEach((key, value) -> {
previous.put(key, MDC.get(key));
MDC.put(key, value);
});
try {
return loggingOperation.get();
} finally {
previous.forEach((key, value) -> {
if (value == null) MDC.remove(key); else MDC.put(key, value);
});
}
}
}
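The withContext helpers rely on MDC's thread-local storage. Their behavior can be illustrated without SLF4J using a plain ThreadLocal map standing in for MDC (the class and method names here are illustrative only): set the keys, run the operation, and undo the changes in a finally block so the context never leaks into later work on the same thread.

```java
import java.util.HashMap;
import java.util.Map;

public class ContextSketch {

    // Thread-local map standing in for org.slf4j.MDC in this sketch.
    private static final ThreadLocal<Map<String, String>> CTX =
            ThreadLocal.withInitial(HashMap::new);

    public static String get(String key) { return CTX.get().get(key); }

    // Same shape as StructuredLogger.withContext: set keys, run the operation,
    // then clean up in finally so failures cannot leave stale context behind.
    public static void withContext(Map<String, String> context, Runnable op) {
        context.forEach((k, v) -> CTX.get().put(k, v));
        try {
            op.run();
        } finally {
            context.keySet().forEach(k -> CTX.get().remove(k));
        }
    }

    public static void main(String[] args) {
        withContext(Map.of("correlationId", "abc-123"),
                () -> System.out.println("inside: " + get("correlationId")));
        System.out.println("after: " + get("correlationId")); // null after cleanup
    }
}
```

Because the cleanup runs in finally, an exception thrown by the operation still clears the context, which is the property that makes this pattern safe on pooled threads.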
Step 4: Advanced Logstash Encoder Configuration
logback-advanced.xml:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
<springProperty scope="context" name="app_name" source="spring.application.name" defaultValue="unknown"/>
<springProperty scope="context" name="app_version" source="app.version" defaultValue="1.0.0"/>
<springProperty scope="context" name="environment" source="spring.profiles.active" defaultValue="local"/>
<!-- Note: Logback's configuration format has no top-level <bean> element.
JSON providers (pattern fields, MDC filtering, arguments) are configured
inside the encoder's <providers> section, as shown in the appender below. -->
<!-- Custom Appender with Enhanced Encoder -->
<appender name="ENHANCED_JSON" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
<providers>
<!-- Standard providers -->
<timestamp>
<timeZone>UTC</timeZone>
<fieldName>@timestamp</fieldName>
</timestamp>
<logLevel>
<fieldName>level</fieldName>
</logLevel>
<loggerName>
<fieldName>logger_name</fieldName>
<shortenedLoggerNameLength>40</shortenedLoggerNameLength>
</loggerName>
<message>
<fieldName>message</fieldName>
</message>
<threadName>
<fieldName>thread_name</fieldName>
</threadName>
<!-- Stack trace with filtering -->
<stackTrace>
<fieldName>exception</fieldName>
<throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
<maxDepthPerThrowable>30</maxDepthPerThrowable>
<maxLength>2048</maxLength>
<shortenedClassNameLength>20</shortenedClassNameLength>
<rootCauseFirst>true</rootCauseFirst>
<exclude>sun\.reflect\..*\.invoke</exclude>
<exclude>net\.sf\.cglib\.proxy\.MethodProxy\.invoke</exclude>
</throwableConverter>
</stackTrace>
<!-- Context name -->
<contextName>
<fieldName>log_context</fieldName>
</contextName>
<!-- Custom pattern for static fields -->
<pattern>
<pattern>
{
"service": "${app_name}",
"version": "${app_version}",
"env": "${environment}",
"host": "${HOSTNAME:-unknown}",
"pid": "${PID:-unknown}"
}
</pattern>
</pattern>
<!-- Allow-listed fields from MDC (the provider element is <mdc>) -->
<mdc>
<includeMdcKeyName>correlationId</includeMdcKeyName>
<includeMdcKeyName>userId</includeMdcKeyName>
<includeMdcKeyName>sessionId</includeMdcKeyName>
<includeMdcKeyName>requestId</includeMdcKeyName>
</mdc>
<!-- Logstash version -->
<version/>
<!-- Custom JSON provider for arguments -->
<arguments/>
</providers>
</encoder>
</appender>
<!-- Root logger with enhanced configuration -->
<root level="INFO">
<appender-ref ref="ENHANCED_JSON"/>
</root>
</configuration>
Step 5: Custom Logstash Encoder with Business Context
@Component
public class BusinessContextLogstashEncoder extends LoggingEventCompositeJsonEncoder {
private final String applicationName;
private final String environment;
private final ObjectMapper objectMapper;
public BusinessContextLogstashEncoder(
@Value("${spring.application.name:unknown}") String applicationName,
@Value("${spring.profiles.active:local}") String environment,
ObjectMapper objectMapper) {
this.applicationName = applicationName;
this.environment = environment;
this.objectMapper = objectMapper;
configureProviders();
}
private void configureProviders() {
List<JsonProvider<ILoggingEvent>> providers = new ArrayList<>();
// Timestamp
LoggingEventFormattedTimestampJsonProvider timestampProvider =
new LoggingEventFormattedTimestampJsonProvider();
timestampProvider.setTimeZone("UTC");
timestampProvider.setFieldName("@timestamp");
providers.add(timestampProvider);
// Log level
LogLevelJsonProvider levelProvider = new LogLevelJsonProvider();
levelProvider.setFieldName("level");
providers.add(levelProvider);
// Logger name
LoggerNameJsonProvider loggerProvider = new LoggerNameJsonProvider();
loggerProvider.setFieldName("logger_name");
loggerProvider.setShortenedLoggerNameLength(36);
providers.add(loggerProvider);
// Message
MessageJsonProvider messageProvider = new MessageJsonProvider();
messageProvider.setFieldName("message");
providers.add(messageProvider);
// Thread name
ThreadNameJsonProvider threadProvider = new ThreadNameJsonProvider();
threadProvider.setFieldName("thread_name");
providers.add(threadProvider);
// MDC with business context (addIncludeMdcKeyName appends to the allow-list;
// there is no repeatable setIncludeMdcKeyName setter)
MdcJsonProvider mdcProvider = new MdcJsonProvider();
mdcProvider.addIncludeMdcKeyName("correlationId");
mdcProvider.addIncludeMdcKeyName("userId");
mdcProvider.addIncludeMdcKeyName("sessionId");
mdcProvider.addIncludeMdcKeyName("requestId");
mdcProvider.addIncludeMdcKeyName("operation");
mdcProvider.addIncludeMdcKeyName("tenantId");
providers.add(mdcProvider);
// Stack trace
StackTraceJsonProvider stackTraceProvider = new StackTraceJsonProvider();
stackTraceProvider.setFieldName("exception");
stackTraceProvider.setThrowableConverter(createThrowableConverter());
providers.add(stackTraceProvider);
// Custom business context provider
providers.add(createBusinessContextProvider());
// Register each provider with the composite encoder
providers.forEach(p -> getProviders().addProvider(p));
}
private ShortenedThrowableConverter createThrowableConverter() {
ShortenedThrowableConverter converter = new ShortenedThrowableConverter();
converter.setMaxDepthPerThrowable(20);
converter.setMaxLength(2048);
converter.setShortenedClassNameLength(20);
converter.setRootCauseFirst(true);
converter.addExclude("sun.reflect\\..*\\.invoke");
converter.addExclude("net.sf.cglib.proxy.MethodProxy.invoke");
return converter;
}
private JsonProvider<ILoggingEvent> createBusinessContextProvider() {
return new JsonProvider<ILoggingEvent>() {
@Override
public void writeTo(JsonGenerator generator, ILoggingEvent event) throws IOException {
generator.writeObjectFieldStart("business_context");
generator.writeStringField("application", applicationName);
generator.writeStringField("environment", environment);
generator.writeStringField("host", getHostName());
// Extract business fields from MDC
Map<String, String> mdc = event.getMDCPropertyMap();
if (mdc != null) {
mdc.forEach((key, value) -> {
if (key.startsWith("business_")) {
try {
generator.writeStringField(key, value);
} catch (IOException e) {
// Ignore field if writing fails
}
}
});
}
generator.writeEndObject();
}
@Override
public void prepareForDeferredProcessing(ILoggingEvent event) {
// No preparation needed
}
private String getHostName() {
try {
return InetAddress.getLocalHost().getHostName();
} catch (UnknownHostException e) {
return "unknown";
}
}
};
}
}
// Custom appender using the business context encoder
public class BusinessContextAppender extends ConsoleAppender<ILoggingEvent> {
private final BusinessContextLogstashEncoder encoder;
public BusinessContextAppender(BusinessContextLogstashEncoder encoder) {
this.encoder = encoder;
setEncoder(encoder);
}
}
Step 6: Logging Configuration Properties
application.yml:
logging:
  config: classpath:logback-spring.xml
  level:
    com.yourcompany: DEBUG
    org.springframework: INFO
    org.hibernate: WARN
    org.apache: WARN
  # File appender (standard Spring Boot 2.x keys)
  file:
    name: logs/${spring.application.name}.json
  logback:
    rollingpolicy:
      max-history: 30
      max-file-size: 100MB
      total-size-cap: 3GB

# Logstash configuration (custom keys, read by your own configuration code)
logstash:
  enabled: true
  host: localhost
  port: 5000
  queue-size: 8192
  include-mdc: true
  include-context: true

# Application properties for logging context
app:
  name: ${spring.application.name}
  version: 1.0.0
  environment: ${spring.profiles.active:local}

# Custom logging properties
custom:
  logging:
    structured: true
    include-hostname: true
    include-thread: true
    mdc-fields:
      - correlationId
      - userId
      - sessionId
      - requestId
      - operation
      - tenantId
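The mdc-fields list mirrors the `<includeMdcKeyName>` entries in the Logback configuration: both are simple allow-lists. The filtering they describe can be sketched with plain collections (in production the real work happens inside MdcJsonProvider; this standalone class is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class MdcFilterSketch {

    // Keep only allow-listed MDC keys, preserving their order of appearance,
    // the way <includeMdcKeyName> restricts what the MDC provider emits.
    public static Map<String, String> filter(Map<String, String> mdc, Set<String> allowed) {
        Map<String, String> out = new LinkedHashMap<>();
        mdc.forEach((k, v) -> { if (allowed.contains(k)) out.put(k, v); });
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> mdc = new LinkedHashMap<>();
        mdc.put("correlationId", "abc");
        mdc.put("password", "secret"); // never allow-listed
        mdc.put("userId", "u1");
        System.out.println(filter(mdc, Set.of("correlationId", "userId")));
        // {correlationId=abc, userId=u1}
    }
}
```

Allow-listing beats block-listing here: a key added to MDC by a library you did not audit stays out of your logs unless you opt it in.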
Step 7: Usage Examples and Best Practices
@Service
@Slf4j
public class OrderService {
private final StructuredLogger structuredLogger;
private final ObjectMapper objectMapper;
public OrderService(StructuredLogger structuredLogger, ObjectMapper objectMapper) {
this.structuredLogger = structuredLogger;
this.objectMapper = objectMapper;
}
public Order processOrder(Order order) {
long startTime = System.currentTimeMillis();
try {
// Set business context in MDC
MDC.put("operation", "processOrder");
MDC.put("orderId", order.getId());
MDC.put("customerId", order.getCustomerId());
// Log business event
Map<String, Object> eventFields = new HashMap<>();
eventFields.put("order_amount", order.getAmount());
eventFields.put("order_currency", order.getCurrency());
eventFields.put("customer_tier", order.getCustomerTier());
structuredLogger.logBusinessEvent("ORDER_PROCESSING_STARTED",
order.getId(), eventFields);
// Process order
Order processedOrder = validateAndProcess(order);
// Log success
Map<String, Object> successFields = new HashMap<>();
successFields.put("processing_time_ms", System.currentTimeMillis() - startTime);
successFields.put("final_status", processedOrder.getStatus());
structuredLogger.logBusinessEvent("ORDER_PROCESSING_COMPLETED",
order.getId(), successFields);
return processedOrder;
} catch (Exception e) {
// Log error with context
Map<String, Object> errorFields = new HashMap<>();
errorFields.put("processing_time_ms", System.currentTimeMillis() - startTime);
errorFields.put("error_type", e.getClass().getSimpleName());
errorFields.put("error_message", e.getMessage());
structuredLogger.error("Order processing failed", e, errorFields);
throw new OrderProcessingException("Failed to process order", e);
} finally {
// Clean up MDC
MDC.remove("operation");
MDC.remove("orderId");
MDC.remove("customerId");
}
}
public void batchProcessOrders(List<Order> orders) {
structuredLogger.withContext(Map.of(
"operation", "batchProcessOrders",
"batch_size", String.valueOf(orders.size())
), () -> {
long startTime = System.currentTimeMillis();
try {
log.info("Starting batch processing of {} orders", orders.size());
orders.forEach(this::processOrder);
Map<String, Object> performanceFields = new HashMap<>();
performanceFields.put("total_orders", orders.size());
performanceFields.put("total_time_ms", System.currentTimeMillis() - startTime);
performanceFields.put("avg_time_per_order",
(System.currentTimeMillis() - startTime) / (double) orders.size());
structuredLogger.logPerformance("batch_order_processing",
System.currentTimeMillis() - startTime, performanceFields);
} catch (Exception e) {
structuredLogger.error("Batch processing failed", e, Map.of(
"processed_orders", orders.size(),
"total_time_ms", System.currentTimeMillis() - startTime
));
throw e;
}
});
}
// HTTP request logging example
public void logHttpCall(String method, String url, int status, long duration,
String correlationId, Map<String, Object> response) {
Map<String, Object> httpFields = new HashMap<>();
httpFields.put("http_url", url);
httpFields.put("http_response_size", calculateResponseSize(response));
httpFields.put("http_user_agent", getCurrentUserAgent());
structuredLogger.logHttpRequest(method, url, status, duration,
correlationId, httpFields);
}
// Database operation logging
public void logDatabaseOperation(String operation, String table,
long duration, boolean success) {
Map<String, Object> dbFields = new HashMap<>();
dbFields.put("db_connection_pool", getConnectionPoolStats());
dbFields.put("db_query_complexity", "medium"); // low, medium, high
structuredLogger.logDatabaseQuery(operation, table, duration, success, dbFields);
}
// Audit logging for security events
public void logSecurityEvent(String action, String resource, String user,
boolean success, String reason) {
Map<String, Object> auditFields = new HashMap<>();
auditFields.put("audit_success", success);
auditFields.put("audit_reason", reason);
auditFields.put("audit_ip", getClientIp());
auditFields.put("audit_user_agent", getUserAgent());
structuredLogger.logAuditEvent(action, resource, user, auditFields);
}
private Order validateAndProcess(Order order) {
// Validation and processing logic
return order;
}
private int calculateResponseSize(Map<String, Object> response) {
try {
return objectMapper.writeValueAsBytes(response).length;
} catch (JsonProcessingException e) {
return -1;
}
}
private String getCurrentUserAgent() {
// Implementation to get user agent from current context
return "unknown";
}
private String getConnectionPoolStats() {
// Implementation to get connection pool statistics
return "default";
}
private String getClientIp() {
// Implementation to get client IP
return "unknown";
}
private String getUserAgent() {
// Implementation to get user agent
return "unknown";
}
}
// Custom exception with structured logging support
public class OrderProcessingException extends RuntimeException {
private final String orderId;
private final String customerId;
private final Map<String, Object> context;
// Convenience constructor matching call sites that only have a message and cause
public OrderProcessingException(String message, Throwable cause) {
this(message, null, null, null, cause);
}
public OrderProcessingException(String message, String orderId, String customerId,
Map<String, Object> context, Throwable cause) {
super(message, cause);
this.orderId = orderId;
this.customerId = customerId;
this.context = context != null ? new HashMap<>(context) : new HashMap<>();
}
public String getOrderId() { return orderId; }
public String getCustomerId() { return customerId; }
public Map<String, Object> getContext() { return new HashMap<>(context); }
public void logStructured(StructuredLogger logger) {
Map<String, Object> fields = new HashMap<>();
fields.put("order_id", orderId);
fields.put("customer_id", customerId);
fields.put("exception_type", getClass().getSimpleName());
fields.putAll(context);
logger.error(getMessage(), this, fields);
}
}
Step 8: Testing Structured Logging
@SpringBootTest
@Slf4j
class StructuredLoggingTest {
@Autowired
private StructuredLogger structuredLogger;
@Test
void testStructuredLogging() {
Map<String, Object> fields = new HashMap<>();
fields.put("user_id", "user123");
fields.put("action", "login");
fields.put("success", true);
fields.put("duration_ms", 150);
// This should produce structured JSON log
structuredLogger.info("User action completed", fields);
}
@Test
void testMDCContext() {
// Set MDC context
MDC.put("correlationId", "test-correlation-123");
MDC.put("userId", "test-user");
try {
log.info("This log should include MDC fields");
// Test with structured logger
structuredLogger.info("Structured log with MDC", Map.of("test_field", "value"));
} finally {
MDC.clear();
}
}
@Test
void testErrorLogging() {
Exception testException = new RuntimeException("Test error");
Map<String, Object> errorFields = Map.of(
"component", "test",
"operation", "testOperation",
"input_data", "test input"
);
structuredLogger.error("Test error occurred", testException, errorFields);
}
@Test
void testPerformanceLogging() {
long startTime = System.currentTimeMillis();
// Simulate some work
try {
Thread.sleep(100);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
structuredLogger.logPerformance("test_operation",
System.currentTimeMillis() - startTime,
Map.of("iterations", 10, "data_size", "1KB"));
}
}
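When pulling in a Logback ListAppender is not an option, console JSON output can still be asserted by temporarily redirecting System.out. This is a JDK-only sketch (the class name is illustrative); in real test suites, Logback's ListAppender is usually the cleaner choice.

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.nio.charset.StandardCharsets;

public class ConsoleCaptureSketch {

    // Run an action while System.out is redirected, and return what it printed.
    public static String captureStdout(Runnable action) {
        PrintStream original = System.out;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try {
            System.setOut(new PrintStream(buffer, true, StandardCharsets.UTF_8));
            action.run();
        } finally {
            System.setOut(original); // always restore, even if the action throws
        }
        return buffer.toString(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String out = captureStdout(() ->
                System.out.println("{\"level\":\"INFO\",\"message\":\"hello\"}"));
        System.out.println("captured: " + out.trim());
    }
}
```

A test can then parse the captured line as JSON and assert on individual fields rather than string-matching the whole log message.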
Step 9: Log Analysis and Monitoring
@Component
@Slf4j
public class LogAnalysisService {
private final StructuredLogger structuredLogger;
public LogAnalysisService(StructuredLogger structuredLogger) {
this.structuredLogger = structuredLogger;
}
public void analyzeLogPatterns() {
// Log analysis patterns for monitoring
Map<String, Object> analysisFields = new HashMap<>();
analysisFields.put("analysis_type", "error_patterns");
analysisFields.put("time_window", "last_1_hour");
analysisFields.put("total_errors", 42);
analysisFields.put("most_common_error", "NullPointerException");
structuredLogger.info("Log analysis completed", analysisFields);
}
public void logSystemHealth(Map<String, Object> healthMetrics) {
Map<String, Object> healthFields = new HashMap<>(healthMetrics);
healthFields.put("log_type", "system_health");
healthFields.put("timestamp", Instant.now().toString());
structuredLogger.info("System health check", healthFields);
}
public void logSecurityIncident(String incidentType, String severity,
Map<String, Object> incidentDetails) {
Map<String, Object> securityFields = new HashMap<>();
securityFields.put("incident_type", incidentType);
securityFields.put("severity", severity);
securityFields.put("incident_details", incidentDetails);
securityFields.put("log_type", "security_incident");
securityFields.put("requires_attention", true);
structuredLogger.logWithMarker("SECURITY", "Security incident detected", securityFields);
}
}
Key Features Implemented
- Structured JSON Logging: Consistent JSON format for all logs
- MDC Integration: Context propagation across threads and services
- Custom Encoders: Enhanced logging with business context
- Performance Logging: Operation timing and performance metrics
- Audit Logging: Security and compliance event logging
- Error Tracking: Structured error logging with context
- ELK Integration: Direct compatibility with Logstash
- Custom Appenders: File, console, and TCP appenders
Best Practices
- Consistent Field Names: Use consistent naming conventions across services
- Context Enrichment: Include relevant business context in logs
- Performance Considerations: Use async appenders for high-volume logging
- Security: Avoid logging sensitive information
- Monitoring: Set up alerts based on log patterns
- Retention Policies: Configure appropriate log retention periods
- Testing: Test logging configurations in different environments
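The security bullet deserves a concrete guard. Below is a minimal masking helper that can be applied to field maps before they reach StructuredLogger; the class name and the sensitive-key list are illustrative, not part of the encoder, and a real deployment would extend the list per domain.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class FieldMasker {

    // Keys whose values must never appear in logs; extend for your domain.
    private static final Set<String> SENSITIVE =
            Set.of("password", "ssn", "credit_card", "authorization");

    // Return a copy with sensitive values replaced, leaving the input untouched.
    public static Map<String, Object> mask(Map<String, Object> fields) {
        Map<String, Object> out = new LinkedHashMap<>();
        fields.forEach((k, v) ->
                out.put(k, SENSITIVE.contains(k.toLowerCase()) ? "***REDACTED***" : v));
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> fields = new LinkedHashMap<>();
        fields.put("user_id", "u42");
        fields.put("password", "hunter2");
        System.out.println(mask(fields)); // {user_id=u42, password=***REDACTED***}
    }
}
```

Masking at the call site like this complements, rather than replaces, downstream redaction in Logstash filters: defense in depth matters because logs often outlive the access controls of the service that wrote them.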
This comprehensive Logstash Logback Encoder implementation provides a robust foundation for structured logging in Java applications, enabling effective log analysis, monitoring, and debugging in distributed systems.