This guide walks through deploying and managing the Elastic Stack (Elasticsearch, Logstash, Kibana) on Kubernetes, and interacting with it from Java applications.
Project Setup and Dependencies
1. Maven Dependencies
<dependencies>
    <!-- Elasticsearch clients. The high-level REST client is deprecated as of
         Elasticsearch 7.15 in favor of the Java API Client, but it still works
         against 7.17.x clusters. -->
    <dependency>
        <groupId>org.elasticsearch.client</groupId>
        <artifactId>elasticsearch-rest-high-level-client</artifactId>
        <version>7.17.0</version>
    </dependency>
    <dependency>
        <groupId>org.elasticsearch.client</groupId>
        <artifactId>elasticsearch-rest-client</artifactId>
        <version>7.17.0</version>
    </dependency>

    <!-- Logstash Logback Encoder -->
    <dependency>
        <groupId>net.logstash.logback</groupId>
        <artifactId>logstash-logback-encoder</artifactId>
        <version>7.4</version>
    </dependency>

    <!-- Kubernetes Client -->
    <dependency>
        <groupId>io.kubernetes</groupId>
        <artifactId>client-java</artifactId>
        <version>18.0.0</version>
    </dependency>

    <!-- Spring Boot -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
        <version>3.1.0</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
        <version>3.1.0</version>
    </dependency>
    <!-- Note: Spring Data Elasticsearch 5.x (pulled in by Spring Boot 3.x)
         targets the newer Java API Client; keep this starter only if you
         use its repository abstraction. -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
        <version>3.1.0</version>
    </dependency>

    <!-- Micrometer for Metrics -->
    <dependency>
        <groupId>io.micrometer</groupId>
        <artifactId>micrometer-core</artifactId>
        <version>1.11.5</version>
    </dependency>
    <dependency>
        <groupId>io.micrometer</groupId>
        <artifactId>micrometer-registry-elastic</artifactId>
        <version>1.11.5</version>
    </dependency>

    <!-- JSON Processing -->
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.15.2</version>
    </dependency>

    <!-- Testing -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <version>3.1.0</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>elasticsearch</artifactId>
        <version>1.18.3</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>junit-jupiter</artifactId>
        <version>1.18.3</version>
        <scope>test</scope>
    </dependency>
</dependencies>
Kubernetes Manifests for Elastic Stack
1. Elasticsearch Deployment
# elasticsearch.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-config
  namespace: elastic-stack
data:
  elasticsearch.yml: |
    # Cluster name and discovery are configured via environment
    # variables on the StatefulSet below.
    network.host: 0.0.0.0
    xpack.security.enabled: false
    xpack.monitoring.collection.enabled: true
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: elastic-stack
  labels:
    app: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "elasticsearch-0.elasticsearch,elasticsearch-1.elasticsearch,elasticsearch-2.elasticsearch"
            - name: cluster.initial_master_nodes
              value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
            - name: ES_JAVA_OPTS
              value: "-Xms2g -Xmx2g"
          ports:
            - containerPort: 9200
              name: http
            - containerPort: 9300
              name: transport
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
            - name: config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
            limits:
              memory: "4Gi"
              cpu: "2"
          readinessProbe:
            httpGet:
              path: /_cluster/health
              port: 9200
            initialDelaySeconds: 30
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /_cluster/health
              port: 9200
            initialDelaySeconds: 30
            periodSeconds: 30
      volumes:
        - name: config
          configMap:
            name: elasticsearch-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: fast-ssd
        resources:
          requests:
            storage: 100Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: elastic-stack
  labels:
    app: elasticsearch
spec:
  clusterIP: None
  selector:
    app: elasticsearch
  ports:
    - port: 9200
      name: http
    - port: 9300
      name: transport
2. Kibana Deployment
# kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: elastic-stack
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.17.0
          env:
            - name: ELASTICSEARCH_HOSTS
              value: "http://elasticsearch:9200"
          ports:
            - containerPort: 5601
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "1"
          readinessProbe:
            httpGet:
              path: /api/status
              port: 5601
            initialDelaySeconds: 30
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /api/status
              port: 5601
            initialDelaySeconds: 30
            periodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: elastic-stack
spec:
  selector:
    app: kibana
  ports:
    - port: 5601
      targetPort: 5601
  type: LoadBalancer
3. Logstash Deployment
# logstash.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: elastic-stack
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
      tcp {
        port => 5000
        codec => json
      }
      http {
        port => 8080
        codec => json
      }
    }
    filter {
      # Add any custom filters here
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
      date {
        match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"]
        index => "logs-%{+YYYY.MM.dd}"
      }
      stdout {
        codec => rubydebug
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: elastic-stack
  labels:
    app: logstash
spec:
  replicas: 2
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.17.0
          env:
            - name: LS_JAVA_OPTS
              value: "-Xmx1g -Xms1g"
          ports:
            - containerPort: 5044
              name: beats
            - containerPort: 5000
              name: tcp
            - containerPort: 8080
              name: http
          volumeMounts:
            - name: config
              mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
            - name: pipeline
              mountPath: /usr/share/logstash/pipeline
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "1"
          readinessProbe:
            tcpSocket:
              port: 9600   # Logstash monitoring API
            initialDelaySeconds: 30
            periodSeconds: 10
      volumes:
        - name: config
          configMap:
            name: logstash-config
        - name: pipeline
          configMap:
            name: logstash-config
            items:
              - key: logstash.conf
                path: logstash.conf
---
apiVersion: v1
kind: Service
metadata:
  name: logstash
  namespace: elastic-stack
spec:
  selector:
    app: logstash
  ports:
    - port: 5044
      name: beats
    - port: 5000
      name: tcp
    - port: 8080
      name: http
  type: ClusterIP
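The output stage above writes to daily indices via `logs-%{+YYYY.MM.dd}`, which Logstash expands from the event's UTC `@timestamp`. When querying from Java it is handy to derive the same daily index name; a small sketch (the class and method names are illustrative, not part of the guide's codebase):

```java
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class LogIndexNames {

    // Mirrors Logstash's %{+YYYY.MM.dd} date pattern.
    private static final DateTimeFormatter DAILY = DateTimeFormatter.ofPattern("yyyy.MM.dd");

    /** Builds the daily index name the "logs-%{+YYYY.MM.dd}" output pattern produces. */
    public static String indexFor(LocalDate day) {
        return "logs-" + DAILY.format(day);
    }

    public static void main(String[] args) {
        System.out.println(indexFor(LocalDate.of(2024, 1, 5)));   // logs-2024.01.05
        System.out.println(indexFor(LocalDate.now(ZoneOffset.UTC))); // today's index, in UTC
    }
}
```

Passing the result to the search methods shown later (e.g. as the `indexName` argument) targets a single day; a wildcard such as `logs-*` spans all days.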
4. Filebeat DaemonSet
# filebeat.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: elastic-stack
data:
  filebeat.yml: |
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
        processors:
          - add_kubernetes_metadata:
              host: ${NODE_NAME}
              matchers:
                - logs_path:
                    logs_path: "/var/log/containers/"
    output.logstash:
      hosts: ["logstash:5044"]
    logging.level: info
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: elastic-stack
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.17.0
          args: ["-c", "/etc/filebeat.yml", "-e"]
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: config
              mountPath: /etc/filebeat.yml
              readOnly: true
              subPath: filebeat.yml
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
          resources:
            requests:
              memory: "100Mi"
              cpu: "100m"
            limits:
              memory: "200Mi"
              cpu: "200m"
      volumes:
        - name: config
          configMap:
            name: filebeat-config
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: elastic-stack
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - watch
      - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: elastic-stack
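With all four manifests in place, a typical rollout looks like this (filenames match the comments at the top of each manifest; the commands assume `kubectl` is already pointed at the target cluster):

```shell
# Create the namespace, then apply the manifests in dependency order.
kubectl create namespace elastic-stack
kubectl apply -f elasticsearch.yaml
kubectl apply -f kibana.yaml
kubectl apply -f logstash.yaml
kubectl apply -f filebeat.yaml

# Wait for the Elasticsearch StatefulSet to become ready.
kubectl -n elastic-stack rollout status statefulset/elasticsearch

# Spot-check cluster health from your workstation.
kubectl -n elastic-stack port-forward svc/elasticsearch 9200:9200 &
sleep 2
curl -s "http://localhost:9200/_cluster/health?pretty"
```

A `green` (or, with a single data node per shard copy, `yellow`) status in the health response means the cluster formed correctly.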
Java Application Configuration
1. Elasticsearch Configuration
// ElasticsearchConfig.java
package com.company.elasticstack.config;

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.repository.config.EnableElasticsearchRepositories;

@Configuration
@EnableElasticsearchRepositories(basePackages = "com.company.elasticstack.repository")
public class ElasticsearchConfig {

    @Value("${elasticsearch.host:localhost}")
    private String elasticsearchHost;

    @Value("${elasticsearch.port:9200}")
    private int elasticsearchPort;

    @Value("${elasticsearch.scheme:http}")
    private String elasticsearchScheme;

    @Value("${elasticsearch.username:}")
    private String username;

    @Value("${elasticsearch.password:}")
    private String password;

    @Bean(destroyMethod = "close")
    public RestHighLevelClient restHighLevelClient() {
        // Register basic-auth credentials only when a username is configured.
        CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        if (username != null && !username.isEmpty()) {
            credentialsProvider.setCredentials(AuthScope.ANY,
                    new UsernamePasswordCredentials(username, password));
        }
        return new RestHighLevelClient(
                RestClient.builder(
                        new HttpHost(elasticsearchHost, elasticsearchPort, elasticsearchScheme))
                        .setRequestConfigCallback(requestConfigBuilder -> requestConfigBuilder
                                .setConnectTimeout(5000)
                                .setSocketTimeout(60000))
                        .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
                                .setDefaultCredentialsProvider(credentialsProvider)
                                .setMaxConnTotal(100)
                                .setMaxConnPerRoute(100)));
    }
}
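The `@Value` placeholders above, and the `springProperty` lookups in `logback-spring.xml` below, read from externalized configuration. A matching `application.yml` could look like this (hostnames are illustrative, using the in-cluster service DNS names from the manifests above):

```yaml
spring:
  application:
    name: elasticstack-app

elasticsearch:
  host: elasticsearch.elastic-stack.svc.cluster.local
  port: 9200
  scheme: http
  username: ""
  password: ""

logging:
  logstash:
    host: logstash.elastic-stack.svc.cluster.local
    port: 5000
```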
// ElasticsearchProperties.java
package com.company.elasticstack.config;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;
@Component
@ConfigurationProperties(prefix = "elasticsearch")
public class ElasticsearchProperties {
private String host = "localhost";
private int port = 9200;
private String scheme = "http";
private String username;
private String password;
private int connectionTimeout = 5000;
private int socketTimeout = 60000;
private int maxConnections = 100;
private int maxConnectionsPerRoute = 100;
// Getters and setters
public String getHost() { return host; }
public void setHost(String host) { this.host = host; }
public int getPort() { return port; }
public void setPort(int port) { this.port = port; }
public String getScheme() { return scheme; }
public void setScheme(String scheme) { this.scheme = scheme; }
public String getUsername() { return username; }
public void setUsername(String username) { this.username = username; }
public String getPassword() { return password; }
public void setPassword(String password) { this.password = password; }
public int getConnectionTimeout() { return connectionTimeout; }
public void setConnectionTimeout(int connectionTimeout) { this.connectionTimeout = connectionTimeout; }
public int getSocketTimeout() { return socketTimeout; }
public void setSocketTimeout(int socketTimeout) { this.socketTimeout = socketTimeout; }
public int getMaxConnections() { return maxConnections; }
public void setMaxConnections(int maxConnections) { this.maxConnections = maxConnections; }
public int getMaxConnectionsPerRoute() { return maxConnectionsPerRoute; }
public void setMaxConnectionsPerRoute(int maxConnectionsPerRoute) { this.maxConnectionsPerRoute = maxConnectionsPerRoute; }
}
2. Logback Configuration for Logstash
<?xml version="1.0" encoding="UTF-8"?>
<!-- logback-spring.xml -->
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>

    <springProperty scope="context" name="appName" source="spring.application.name" defaultValue="elasticstack-app"/>
    <springProperty scope="context" name="logstashHost" source="logging.logstash.host" defaultValue="localhost"/>
    <springProperty scope="context" name="logstashPort" source="logging.logstash.port" defaultValue="5000"/>
    <springProperty scope="context" name="activeProfile" source="spring.profiles.active" defaultValue="default"/>

    <!-- Console Appender -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Logstash TCP Appender -->
    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>${logstashHost}:${logstashPort}</destination>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <version/>
                <logLevel/>
                <loggerName/>
                <message/>
                <mdc/>
                <stackTrace/>
                <threadName/>
                <pattern>
                    <pattern>
                        {
                        "service": "${appName}",
                        "environment": "${activeProfile}"
                        }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
        <keepAliveDuration>5 minutes</keepAliveDuration>
    </appender>

    <!-- File Appender -->
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>logs/${appName}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>logs/${appName}.%d{yyyy-MM-dd}.log</fileNamePattern>
            <maxHistory>30</maxHistory>
            <totalSizeCap>3GB</totalSizeCap>
        </rollingPolicy>
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Root Logger -->
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="LOGSTASH"/>
    </root>

    <!-- Application-specific Logger -->
    <logger name="com.company.elasticstack" level="DEBUG" additivity="false">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
        <appender-ref ref="LOGSTASH"/>
    </logger>

    <!-- Quiet down client libraries -->
    <logger name="org.elasticsearch" level="WARN"/>
    <logger name="org.apache.http" level="WARN"/>
</configuration>
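The composite encoder ships each log event to Logstash's TCP input as a single JSON object. The sketch below hand-builds an event of roughly that shape purely for illustration (the real JSON comes from the encoder; field names such as `logger_name` follow logstash-logback-encoder defaults, and the class itself is hypothetical):

```java
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.LinkedHashMap;
import java.util.Map;

public class LogstashEventSketch {

    /** Hand-rolls the rough JSON shape the composite encoder emits; illustrative only. */
    public static String toJson(String level, String logger, String message,
                                String service, Map<String, String> mdc) {
        StringBuilder sb = new StringBuilder("{");
        sb.append("\"@timestamp\":\"")
          .append(DateTimeFormatter.ISO_OFFSET_DATE_TIME.format(OffsetDateTime.now(ZoneOffset.UTC)))
          .append("\",");
        sb.append("\"level\":\"").append(level).append("\",");
        sb.append("\"logger_name\":\"").append(logger).append("\",");
        sb.append("\"message\":\"").append(message).append("\",");
        sb.append("\"service\":\"").append(service).append("\"");
        // MDC entries are flattened to top-level fields, as the <mdc/> provider does.
        mdc.forEach((k, v) -> sb.append(",\"").append(k).append("\":\"").append(v).append("\""));
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        Map<String, String> mdc = new LinkedHashMap<>();
        mdc.put("requestId", "req-123");
        System.out.println(toJson("INFO", "com.company.elasticstack.Demo",
                "order created", "elasticstack-app", mdc));
    }
}
```

Because the pipeline's `tcp` input declares `codec => json`, each newline-terminated object of this shape becomes one structured event in Elasticsearch.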
Elasticsearch Service Implementation
1. Elasticsearch Service
// ElasticsearchService.java
package com.company.elasticstack.service;
import com.company.elasticstack.model.*;
import org.elasticsearch.action.DocWriteResponse;
import org.elasticsearch.client.indices.CreateIndexRequest;
import org.elasticsearch.client.indices.CreateIndexResponse;
import org.elasticsearch.action.admin.indices.delete.DeleteIndexRequest;
import org.elasticsearch.action.admin.indices.get.GetIndexRequest;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.delete.DeleteRequest;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.client.indices.GetMappingsRequest;
import org.elasticsearch.client.indices.GetMappingsResponse;
import org.elasticsearch.cluster.health.ClusterHealthStatus;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.xcontent.XContentType;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.bucket.terms.ParsedTerms;
import org.elasticsearch.search.aggregations.bucket.terms.Terms;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.SortOrder;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.io.IOException;
import java.util.*;
import java.util.stream.Collectors;
@Service
public class ElasticsearchService {
private static final Logger log = LoggerFactory.getLogger(ElasticsearchService.class);
private final RestHighLevelClient elasticsearchClient;
@Autowired
public ElasticsearchService(RestHighLevelClient elasticsearchClient) {
this.elasticsearchClient = elasticsearchClient;
}
/**
* Check Elasticsearch cluster health
*/
public ClusterHealth checkClusterHealth() {
try {
org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse healthResponse =
elasticsearchClient.cluster().health(
new org.elasticsearch.action.admin.cluster.health.ClusterHealthRequest(),
RequestOptions.DEFAULT
);
return ClusterHealth.builder()
.clusterName(healthResponse.getClusterName())
.status(healthResponse.getStatus().name())
.numberOfNodes(healthResponse.getNumberOfNodes())
.numberOfDataNodes(healthResponse.getNumberOfDataNodes())
.activeShards(healthResponse.getActiveShards())
.activePrimaryShards(healthResponse.getActivePrimaryShards())
.unassignedShards(healthResponse.getUnassignedShards())
.timedOut(healthResponse.isTimedOut())
.build();
} catch (IOException e) {
log.error("Failed to check cluster health", e);
return ClusterHealth.error(e.getMessage());
}
}
/**
* Create index with mapping
*/
public boolean createIndex(String indexName, String mapping) {
try {
CreateIndexRequest request = new CreateIndexRequest(indexName);
// Configure index settings
request.settings(Settings.builder()
.put("index.number_of_shards", 3)
.put("index.number_of_replicas", 2)
);
// Add mapping if provided
if (mapping != null && !mapping.trim().isEmpty()) {
request.mapping(mapping, XContentType.JSON);
}
CreateIndexResponse response = elasticsearchClient.indices()
.create(request, RequestOptions.DEFAULT);
log.info("Index {} created: {}", indexName, response.isAcknowledged());
return response.isAcknowledged();
} catch (IOException e) {
log.error("Failed to create index: {}", indexName, e);
return false;
}
}
/**
* Check if index exists
*/
public boolean indexExists(String indexName) {
try {
GetIndexRequest request = new GetIndexRequest().indices(indexName);
return elasticsearchClient.indices().exists(request, RequestOptions.DEFAULT);
} catch (IOException e) {
log.error("Failed to check index existence: {}", indexName, e);
return false;
}
}
/**
* Delete index
*/
public boolean deleteIndex(String indexName) {
try {
DeleteIndexRequest request = new DeleteIndexRequest(indexName);
var response = elasticsearchClient.indices().delete(request, RequestOptions.DEFAULT);
return response.isAcknowledged();
} catch (IOException e) {
log.error("Failed to delete index: {}", indexName, e);
return false;
}
}
/**
* Index a document
*/
public IndexResponse indexDocument(String indexName, String id, String document) {
try {
IndexRequest request = new IndexRequest(indexName);
if (id != null) {
request.id(id);
}
request.source(document, XContentType.JSON);
IndexResponse response = elasticsearchClient.index(request, RequestOptions.DEFAULT);
log.debug("Indexed document {} in index {}", response.getId(), indexName);
return response;
} catch (IOException e) {
log.error("Failed to index document in index: {}", indexName, e);
throw new ElasticsearchOperationException("Failed to index document", e);
}
}
/**
* Get document by ID
*/
public Optional<String> getDocument(String indexName, String id) {
try {
GetRequest request = new GetRequest(indexName, id);
GetResponse response = elasticsearchClient.get(request, RequestOptions.DEFAULT);
if (response.isExists()) {
return Optional.of(response.getSourceAsString());
}
return Optional.empty();
} catch (IOException e) {
log.error("Failed to get document {} from index {}", id, indexName, e);
throw new ElasticsearchOperationException("Failed to get document", e);
}
}
/**
* Update document
*/
public UpdateResponse updateDocument(String indexName, String id, String updateDoc) {
try {
UpdateRequest request = new UpdateRequest(indexName, id);
request.doc(updateDoc, XContentType.JSON);
return elasticsearchClient.update(request, RequestOptions.DEFAULT);
} catch (IOException e) {
log.error("Failed to update document {} in index {}", id, indexName, e);
throw new ElasticsearchOperationException("Failed to update document", e);
}
}
/**
* Delete document
*/
public DeleteResponse deleteDocument(String indexName, String id) {
try {
DeleteRequest request = new DeleteRequest(indexName, id);
return elasticsearchClient.delete(request, RequestOptions.DEFAULT);
} catch (IOException e) {
log.error("Failed to delete document {} from index {}", id, indexName, e);
throw new ElasticsearchOperationException("Failed to delete document", e);
}
}
/**
* Bulk index documents
*/
public BulkResponse bulkIndex(List<BulkIndexRequest> requests) {
try {
BulkRequest bulkRequest = new BulkRequest();
for (BulkIndexRequest req : requests) {
IndexRequest indexRequest = new IndexRequest(req.getIndexName());
if (req.getId() != null) {
indexRequest.id(req.getId());
}
indexRequest.source(req.getDocument(), XContentType.JSON);
bulkRequest.add(indexRequest);
}
BulkResponse response = elasticsearchClient.bulk(bulkRequest, RequestOptions.DEFAULT);
if (response.hasFailures()) {
log.warn("Bulk request completed with item failures: {}", response.buildFailureMessage());
}
return response;
} catch (IOException e) {
log.error("Failed to bulk index documents", e);
throw new ElasticsearchOperationException("Failed to bulk index documents", e);
}
}
/**
* Search documents
*/
public SearchResult search(String indexName, String query, int from, int size) {
try {
SearchRequest searchRequest = new SearchRequest(indexName);
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
if (query != null && !query.trim().isEmpty()) {
sourceBuilder.query(QueryBuilders.queryStringQuery(query));
}
sourceBuilder.from(from);
sourceBuilder.size(size);
sourceBuilder.sort("_score", SortOrder.DESC);
searchRequest.source(sourceBuilder);
SearchResponse response = elasticsearchClient.search(searchRequest, RequestOptions.DEFAULT);
List<Map<String, Object>> hits = Arrays.stream(response.getHits().getHits())
.map(SearchHit::getSourceAsMap)
.collect(Collectors.toList());
return SearchResult.builder()
.totalHits(response.getHits().getTotalHits().value)
.hits(hits)
.tookInMillis(response.getTook().millis())
.timedOut(response.isTimedOut())
.build();
} catch (IOException e) {
log.error("Failed to search index: {}", indexName, e);
throw new ElasticsearchOperationException("Failed to search documents", e);
}
}
/**
* Get field aggregation
*/
public Map<String, Long> getFieldAggregation(String indexName, String fieldName) {
try {
SearchRequest searchRequest = new SearchRequest(indexName);
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
sourceBuilder.aggregation(
AggregationBuilders.terms(fieldName + "_agg")
.field(fieldName)
.size(100)
);
sourceBuilder.size(0); // We only care about aggregations
searchRequest.source(sourceBuilder);
SearchResponse response = elasticsearchClient.search(searchRequest, RequestOptions.DEFAULT);
ParsedTerms aggregation = response.getAggregations().get(fieldName + "_agg");
return aggregation.getBuckets().stream()
.collect(Collectors.toMap(
Terms.Bucket::getKeyAsString,
Terms.Bucket::getDocCount
));
} catch (IOException e) {
log.error("Failed to get aggregation for field: {}", fieldName, e);
throw new ElasticsearchOperationException("Failed to get field aggregation", e);
}
}
/**
* Get index mappings
*/
public Map<String, Object> getIndexMappings(String indexName) {
try {
GetMappingsRequest request = new GetMappingsRequest().indices(indexName);
GetMappingsResponse response = elasticsearchClient.indices()
.getMapping(request, RequestOptions.DEFAULT);
return response.mappings().get(indexName).getSourceAsMap();
} catch (IOException e) {
log.error("Failed to get mappings for index: {}", indexName, e);
throw new ElasticsearchOperationException("Failed to get index mappings", e);
}
}
}
// Custom Exception
class ElasticsearchOperationException extends RuntimeException {
public ElasticsearchOperationException(String message) {
super(message);
}
public ElasticsearchOperationException(String message, Throwable cause) {
super(message, cause);
}
}
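Since every failed operation surfaces as an `ElasticsearchOperationException` (an unchecked exception), callers can layer simple retry-with-backoff on top for transient failures. A minimal, generic sketch (this helper is not part of the guide's service; names and defaults are illustrative):

```java
import java.util.concurrent.Callable;

public class RetrySupport {

    /**
     * Runs op, retrying up to maxAttempts times with exponential backoff.
     * Illustrative helper only; tune attempts and backoff for your workload.
     */
    public static <T> T withRetry(Callable<T> op, int maxAttempts, long initialBackoffMs)
            throws Exception {
        Exception last = null;
        long backoff = initialBackoffMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoff); // wait before the next attempt
                    backoff *= 2;          // double the delay each round
                }
            }
        }
        throw last; // all attempts exhausted
    }
}
```

For example, `withRetry(() -> elasticsearchService.search("logs-*", "error", 0, 20), 3, 200)` retries a transient search failure twice before propagating the exception.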
2. Data Models
// ClusterHealth.java
package com.company.elasticstack.model;
import lombok.Builder;
import lombok.Data;
@Data
@Builder
public class ClusterHealth {
private String clusterName;
private String status;
private int numberOfNodes;
private int numberOfDataNodes;
private int activeShards;
private int activePrimaryShards;
private int unassignedShards;
private boolean timedOut;
private String error;
public static ClusterHealth error(String errorMessage) {
return ClusterHealth.builder()
.error(errorMessage)
.build();
}
public boolean isHealthy() {
return "GREEN".equals(status) || "YELLOW".equals(status);
}
}
// SearchResult.java
package com.company.elasticstack.model;
import lombok.Builder;
import lombok.Data;
import java.util.List;
import java.util.Map;
@Data
@Builder
public class SearchResult {
private long totalHits;
private List<Map<String, Object>> hits;
private long tookInMillis;
private boolean timedOut;
public int getHitCount() {
return hits != null ? hits.size() : 0;
}
}
// BulkIndexRequest.java
package com.company.elasticstack.model;
import lombok.Data;
@Data
public class BulkIndexRequest {
private String indexName;
private String id;
private String document;
public BulkIndexRequest(String indexName, String document) {
this.indexName = indexName;
this.document = document;
}
public BulkIndexRequest(String indexName, String id, String document) {
this.indexName = indexName;
this.id = id;
this.document = document;
}
}
// LogEntry.java
package com.company.elasticstack.model;
import com.fasterxml.jackson.annotation.JsonFormat;
import lombok.Data;
import java.time.LocalDateTime;
import java.util.Map;
@Data
public class LogEntry {
private String level;
private String logger;
private String message;
private String thread;
@JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss.SSS")
private LocalDateTime timestamp;
private Map<String, String> mdc;
private String stackTrace;
private String service;
private String environment;
// Kubernetes metadata
private String namespace;
private String podName;
private String containerName;
private String nodeName;
}
// ApplicationLog.java
package com.company.elasticstack.model;
import com.fasterxml.jackson.annotation.JsonFormat;
import lombok.Data;
import java.time.LocalDateTime;
import java.util.Map;
@Data
public class ApplicationLog {
private String id;
private String applicationName;
private String level;
private String message;
@JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss.SSS")
private LocalDateTime timestamp;
private String userId;
private String sessionId;
private String requestId;
private Map<String, Object> metadata;
private String environment;
// Error details
private String exceptionClass;
private String exceptionMessage;
private String stackTrace;
}
Kubernetes Management Service
1. Elastic Stack Kubernetes Manager
// ElasticStackKubernetesManager.java
package com.company.elasticstack.service;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.ApiException;
import io.kubernetes.client.openapi.apis.AppsV1Api;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
@Service
public class ElasticStackKubernetesManager {
private static final Logger log = LoggerFactory.getLogger(ElasticStackKubernetesManager.class);
private static final String NAMESPACE = "elastic-stack";
private final CoreV1Api coreV1Api;
private final AppsV1Api appsV1Api;
@Autowired
public ElasticStackKubernetesManager(ApiClient apiClient) {
this.coreV1Api = new CoreV1Api(apiClient);
this.appsV1Api = new AppsV1Api(apiClient);
}
/**
* Get Elasticsearch cluster status
*/
public ElasticsearchClusterStatus getElasticsearchClusterStatus() {
try {
// Get Elasticsearch pods
V1PodList podList = coreV1Api.listNamespacedPod(
NAMESPACE, null, null, null, null, "app=elasticsearch",
null, null, null, null, null);
List<PodStatus> podStatuses = podList.getItems().stream()
.map(this::mapPodToStatus)
.collect(Collectors.toList());
// Get Elasticsearch service
V1Service service = coreV1Api.readNamespacedService("elasticsearch", NAMESPACE, null);
return ElasticsearchClusterStatus.builder()
.namespace(NAMESPACE)
.pods(podStatuses)
.serviceName(service.getMetadata().getName())
.servicePort(service.getSpec().getPorts().get(0).getPort())
.readyPods(podStatuses.stream().filter(PodStatus::isReady).count())
.totalPods(podStatuses.size())
.build();
} catch (ApiException e) {
log.error("Failed to get Elasticsearch cluster status", e);
return ElasticsearchClusterStatus.error(e.getMessage());
}
}
/**
* Get Kibana status
*/
public DeploymentStatus getKibanaStatus() {
try {
V1Deployment deployment = appsV1Api.readNamespacedDeployment("kibana", NAMESPACE, null);
V1Service service = coreV1Api.readNamespacedService("kibana", NAMESPACE, null);
return DeploymentStatus.builder()
.name("kibana")
.namespace(NAMESPACE)
.readyReplicas(deployment.getStatus().getReadyReplicas())
.availableReplicas(deployment.getStatus().getAvailableReplicas())
.unavailableReplicas(deployment.getStatus().getUnavailableReplicas())
.replicas(deployment.getStatus().getReplicas())
.serviceName(service.getMetadata().getName())
.serviceType(service.getSpec().getType())
.build();
} catch (ApiException e) {
log.error("Failed to get Kibana status", e);
return DeploymentStatus.error("kibana", e.getMessage());
}
}
/**
* Scale Elasticsearch cluster
*/
public boolean scaleElasticsearch(int replicas) {
try {
// Get current StatefulSet
V1StatefulSet statefulSet = appsV1Api.readNamespacedStatefulSet("elasticsearch", NAMESPACE, null);
// Update replicas
statefulSet.getSpec().setReplicas(replicas);
appsV1Api.replaceNamespacedStatefulSet(
"elasticsearch", NAMESPACE, statefulSet, null, null, null, null);
log.info("Scaled Elasticsearch to {} replicas", replicas);
return true;
} catch (ApiException e) {
log.error("Failed to scale Elasticsearch", e);
return false;
}
}
    /**
     * Get a pod-count summary for the Elastic Stack components.
     */
    public ResourceUsage getResourceUsage() {
        try {
            // Per-pod CPU/memory figures would require the metrics API;
            // this simplified version only counts pods per component.
            V1PodList elasticsearchPods = coreV1Api.listNamespacedPod(
                    NAMESPACE, null, null, null, null, "app=elasticsearch", null, null, null, null, null);
            V1PodList kibanaPods = coreV1Api.listNamespacedPod(
                    NAMESPACE, null, null, null, null, "app=kibana", null, null, null, null, null);
            V1PodList logstashPods = coreV1Api.listNamespacedPod(
                    NAMESPACE, null, null, null, null, "app=logstash", null, null, null, null, null);
            return ResourceUsage.builder()
                    .elasticsearchPods(elasticsearchPods.getItems().size())
                    .kibanaPods(kibanaPods.getItems().size())
                    .logstashPods(logstashPods.getItems().size())
                    .totalPods(elasticsearchPods.getItems().size()
                            + kibanaPods.getItems().size()
                            + logstashPods.getItems().size())
                    .build();
        } catch (ApiException e) {
            log.error("Failed to get resource usage", e);
            return ResourceUsage.error(e.getMessage());
        }
    }

    /**
     * Read logs from an Elasticsearch pod.
     */
    public String getElasticsearchPodLogs(String podName, Integer tailLines) {
        try {
            // Parameter order follows the generated client:
            // name, namespace, container, follow, insecureSkipTLSVerifyBackend,
            // limitBytes, pretty, previous, sinceSeconds, tailLines, timestamps
            return coreV1Api.readNamespacedPodLog(
                    podName, NAMESPACE, null, false, null,
                    null, null, null, null, tailLines, null);
        } catch (ApiException e) {
            log.error("Failed to get logs for pod: {}", podName, e);
            return "Error retrieving logs: " + e.getMessage();
        }
    }

    private PodStatus mapPodToStatus(V1Pod pod) {
        return PodStatus.builder()
                .name(pod.getMetadata().getName())
                .namespace(pod.getMetadata().getNamespace())
                .status(pod.getStatus().getPhase())
                .ready(isPodReady(pod))
                .nodeName(pod.getSpec().getNodeName())
                // getStartTime() returns an OffsetDateTime; PodStatus stores it as a String
                .startTime(pod.getStatus().getStartTime() != null
                        ? pod.getStatus().getStartTime().toString() : null)
                .build();
    }

    private boolean isPodReady(V1Pod pod) {
        if (pod.getStatus().getConditions() == null) {
            return false;
        }
        return pod.getStatus().getConditions().stream()
                .filter(condition -> "Ready".equals(condition.getType()))
                .map(V1PodCondition::getStatus)
                .anyMatch("True"::equals);
    }
}
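The Ready-condition filter in isPodReady can be exercised without a cluster. A stand-alone rendering of the same stream logic, where the Condition record is a stand-in for the client's V1PodCondition:

```java
import java.util.List;

public class ReadinessCheck {

    /** Minimal stand-in for V1PodCondition: a (type, status) pair. */
    public record Condition(String type, String status) {}

    /** Same rule as isPodReady: ready iff a "Ready" condition reports "True". */
    public static boolean isReady(List<Condition> conditions) {
        if (conditions == null) {
            return false;
        }
        return conditions.stream()
                .filter(c -> "Ready".equals(c.type()))
                .map(Condition::status)
                .anyMatch("True"::equals);
    }

    public static void main(String[] args) {
        System.out.println(isReady(List.of(
                new Condition("PodScheduled", "True"),
                new Condition("Ready", "True"))));   // true
        System.out.println(isReady(List.of(
                new Condition("Ready", "False"))));  // false
        System.out.println(isReady(null));           // false
    }
}
```

Note that a pod in phase Running is not necessarily ready; only the Ready condition reflects passing readiness probes, which is why the status models track both.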
// Status Models
class ElasticsearchClusterStatus {
    private String namespace;
    private List<PodStatus> pods;
    private String serviceName;
    private Integer servicePort;
    private Long readyPods;
    private Long totalPods;
    private String error;

    // Builder, getters, setters
    public static ElasticsearchClusterStatusBuilder builder() {
        return new ElasticsearchClusterStatusBuilder();
    }

    public static ElasticsearchClusterStatus error(String error) {
        ElasticsearchClusterStatus status = new ElasticsearchClusterStatus();
        status.error = error;
        return status;
    }
    // ... builder class and getters/setters
}

class PodStatus {
    private String name;
    private String namespace;
    private String status;
    private boolean ready;
    private String nodeName;
    private String startTime;
    // Builder, getters, setters
}

class DeploymentStatus {
    private String name;
    private String namespace;
    private Integer readyReplicas;
    private Integer availableReplicas;
    private Integer unavailableReplicas;
    private Integer replicas;
    private String serviceName;
    private String serviceType;
    private String error;

    // Builder, getters, setters
    public static DeploymentStatus error(String name, String error) {
        DeploymentStatus status = new DeploymentStatus();
        status.name = name;
        status.error = error;
        return status;
    }
}

class ResourceUsage {
    private int elasticsearchPods;
    private int kibanaPods;
    private int logstashPods;
    private int totalPods;
    private String error;

    // Builder, getters, setters
    public static ResourceUsage error(String error) {
        ResourceUsage usage = new ResourceUsage();
        usage.error = error;
        return usage;
    }
}
REST API Controllers
1. Elastic Stack Management API
// ElasticStackController.java
package com.company.elasticstack.api;

import com.company.elasticstack.model.*;
import com.company.elasticstack.service.ElasticsearchService;
import com.company.elasticstack.service.ElasticStackKubernetesManager;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.Map;

@RestController
@RequestMapping("/api/v1/elasticstack")
public class ElasticStackController {

    private final ElasticsearchService elasticsearchService;
    private final ElasticStackKubernetesManager kubernetesManager;

    public ElasticStackController(ElasticsearchService elasticsearchService,
                                  ElasticStackKubernetesManager kubernetesManager) {
        this.elasticsearchService = elasticsearchService;
        this.kubernetesManager = kubernetesManager;
    }

    @GetMapping("/health")
    public ResponseEntity<ApiResponse<ClusterHealth>> getClusterHealth() {
        ClusterHealth health = elasticsearchService.checkClusterHealth();
        return ResponseEntity.ok(ApiResponse.success(health));
    }

    @GetMapping("/kubernetes/status")
    public ResponseEntity<ApiResponse<ElasticsearchClusterStatus>> getKubernetesStatus() {
        ElasticsearchClusterStatus status = kubernetesManager.getElasticsearchClusterStatus();
        return ResponseEntity.ok(ApiResponse.success(status));
    }

    @PostMapping("/indices/{indexName}")
    public ResponseEntity<ApiResponse<Boolean>> createIndex(
            @PathVariable String indexName,
            @RequestBody(required = false) String mapping) {
        boolean created = elasticsearchService.createIndex(indexName, mapping);
        return ResponseEntity.ok(ApiResponse.success(created,
                created ? "Index created successfully" : "Failed to create index"));
    }

    @GetMapping("/indices/{indexName}/exists")
    public ResponseEntity<ApiResponse<Boolean>> indexExists(@PathVariable String indexName) {
        boolean exists = elasticsearchService.indexExists(indexName);
        return ResponseEntity.ok(ApiResponse.success(exists));
    }

    @DeleteMapping("/indices/{indexName}")
    public ResponseEntity<ApiResponse<Boolean>> deleteIndex(@PathVariable String indexName) {
        boolean deleted = elasticsearchService.deleteIndex(indexName);
        return ResponseEntity.ok(ApiResponse.success(deleted,
                deleted ? "Index deleted successfully" : "Failed to delete index"));
    }

    @PostMapping("/indices/{indexName}/documents")
    public ResponseEntity<ApiResponse<String>> indexDocument(
            @PathVariable String indexName,
            @RequestBody IndexDocumentRequest request) {
        var response = elasticsearchService.indexDocument(
                indexName, request.getId(), request.getDocument());
        return ResponseEntity.ok(ApiResponse.success(response.getId(), "Document indexed successfully"));
    }

    @GetMapping("/indices/{indexName}/documents/{id}")
    public ResponseEntity<ApiResponse<String>> getDocument(
            @PathVariable String indexName,
            @PathVariable String id) {
        return elasticsearchService.getDocument(indexName, id)
                .map(document -> ResponseEntity.ok(ApiResponse.success(document)))
                .orElse(ResponseEntity.notFound().build());
    }

    @PostMapping("/indices/{indexName}/search")
    public ResponseEntity<ApiResponse<SearchResult>> search(
            @PathVariable String indexName,
            @RequestBody SearchRequest request) {
        SearchResult result = elasticsearchService.search(
                indexName, request.getQuery(), request.getFrom(), request.getSize());
        return ResponseEntity.ok(ApiResponse.success(result));
    }

    @GetMapping("/indices/{indexName}/aggregations/{fieldName}")
    public ResponseEntity<ApiResponse<Map<String, Long>>> getFieldAggregation(
            @PathVariable String indexName,
            @PathVariable String fieldName) {
        Map<String, Long> aggregation = elasticsearchService.getFieldAggregation(indexName, fieldName);
        return ResponseEntity.ok(ApiResponse.success(aggregation));
    }

    @PostMapping("/scale")
    public ResponseEntity<ApiResponse<Boolean>> scaleElasticsearch(@RequestParam int replicas) {
        boolean scaled = kubernetesManager.scaleElasticsearch(replicas);
        return ResponseEntity.ok(ApiResponse.success(scaled,
                scaled ? "Scaling operation initiated" : "Failed to scale cluster"));
    }
}
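From a Java client, the search endpoint above is invoked with a POST whose JSON body mirrors SearchRequest. A minimal sketch using the JDK's java.net.http API; the base URL, index name, and query here are placeholder values, and the request is only built, since sending it requires the service to be running:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class SearchCall {

    /** Build a POST against the controller's /indices/{indexName}/search endpoint. */
    public static HttpRequest buildSearchRequest(String baseUrl, String index, String query) {
        // Body shape matches the SearchRequest DTO (query, from, size)
        String body = "{\"query\":\"" + query + "\",\"from\":0,\"size\":10}";
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/v1/elasticstack/indices/" + index + "/search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest request = buildSearchRequest("http://localhost:8080", "app-logs", "category:fruit");
        System.out.println(request.method() + " " + request.uri());
        // With the service up, dispatch it via:
        // java.net.http.HttpClient.newHttpClient()
        //         .send(request, java.net.http.HttpResponse.BodyHandlers.ofString());
    }
}
```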
// Request/Response DTOs
class IndexDocumentRequest {
    private String id;
    private String document;

    // Getters and setters
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getDocument() { return document; }
    public void setDocument(String document) { this.document = document; }
}

class SearchRequest {
    private String query;
    private int from = 0;
    private int size = 10;

    // Getters and setters
    public String getQuery() { return query; }
    public void setQuery(String query) { this.query = query; }
    public int getFrom() { return from; }
    public void setFrom(int from) { this.from = from; }
    public int getSize() { return size; }
    public void setSize(int size) { this.size = size; }
}

class ApiResponse<T> {
    private boolean success;
    private String message;
    private T data;

    public ApiResponse(boolean success, String message, T data) {
        this.success = success;
        this.message = message;
        this.data = data;
    }

    public static <T> ApiResponse<T> success(T data) {
        return new ApiResponse<>(true, "Success", data);
    }

    public static <T> ApiResponse<T> success(T data, String message) {
        return new ApiResponse<>(true, message, data);
    }

    public static <T> ApiResponse<T> error(String message) {
        return new ApiResponse<>(false, message, null);
    }

    // Getters and setters
    public boolean isSuccess() { return success; }
    public void setSuccess(boolean success) { this.success = success; }
    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }
    public T getData() { return data; }
    public void setData(T data) { this.data = data; }
}
Testing
1. Elasticsearch Integration Tests
// ElasticsearchServiceTest.java
package com.company.elasticstack.service;

import com.company.elasticstack.model.ClusterHealth;
import org.elasticsearch.action.admin.indices.refresh.RefreshRequest;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.elasticsearch.ElasticsearchContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import static org.junit.jupiter.api.Assertions.*;

@SpringBootTest
@Testcontainers
class ElasticsearchServiceTest {

    @Container
    static final ElasticsearchContainer elasticsearchContainer =
            new ElasticsearchContainer("docker.elastic.co/elasticsearch/elasticsearch:7.17.0")
                    .withExposedPorts(9200)
                    .withEnv("discovery.type", "single-node")
                    .withEnv("xpack.security.enabled", "false");

    @DynamicPropertySource
    static void elasticsearchProperties(DynamicPropertyRegistry registry) {
        registry.add("elasticsearch.host", elasticsearchContainer::getHost);
        registry.add("elasticsearch.port", elasticsearchContainer::getFirstMappedPort);
    }

    @Autowired
    private ElasticsearchService elasticsearchService;

    @Autowired
    private RestHighLevelClient elasticsearchClient;

    private static final String TEST_INDEX = "test-index";

    @BeforeEach
    void setUp() {
        // Create the test index if it does not exist yet
        if (!elasticsearchService.indexExists(TEST_INDEX)) {
            elasticsearchService.createIndex(TEST_INDEX, null);
        }
    }

    @AfterEach
    void tearDown() {
        // Clean up the test index
        if (elasticsearchService.indexExists(TEST_INDEX)) {
            elasticsearchService.deleteIndex(TEST_INDEX);
        }
    }

    @Test
    void testClusterHealth() {
        ClusterHealth health = elasticsearchService.checkClusterHealth();
        assertNotNull(health);
        assertTrue(health.isHealthy());
        // The official Docker image defaults to cluster.name=docker-cluster
        assertEquals("docker-cluster", health.getClusterName());
    }

    @Test
    void testIndexDocument() {
        String document = "{\"name\": \"test\", \"value\": 123}";
        IndexResponse response = elasticsearchService.indexDocument(TEST_INDEX, "1", document);
        assertNotNull(response);
        assertEquals("1", response.getId());
        assertEquals(TEST_INDEX, response.getIndex());
    }

    @Test
    void testGetDocument() {
        // First index a document
        String document = "{\"name\": \"test\", \"value\": 123}";
        elasticsearchService.indexDocument(TEST_INDEX, "1", document);
        // Then retrieve it (GETs by id are realtime, so no refresh is needed)
        var retrieved = elasticsearchService.getDocument(TEST_INDEX, "1");
        assertTrue(retrieved.isPresent());
        assertEquals(document, retrieved.get());
    }

    @Test
    void testSearch() throws Exception {
        // Index multiple documents
        elasticsearchService.indexDocument(TEST_INDEX, "1", "{\"name\": \"apple\", \"category\": \"fruit\"}");
        elasticsearchService.indexDocument(TEST_INDEX, "2", "{\"name\": \"banana\", \"category\": \"fruit\"}");
        elasticsearchService.indexDocument(TEST_INDEX, "3", "{\"name\": \"carrot\", \"category\": \"vegetable\"}");
        // Search is near-real-time: force a refresh so the new documents
        // are visible to the query below instead of flaking on timing
        elasticsearchClient.indices().refresh(new RefreshRequest(TEST_INDEX), RequestOptions.DEFAULT);
        // Search for fruits
        var result = elasticsearchService.search(TEST_INDEX, "category:fruit", 0, 10);
        assertNotNull(result);
        assertEquals(2, result.getTotalHits());
        assertEquals(2, result.getHitCount());
    }
}
Application Configuration
1. Application Properties
# application.yml
spring:
  application:
    name: elasticstack-manager
  profiles:
    active: dev

server:
  port: 8080

# Elasticsearch Configuration
elasticsearch:
  host: ${ELASTICSEARCH_HOST:localhost}
  port: ${ELASTICSEARCH_PORT:9200}
  scheme: http
  connection-timeout: 5000
  socket-timeout: 60000
  max-connections: 100
  max-connections-per-route: 100

# Logging Configuration
logging:
  level:
    com.company.elasticstack: DEBUG
    org.elasticsearch: WARN

logstash:
  host: ${LOGSTASH_HOST:localhost}
  port: ${LOGSTASH_PORT:5000}

# Management Endpoints
management:
  endpoints:
    web:
      exposure:
        include: health,info,metrics,prometheus
  endpoint:
    health:
      show-details: always
  # Spring Boot 3 moved the Micrometer Elastic registry settings
  # from management.metrics.export.elastic to management.elastic.metrics.export
  elastic:
    metrics:
      export:
        enabled: true
        host: http://${ELASTICSEARCH_HOST:localhost}:${ELASTICSEARCH_PORT:9200}
        index: application-metrics
        step: 1m

# Kubernetes Configuration (if running in-cluster)
kubernetes:
  in-cluster: ${KUBERNETES_IN_CLUSTER:false}
  namespace: ${KUBERNETES_NAMESPACE:elastic-stack}
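The ${VAR:default} tokens above use Spring's property-placeholder syntax: the environment variable wins when it is set, and the text after the colon is the fallback. A tiny stand-alone resolver, purely to illustrate the fallback rule (a hypothetical helper; Spring performs this resolution for you):

```java
import java.util.Map;

public class Placeholders {

    /** Resolve a single ${NAME:default} token against an environment map. */
    public static String resolve(String token, Map<String, String> env) {
        if (!token.startsWith("${") || !token.endsWith("}")) {
            return token; // not a placeholder, return verbatim
        }
        String inner = token.substring(2, token.length() - 1);
        int colon = inner.indexOf(':');
        String name = colon >= 0 ? inner.substring(0, colon) : inner;
        String fallback = colon >= 0 ? inner.substring(colon + 1) : null;
        String value = env.get(name);
        return value != null ? value : fallback;
    }

    public static void main(String[] args) {
        // Unset variable -> fallback; set variable -> its value
        System.out.println(resolve("${ELASTICSEARCH_HOST:localhost}", Map.of()));  // localhost
        System.out.println(resolve("${ELASTICSEARCH_HOST:localhost}",
                Map.of("ELASTICSEARCH_HOST", "es.example.internal")));             // es.example.internal
    }
}
```

This is why the same application.yml works unchanged on a laptop (defaults) and inside the cluster, where the Deployment injects ELASTICSEARCH_HOST and friends.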
This comprehensive Elastic Stack implementation provides:
- Complete Kubernetes deployment for Elasticsearch, Kibana, Logstash, and Filebeat
- Java client integration for Elasticsearch operations
- Centralized logging with Logstash and structured JSON logging
- Kubernetes management for monitoring and scaling the Elastic Stack
- Comprehensive REST API for cluster management
- Production-ready configuration with health checks and monitoring
- Testing framework with Testcontainers for integration testing
The solution enables full observability for Java applications running on Kubernetes with centralized logging, monitoring, and search capabilities through the Elastic Stack.