
Spring Boot JVM Metrics

This tutorial shows you how to visualise JVM metrics from Spring Boot applications in SigNoz.

In this tutorial, we use Micrometer and Spring Boot Actuator to expose JVM metrics in Prometheus format. We then update the OpenTelemetry Collector that comes pre-installed with SigNoz so that it can scrape these metrics.

You can then plot the JVM metrics relevant for your team by creating custom dashboards in SigNoz.

You can use a sample Spring Boot application at this GitHub repo.

Steps to monitor JVM metrics

Changes required in your Spring Boot application

  1. Add the following dependencies to your pom.xml file

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>io.micrometer</groupId>
        <artifactId>micrometer-registry-prometheus</artifactId>
        <scope>runtime</scope>
    </dependency>
  2. Add the following configuration to the application.properties file located at src/main/resources/application.properties

    management.endpoints.web.exposure.include=prometheus,health,info,metrics

    management.health.probes.enabled=true
    management.endpoint.health.show-details=always
    management.endpoint.prometheus.enabled=true

    Sample Spring Boot app with needed changes
  3. Build and run the Spring Boot application again (see the example commands below this list)

You can read more about exposing Prometheus metrics in the Spring Boot docs.
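Once the application is rebuilt and running, you can verify that the actuator endpoint is serving Prometheus metrics before wiring up the collector. The commands below are a minimal sketch assuming a Maven build, a hypothetical jar name (my-app.jar), and the application listening on port 8090 (the port used in the scrape config in the next section); adjust them to your project.

    # Build the application with Maven
    mvn clean package

    # Run the packaged jar (hypothetical name, replace with your artifact)
    java -jar target/my-app.jar &

    # Check that JVM metrics are exposed in Prometheus format
    curl http://localhost:8090/actuator/prometheus | grep jvm_memory_used_bytes

If the curl command prints jvm_memory_used_bytes samples, the application side is ready.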

Configure SigNoz otel-collector to scrape Prometheus metrics

  1. Add the following configuration to the otel-collector-config.yaml file

    SigNoz Otel collector yaml file
    Note: The target should be updated to the IP and port where the Spring Boot app is exposing metrics.

    prometheus:
      config:
        scrape_configs:
          - job_name: "otel-collector"
            scrape_interval: 60s
            static_configs:
              - targets: ["otel-collector:8889"]
          - job_name: "jvm-metrics"
            scrape_interval: 10s
            metrics_path: "/actuator/prometheus"
            static_configs:
              - targets: ["<IP of the machine>:8090"]

    For example, if SigNoz is running on the same machine as the Spring Boot application, you can replace <IP of the machine> with host.docker.internal, so the target becomes host.docker.internal:8090.

  2. Restart the otel-collector container using the following command

    sudo docker compose -f docker-compose.yaml restart otel-collector
  3. Go to the SigNoz dashboard and plot the metrics you want (if the metrics do not show up, see the log check below this list)

    Creating metrics dashboard in SigNoz
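
If the JVM metrics do not show up in SigNoz, check whether the collector restarted cleanly and is scraping the new target. A minimal sketch, assuming the same docker-compose.yaml and service name used in the restart command above:

    # Inspect recent collector logs for scrape or config errors
    sudo docker compose -f docker-compose.yaml logs --tail=100 otel-collector

Scrape or configuration errors, if any, should appear in these logs.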

Available metrics that you can monitor

Below is the list of available JVM metrics that you can monitor with the help of SigNoz:

http_server_requests_seconds_sum
jvm_memory_committed_bytes
jdbc_connections_min
hikaricp_connections_min
jvm_threads_states_threads
jvm_classes_unloaded_classes_total
jvm_buffer_count_buffers
logback_events_total
jvm_memory_used_bytes
jvm_gc_pause_seconds_sum
jvm_memory_max_bytes
jdbc_connections_active
jvm_classes_loaded_classes
jvm_gc_pause_seconds_count
jdbc_connections_idle
jvm_threads_live_threads
jvm_gc_memory_promoted_bytes_total
jvm_gc_memory_allocated_bytes_total
cache_gets_total
jvm_buffer_memory_used_bytes
jvm_buffer_total_capacity_bytes
jvm_gc_live_data_size_bytes
tomcat_sessions_alive_max_seconds
hikaricp_connections_usage_seconds_count
jvm_threads_daemon_threads
hikaricp_connections_creation_seconds_sum
process_cpu_usage
jvm_gc_pause_seconds_max
process_start_time_seconds
tomcat_sessions_active_max_sessions
hikaricp_connections_acquire_seconds_count
hikaricp_connections_acquire_seconds_sum
system_load_average_1m
hikaricp_connections_usage_seconds_sum
system_cpu_usage
jvm_threads_peak_threads
tomcat_sessions_expired_sessions_total
cache_removals
tomcat_sessions_created_sessions_total
hikaricp_connections_idle
tomcat_sessions_active_current_sessions
process_uptime_seconds
hikaricp_connections_acquire_seconds_max