Kong Gateway provides a native OpenTelemetry plugin to capture proxy latencies, upstream traces, traffic metrics, and access logs. By configuring this plugin, you send your telemetry data directly to SigNoz.
## Prerequisites
- For OpenTelemetry Traces: Kong Gateway version 3.0+ (Enterprise or OSS).
- For OpenTelemetry Logs: Kong Gateway version 3.8+ (Enterprise or OSS).
- For OpenTelemetry Metrics: Kong Gateway Enterprise version 3.13+. (The latest open-source community edition release is 3.9.x, which predates native OpenTelemetry metrics support).
- An instance of SigNoz (Cloud or Self-Hosted).
## Configure OpenTelemetry Export
Configure the `opentelemetry` plugin either via declarative config (decK) or via the Kong Ingress Controller in Kubernetes.

### Declarative configuration (decK)
```yaml
_format_version: "3.0"
_transform: true
plugins:
  - name: opentelemetry
    config:
      traces_endpoint: "https://ingest.<region>.signoz.cloud:443/v1/traces"
      logs_endpoint: "https://ingest.<region>.signoz.cloud:443/v1/logs"
      metrics:
        endpoint: "https://ingest.<region>.signoz.cloud:443/v1/metrics"
        push_interval: 10
        enable_bandwidth_metrics: true
        enable_latency_metrics: true
        enable_request_metrics: true
        enable_upstream_health_metrics: true
      headers:
        signoz-ingestion-key: "<your-ingestion-key>"
      resource_attributes:
        service.name: "<service_name>"
      queue:
        max_batch_size: 200
        max_coalescing_delay: 3
```
Verify these values:
- `<region>`: Your SigNoz Cloud region
- `<your-ingestion-key>`: Your SigNoz ingestion key
- `<service_name>`: The name of your service
Sync your state via Kong's Admin API:
```bash
deck gateway sync kong-state.yaml --kong-addr <YOUR-KONG-ADMIN-URL>
```
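The decK example above enables the plugin globally. To capture telemetry for only one service, you can nest the plugin under that service instead — a minimal sketch, where the `orders` service name and its upstream URL are hypothetical:

```yaml
_format_version: "3.0"
services:
  - name: orders                      # hypothetical service name
    url: http://orders.internal:8080  # hypothetical upstream
    plugins:
      - name: opentelemetry
        config:
          traces_endpoint: "https://ingest.<region>.signoz.cloud:443/v1/traces"
          headers:
            signoz-ingestion-key: "<your-ingestion-key>"
          resource_attributes:
            service.name: "orders"
```

Sync it with the same `deck gateway sync` command; Kong then applies the plugin only to requests routed through that service.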
### Kubernetes (Kong Ingress Controller)

Apply a `KongClusterPlugin` to enable the plugin globally across all services managed by the Ingress Controller.
```yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: opentelemetry-global
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"
plugin: opentelemetry
config:
  traces_endpoint: "https://ingest.<region>.signoz.cloud:443/v1/traces"
  logs_endpoint: "https://ingest.<region>.signoz.cloud:443/v1/logs"
  metrics:
    endpoint: "https://ingest.<region>.signoz.cloud:443/v1/metrics"
    push_interval: 10
    enable_bandwidth_metrics: true
    enable_latency_metrics: true
    enable_request_metrics: true
    enable_upstream_health_metrics: true
  headers:
    signoz-ingestion-key: "<your-ingestion-key>"
  resource_attributes:
    service.name: "<service_name>"
```
Verify these values:
- `<region>`: Your SigNoz Cloud region
- `<your-ingestion-key>`: Your SigNoz ingestion key
- `<service_name>`: The name of your service
Apply the file to your cluster:
```bash
kubectl apply -f kong-plugin-otel.yaml
```
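If you prefer to scope the plugin to a single Service rather than the whole cluster, a namespaced `KongPlugin` attached via the `konghq.com/plugins` annotation works the same way — a sketch, where `my-service` is a hypothetical Service name:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: opentelemetry-scoped
plugin: opentelemetry
config:
  traces_endpoint: "https://ingest.<region>.signoz.cloud:443/v1/traces"
  headers:
    signoz-ingestion-key: "<your-ingestion-key>"
---
apiVersion: v1
kind: Service
metadata:
  name: my-service   # hypothetical service; the annotation below attaches the plugin
  annotations:
    konghq.com/plugins: opentelemetry-scoped
spec:
  selector:
    app: my-service
  ports:
    - port: 80
```

Only traffic that Kong routes to the annotated Service is instrumented.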
To collect only specific signal types, remove the endpoints you don't need:
- Traces only: remove `logs_endpoint` and the `metrics` block
- Logs only: remove `traces_endpoint` and the `metrics` block
- Metrics only: remove `traces_endpoint` and `logs_endpoint`
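For example, a traces-only declarative config reduces to just the `traces_endpoint`:

```yaml
plugins:
  - name: opentelemetry
    config:
      traces_endpoint: "https://ingest.<region>.signoz.cloud:443/v1/traces"
      headers:
        signoz-ingestion-key: "<your-ingestion-key>"
      resource_attributes:
        service.name: "<service_name>"
```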
## Available Telemetry

### Traces
Kong instruments each proxied request as an OTel span with child spans for each phase:
- Proxy latency: time Kong spends routing the request before forwarding it upstream
- Plugin execution: time spent inside each enabled plugin (e.g. auth, rate-limiting)
- DNS resolution: upstream hostname lookup time
- Upstream latency: time waiting for the upstream service to respond
Each span carries standard HTTP attributes: `http.method`, `http.url`, `http.host`, `http.scheme`, `http.flavor`, and `net.peer.ip`.
Kong supports W3C TraceContext, B3/B3-single (Zipkin), Jaeger, OpenTracing, Datadog, AWS X-Ray, and GCP propagation formats. Existing distributed traces from upstream or downstream services pass through without extra configuration.
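If you need to control which formats Kong extracts from incoming requests and injects into upstream requests, the plugin exposes a `propagation` block — a sketch, assuming the schema introduced in Kong 3.5 (verify the field names against your version's plugin reference):

```yaml
config:
  propagation:
    extract: ["w3c", "b3", "jaeger"]  # formats checked on incoming headers, in priority order
    inject: ["preserve"]              # "preserve" re-injects whatever format was extracted
    default_format: "w3c"             # format used when no incoming trace context is found
```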
### Metrics

Requires Kong Gateway Enterprise v3.13+.

| Metric | Type | What it measures |
|---|---|---|
| `kong.latency.total` | Histogram | End-to-end request duration |
| `kong.latency.upstream` | Histogram | Time waiting for the upstream service |
| `kong.latency.internal` | Histogram | Kong's own processing time (plugins, routing) |
| `http.server.request.count` | Sum | Incoming requests, tagged by service, route, consumer, and HTTP status |
| `http.server.request.size` | Histogram | Request body size in bytes |
| `http.server.response.size` | Histogram | Response body size in bytes |
| `kong.nginx.connection.count` | Gauge | Active Nginx connections by state |
| `kong.upstream.target.status` | Gauge | Health status of each upstream target |
Kong pushes all metrics over OTLP on the `push_interval` you configure (default: 60 seconds; set to 10 in the example above for faster feedback). The `enable_bandwidth_metrics`, `enable_latency_metrics`, `enable_request_metrics`, and `enable_upstream_health_metrics` flags in the plugin config control which metric groups Kong collects.
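Each flag toggles independently, so you can trim metric volume. For instance, to keep latency and request metrics but drop the bandwidth and upstream-health series — a sketch using only fields shown earlier:

```yaml
config:
  metrics:
    endpoint: "https://ingest.<region>.signoz.cloud:443/v1/metrics"
    push_interval: 60                      # default 60 s push cadence
    enable_latency_metrics: true
    enable_request_metrics: true
    enable_bandwidth_metrics: false        # drops http.server.request/response.size
    enable_upstream_health_metrics: false  # drops kong.upstream.target.status
```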
### Logs

Kong emits two categories of logs over OTLP:
- Access logs: one log record per proxied request. Each record includes the request method, URI, status code, response size, and latency.
- Runtime logs: internal Kong and plugin execution messages (errors, warnings, debug output).
Every log record includes:
- `Timestamp`, `ObservedTimestamp`, `SeverityText`, and `SeverityNumber`
- `request.id` to correlate logs to a specific request
- `TraceID` and `SpanID` when tracing is enabled, so you can jump from a log line to the corresponding trace in SigNoz
You no longer need a separate log forwarder like Fluent Bit to parse Kong access logs. The plugin handles formatting and delivery.
## Validate
Verify your data appears in SigNoz:
- Open Services to confirm `<service_name>` appears in the list.
- Head to Traces to browse distributed traces for your Kong routes.
- Open Metrics Explorer and query `kong.latency.total` or `kong.nginx.connection.count`.
- Head to Logs and filter by `service.name = <service_name>` to read access logs.


## Troubleshooting
### Metrics do not appear in SigNoz
Verify your Kong Gateway version and edition. The latest Kong open-source (CE) release is 3.9.x, which lacks the `config.metrics` object; OpenTelemetry metrics require Kong Gateway Enterprise 3.13+. Applying the metrics configuration to Kong 3.12 or older (or any OSS release) produces a schema validation error.
### Schema validation error on `metrics`
Ensure you pass `metrics` as an object containing `endpoint`, rather than a flat string. Earlier plugin schema versions rejected the `metrics` map entirely. See the Kong OpenTelemetry plugin schema.
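In other words, the shape that fails versus the shape that works:

```yaml
# Rejected: metrics as a flat string
config:
  metrics: "https://ingest.<region>.signoz.cloud:443/v1/metrics"

# Accepted: metrics as an object with an endpoint field
config:
  metrics:
    endpoint: "https://ingest.<region>.signoz.cloud:443/v1/metrics"
```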
## Setup OpenTelemetry Collector (Optional)
Avoid hardcoding the SigNoz Cloud endpoints in your Kong instance by pointing Kong at a local OpenTelemetry Collector instead.
- Ensure your OpenTelemetry Collector instance exposes the `otlp` HTTP receiver on port `4318`.
- Update the Kong plugin configuration to point to your collector:
  - `traces_endpoint: http://<your-collector-ip>:4318/v1/traces`
  - `logs_endpoint: http://<your-collector-ip>:4318/v1/logs`
  - `endpoint: http://<your-collector-ip>:4318/v1/metrics`
- Remove the `signoz-ingestion-key` header from the Kong configuration, and let your Collector append the authentication header for SigNoz.
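A minimal Collector configuration along those lines might look like this — a sketch, assuming the stock `otlp` receiver and `otlphttp` exporter; trim the pipelines to the signals you actually ship:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318   # Kong sends OTLP/HTTP here
processors:
  batch: {}
exporters:
  otlphttp:
    endpoint: "https://ingest.<region>.signoz.cloud:443"
    headers:
      signoz-ingestion-key: "<your-ingestion-key>"   # auth lives here, not in Kong
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```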
Read the Collector configuration guide for implementation details.
## Next Steps
- Import the Kong Gateway Dashboard template to monitor traffic in real-time.
- Set up Alerts for routing anomalies or upstream timeouts.
## Get Help
If you need help with the steps in this topic, please reach out to us on SigNoz Community Slack.
If you are a SigNoz Cloud user, please use in product chat support located at the bottom right corner of your SigNoz instance or contact us at cloud-support@signoz.io.