StatsD is a popular network daemon that listens for statistics (like counters, timers, and gauges) sent over UDP or TCP and aggregates them before forwarding to a backend. You can send custom metrics from your applications using StatsD clients to SigNoz by configuring the OpenTelemetry Collector's statsd receiver.
This guide explains how to set up the OpenTelemetry Collector to listen for StatsD metrics and send them to SigNoz.
Prerequisites
- An instance of SigNoz (either Cloud or Self-Hosted)
- A running instance of the OpenTelemetry Collector; if you don't have one, follow the installation instructions for your environment
Send StatsD Metrics to SigNoz
Step 1: Configure the StatsD receiver
The OpenTelemetry Collector uses the StatsD receiver to ingest metrics in the StatsD format.
Add the statsd receiver to your otel-collector-config.yaml file. Make sure it listens on the desired address (default for StatsD is UDP port 8125):
```yaml
receivers:
  statsd:
    endpoint: "0.0.0.0:8125"
```
Explanation of the fields:
- `endpoint`: The address (`host:port`) where the receiver will listen for UDP StatsD packets. Use `0.0.0.0` to listen on all interfaces.
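For reference, StatsD metrics travel as plain-text UDP datagrams in the form `<name>:<value>|<type>`. The sketch below is a minimal Python sender using only the standard library (the `myapp.*` metric names are made up for illustration) and shows the three metric types mentioned in the introduction:

```python
import socket

def send_statsd(metric: str, host: str = "127.0.0.1", port: int = 8125) -> None:
    """Send one StatsD metric line as a UDP datagram (fire-and-forget)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(metric.encode("utf-8"), (host, port))

# StatsD wire format: <name>:<value>|<type>
send_statsd("myapp.logins:1|c")        # counter: increment by 1
send_statsd("myapp.queue_depth:42|g")  # gauge: set the current value
send_statsd("myapp.db_query:320|ms")   # timer: duration in milliseconds
```

Because StatsD uses UDP, sends are fire-and-forget: the application never blocks on (or learns about) delivery failures, which is why exposing the port correctly (Step 4) matters.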
Step 2: Configure the exporter to SigNoz Cloud
If you haven't already configured your Collector to send data to SigNoz Cloud, add an otlp exporter:
```yaml
exporters:
  otlp/signoz:
    endpoint: ingest.<region>.signoz.cloud:443
    tls:
      insecure: false
    headers:
      "signoz-ingestion-key": "<your-ingestion-key>"
```
Verify these values:
- `<region>`: Your SigNoz Cloud region
- `<your-ingestion-key>`: Your SigNoz ingestion key
Step 3: Enable the receiver and exporter in your pipeline
Enable both the statsd receiver and the otlp/signoz exporter in your metrics pipeline. Add them to service.pipelines.metrics. Append the receiver and exporter to your configuration like:
```yaml
service:
  pipelines:
    metrics:
      receivers: [otlp, statsd] # Append statsd to your existing receivers
      exporters: [otlp/signoz]  # Ensure your exporter is included here
```
Step 4: Expose the StatsD port
Depending on how you deploy the OpenTelemetry Collector, you need to ensure the StatsD UDP port (8125) is accessible to your applications.
If the Collector runs directly on a host (VM or bare metal), no additional port mapping is required; just ensure your firewall allows incoming UDP traffic on port 8125.
Check if the collector is listening on the port:
```bash
sudo netstat -ulnp | grep 8125
```
If your Collector runs in Kubernetes (e.g., using the signoz/k8s-infra Helm chart), you must expose the UDP port in the Service definition.
Update your values.yaml for the k8s-infra chart:
```yaml
otelAgent:
  ports:
    statsd:
      enabled: true
      containerPort: 8125
      servicePort: 8125
      hostPort: 8125
      protocol: UDP
```
Then upgrade your Helm release:
```bash
helm upgrade my-release signoz/k8s-infra -f values.yaml
```
If you run the Collector via Docker, ensure you publish the UDP port using the -p flag:
```bash
docker run -d --name otel-collector \
  -v $(pwd)/otel-collector-config.yaml:/etc/otel-collector-config.yaml \
  -p 4317:4317 \
  -p 4318:4318 \
  -p 8125:8125/udp \
  signoz/signoz-otel-collector:latest \
  --config /etc/otel-collector-config.yaml
```
If using docker-compose.yaml, add the port mapping:
```yaml
services:
  otel-collector:
    image: signoz/signoz-otel-collector:latest
    ports:
      - "4317:4317"
      - "8125:8125/udp"
```
Step 5: Restart the OpenTelemetry Collector
After modifying the configuration file and exposing the necessary ports, restart the OpenTelemetry Collector for the changes to take effect.
Depending on your environment, you can restart the collector using the following commands:
- Docker:
  ```bash
  docker restart otel-collector
  ```
- VM:
  ```bash
  sudo systemctl restart signoz-otel-collector
  ```
  (or the respective service name)
Validate
To test if the OpenTelemetry Collector successfully receives your StatsD metrics, you can send a dummy metric using nc (netcat):
```bash
echo "custom.test.metric:1|c" | nc -w 1 -u 127.0.0.1 8125
```
This command sends a StatsD counter metric named `custom.test.metric` with a value of 1. (Replace `127.0.0.1` with your Collector instance's IP if testing remotely.)
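If netcat isn't available on the host, the same test packet can be sent with a few lines of Python (standard library only; adjust the host and port if your setup differs):

```python
import socket

# Send the same StatsD counter metric that the nc command above sends.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"custom.test.metric:1|c", ("127.0.0.1", 8125))
sock.close()
```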
Wait for the aggregation interval (e.g., 60 seconds) to pass. Then, navigate to the SigNoz UI to verify:
- Navigate to Metrics Explorer.
- Search for `custom.test.metric`.
- You should see the metric data in the chart.

Troubleshooting
If you don't see your metrics in SigNoz, check the following:
Port Configuration
Ensure that port 8125/UDP is open and accessible from your application to the OpenTelemetry Collector.
- Docker: Did you map the port when starting the container? Ensure your `docker run` command or `docker-compose.yaml` includes `-p 8125:8125/udp`.
- Kubernetes: If your Collector runs in K8s, ensure the UDP port is exposed in the Service definition and properly mapped in your `otel-agent` daemonset. See Configure K8s Infra for exposing additional receiver ports.
- Firewall: Check if firewalls (e.g., AWS Security Groups) are allowing UDP traffic on port `8125`.
Check Collector Logs
Enable debug logging on your OpenTelemetry Collector to see if it receives the stats. Merge this setting with your existing service.telemetry configuration rather than replacing it:
```yaml
service:
  telemetry:
    logs:
      level: debug
```
Check the Collector's output for any errors related to parsing StatsD messages.
Metrics Export Errors
If metrics are received but not exported downstream, there might be an issue with your exporter configuration (ingestion keys, endpoints). Look for `exporting failed` errors in the OTel Collector logs. Ensure your `otlp/signoz` exporter points to your SigNoz Cloud region endpoint with a valid ingestion key.
Next Steps
Now that you have your StatsD metrics flowing into SigNoz, you can use them to build custom dashboards and visualizations, and trigger alerts when things go wrong.
Get Help
If you need help with the steps in this topic, please reach out to us on SigNoz Community Slack.
If you are a SigNoz Cloud user, please use in product chat support located at the bottom right corner of your SigNoz instance or contact us at cloud-support@signoz.io.