I Built This Dashboard with OpenTelemetry to Monitor OpenClaw
OpenClaw has taken the world by storm over the last few weeks. But for people like me, who eat, sleep, and breathe technology, this looked like the best playground I'd stumbled into in a while.
I immediately got my hands dirty, experimenting with various connectors and channels, pushing it, breaking it, putting it back together. And somewhere along the way, I got weirdly attached. These little agents were out there doing things for me, fetching, reasoning, talking to APIs, and I started feeling almost responsible for them. Like, are they okay? Are they overworking? Are they burning through tokens and nobody's telling me?
Then one day the token limit hit annoyingly fast, and I realised I had no visibility into what was actually happening under the hood. I didn't just want to use my agents anymore, I wanted to look after them, know when they're struggling, know when they're stuck in a loop, catch errors before they escalate.
That's when I came across diagnostics-otel, a built-in plugin that handles telemetry collection and lets you route it to whatever observability backend you're already comfortable with. Basically, a health monitor for my little agents.
This is a note on how I'm keeping an eye on my OpenClaw agents with OpenTelemetry, because if they're going to work this hard for me, the least I can do is make sure they're running well.

What Type of Telemetry Does diagnostics-otel Provide?
Understanding what telemetry diagnostics-otel emits helps us plan our dashboard and alerts more effectively. OpenClaw uses OpenTelemetry internally for telemetry collection, and we get the following:
- Traces: spans for model usage and webhook/message processing.
- Metrics: counters and histograms covering token usage, cost, context size, run duration, message-flow counters, queue depth, and session state.
- Logs: the same structured records written to your Gateway log file, exported over OTLP when enabled.
The practical value is immediate. You get token cost attribution (which sessions are expensive and why), latency breakdown (is it the LLM call or the tool execution?), tool failure visibility, and error detection, all without writing a single line of custom instrumentation.
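To make "token cost attribution" concrete, here is a minimal sketch in plain Python. The session names, token counts, and per-token prices are all invented for illustration (they are not values the plugin emits, and real pricing varies by provider); the point is only the shape of the roll-up you would do once the token metrics land in your backend.

```python
# Hypothetical per-session token usage, as you might aggregate it
# from the plugin's token-usage metrics (names and numbers invented).
usage = {
    "session-a": {"input_tokens": 12_000, "output_tokens": 3_500},
    "session-b": {"input_tokens": 48_000, "output_tokens": 21_000},
}

# Assumed prices in USD per 1K tokens (check your provider's pricing).
PRICE_IN, PRICE_OUT = 0.003, 0.015

def session_cost(tokens: dict) -> float:
    """Estimate a session's cost from its input/output token counts."""
    return (tokens["input_tokens"] / 1000) * PRICE_IN \
         + (tokens["output_tokens"] / 1000) * PRICE_OUT

costs = {sid: round(session_cost(t), 4) for sid, t in usage.items()}
most_expensive = max(costs, key=costs.get)
print(costs, most_expensive)  # session-b dominates the spend here
```

In practice your backend's query language does this aggregation for you, but having the arithmetic spelled out makes it easier to sanity-check the dashboard numbers.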
You can check the names and types of the exported metrics in detail in the Official OpenClaw Documentation.
Setting up the Diagnostic-OTel plugin in under 10 minutes
Prerequisites
- The latest version of OpenClaw is installed and configured.
- A backend with an endpoint to receive telemetry. In this article, we will be using SigNoz Cloud.
Step 1: Enable the Plugin
The diagnostics-otel plugin ships with OpenClaw but is disabled by default. You can enable it via CLI:
openclaw plugins enable diagnostics-otel
Or add it directly to your config file (~/.openclaw/openclaw.json):
{
  "plugins": {
    "allow": ["diagnostics-otel"],
    "entries": {
      "diagnostics-otel": {
        "enabled": true
      }
    }
  }
}
Step 2: Configure the OTEL Exporter
You can configure the exporter via the CLI:
openclaw config set diagnostics.enabled true
openclaw config set diagnostics.otel.enabled true
openclaw config set diagnostics.otel.traces true
openclaw config set diagnostics.otel.metrics true
openclaw config set diagnostics.otel.logs true
openclaw config set diagnostics.otel.protocol http/protobuf
openclaw config set diagnostics.otel.endpoint "https://ingest.<region>.signoz.cloud:443"
openclaw config set diagnostics.otel.headers '{"signoz-ingestion-key":"<YOUR_SIGNOZ_INGESTION_KEY>"}'
openclaw config set diagnostics.otel.serviceName "openclaw-gateway"
If you are using SigNoz Cloud, follow our Ingestion Key guide to find your ingestion region and ingestion key.
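A common stumble at this step is a malformed endpoint value (missing scheme, or a path accidentally appended). Before restarting, you can sanity-check the string with a small helper like the one below. This is purely an illustrative script, not part of OpenClaw, and the `us` region in the example URL is a made-up placeholder:

```python
from urllib.parse import urlparse

def check_otlp_endpoint(endpoint: str) -> bool:
    """Rough pre-flight check for an OTLP/HTTP endpoint string:
    https scheme, a hostname present, and no stray path."""
    parsed = urlparse(endpoint.strip())
    return (
        parsed.scheme == "https"
        and bool(parsed.hostname)
        and parsed.path in ("", "/")
    )

# A correctly shaped SigNoz-style ingest URL passes...
print(check_otlp_endpoint("https://ingest.us.signoz.cloud:443"))  # True
# ...while a common mistake (missing scheme) fails.
print(check_otlp_endpoint("ingest.us.signoz.cloud:443"))          # False
```

It won't catch a wrong region or a bad ingestion key, but it rules out the most frequent copy-paste mistakes cheaply.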
Step 3: Check your config (Optional)
You can quickly check your config using the following command:
openclaw config get diagnostics
Your output should look like this:
{
  "enabled": true,
  "otel": {
    "enabled": true,
    "endpoint": "https://ingest.<region>.signoz.cloud:443",
    "protocol": "http/protobuf",
    "headers": {
      "signoz-ingestion-key": "<YOUR_SIGNOZ_INGESTION_KEY>"
    },
    "serviceName": "openclaw-gateway",
    "traces": true,
    "metrics": true,
    "logs": true,
    "sampleRate": 1,
    "flushIntervalMs": 5000
  }
}
Alternatively, you can inspect your ~/.openclaw/openclaw.json config file directly.
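If you prefer scripting this check, here is a small illustrative sketch (not an official OpenClaw tool) that flags the misconfigurations most likely to leave your dashboard empty. To keep it self-contained it embeds the sample output from above as a string; in real use you would load the `diagnostics` section of your config file instead.

```python
import json

# Sample `openclaw config get diagnostics` output (placeholders kept as-is).
config_json = """
{
  "enabled": true,
  "otel": {
    "enabled": true,
    "endpoint": "https://ingest.<region>.signoz.cloud:443",
    "protocol": "http/protobuf",
    "headers": {"signoz-ingestion-key": "<YOUR_SIGNOZ_INGESTION_KEY>"},
    "serviceName": "openclaw-gateway",
    "traces": true,
    "metrics": true,
    "logs": true
  }
}
"""

def find_problems(cfg: dict) -> list:
    """Return a list of misconfigurations; an empty list means all good."""
    problems = []
    otel = cfg.get("otel", {})
    if not cfg.get("enabled"):
        problems.append("diagnostics.enabled is not true")
    if not otel.get("enabled"):
        problems.append("diagnostics.otel.enabled is not true")
    if not otel.get("endpoint", "").startswith("https://"):
        problems.append("endpoint should be an https URL")
    for signal in ("traces", "metrics", "logs"):
        if not otel.get(signal):
            problems.append(signal + " export is disabled")
    return problems

print(find_problems(json.loads(config_json)))  # [] when everything is on
```

An empty list is the "all clear"; anything else tells you exactly which `openclaw config set` command to re-run.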
Step 4: Restart your OpenClaw gateway
openclaw gateway restart
Step 5: Visualize in SigNoz
Open SigNoz Cloud and navigate to the Services tab. If you have followed everything up to this point, the service name openclaw-gateway should be visible.

You can click on the service name to view the out-of-the-box dashboard provided by SigNoz.

Step 6: Custom Dashboard
You can import the custom dashboard JSON to create a new customized dashboard.
Walkthrough of the OpenClaw Overview dashboard showing LLM token usage, queue and session health metrics, and error logs.
Conclusion
When building autonomous workflows with OpenClaw, running blind isn't an option. Without tracking your model calls and tool executions, token budgets drain quickly and debugging agent loops becomes nearly impossible.
The built-in diagnostics-otel plugin makes fixing this straightforward. With no custom code required, you can connect it directly to SigNoz and see exactly where your tokens are going.