This page is relevant for both SigNoz Cloud and self-hosted SigNoz editions.

Getting Temporal Cloud Metrics into SigNoz

This guide shows how to collect Temporal Cloud metrics in SigNoz. You run an OpenTelemetry Collector in your infrastructure, configure it to scrape the Temporal Cloud OpenMetrics endpoint, and forward the metrics to SigNoz via OTLP.

Prerequisites

  • A Temporal Cloud account with permission to create service accounts and API keys
  • A SigNoz Cloud account (or a running self-hosted SigNoz instance) and an ingestion key
  • An OpenTelemetry Collector (contrib distribution) running in your infrastructure

Setup

Step 1: Create a Temporal API key

Create a service account with the Metrics Read-Only role in the Temporal Cloud UI, then generate an API key for that service account. Full instructions are in the Temporal API key guide.

Verify the key works before proceeding:

curl -H "Authorization: Bearer <TEMPORAL_API_KEY>" https://metrics.temporal.io/v1/metrics

Replace <TEMPORAL_API_KEY> with the key you generated. A successful response returns a text stream of Prometheus metric lines.
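If you want to sanity-check the payload programmatically, a minimal Python sketch that extracts the unique metric family names from a Prometheus/OpenMetrics text stream (the sample lines below are illustrative, not actual Temporal Cloud output):

```python
def metric_families(text: str) -> set[str]:
    """Return the set of metric family names in a Prometheus text payload."""
    names = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        # The metric name ends at the first '{' (labels) or whitespace (value).
        names.add(line.split("{", 1)[0].split(None, 1)[0])
    return names

# Illustrative sample; in practice, pass the body of the curl response above.
sample = """\
# TYPE temporal_cloud_v1_service_request_count gauge
temporal_cloud_v1_service_request_count{temporal_namespace="ns1"} 4.2
temporal_cloud_v1_workflow_success_count{temporal_namespace="ns1"} 1.0
"""
print(sorted(metric_families(sample)))
# → ['temporal_cloud_v1_service_request_count', 'temporal_cloud_v1_workflow_success_count']
```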

Step 2: Configure the OpenTelemetry Collector


Add the prometheus receiver and SigNoz otlp exporter to your otel-collector-config.yaml. If your collector already has other receivers, append the prometheus block and add it to the pipeline — do not replace your existing config.

otel-collector-config.yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: temporal-cloud
          static_configs:
            - targets:
                - 'metrics.temporal.io'
          scheme: https
          metrics_path: /v1/metrics
          scrape_interval: 60s
          honor_timestamps: true
          authorization:
            type: Bearer
            credentials: '<TEMPORAL_API_KEY>'

processors:
  batch:

exporters:
  otlp:
    endpoint: "ingest.<region>.signoz.cloud:443"
    tls:
      insecure: false
    headers:
      "signoz-ingestion-key": "<SIGNOZ_INGESTION_KEY>"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [otlp]

Replace these placeholders:

  • <TEMPORAL_API_KEY> — the API key you generated in Step 1
  • <region> — your SigNoz Cloud region (e.g. us, eu, in)
  • <SIGNOZ_INGESTION_KEY> — your SigNoz ingestion key

The credentials field inlines the API key directly in the config file, so anyone with read access to the file can extract the key. For production deployments, consider using credentials_file with a path to a file containing the key (for example, a file mounted from a Kubernetes Secret), or restrict file permissions to the collector's service account.
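As a sketch of the file-based variant (the path here is illustrative), the receiver's authorization block points at a file instead of inlining the key:

```yaml
          authorization:
            type: Bearer
            # Example path; mount the key from a secret store or Kubernetes
            # Secret and restrict read permissions to the collector process.
            credentials_file: /etc/otel/secrets/temporal-api-key
```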

Restart the collector to apply the config:

./otelcol-contrib --config ./otel-collector-config.yaml

Step 3: Reduce scrape cardinality (optional)

If you have many Temporal namespaces, use query parameters on metrics_path to filter what is scraped:

metrics_path: '/v1/metrics?namespaces=<your-namespace>'

You can also filter by metric name: ?metrics=temporal_cloud_v1_workflow_success_count. See high-cardinality management in the Temporal docs for details.
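Prometheus scrape configs can also express these filters with the params field instead of appending a query string to metrics_path. A sketch combining both filters (the namespace name is a placeholder):

```yaml
        - job_name: temporal-cloud
          scheme: https
          metrics_path: /v1/metrics
          params:
            namespaces: ['<your-namespace>']
            metrics: ['temporal_cloud_v1_workflow_success_count']
```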

Step 4: Import the dashboard

In SigNoz, go to Dashboards → New Dashboard → Import JSON.

Download the dashboard JSON from Temporal Cloud Metrics. The dashboard includes panels for workflow success rate, service request counts, latency percentiles, and worker poll metrics.

Validate

Metrics should appear in SigNoz within 2–3 minutes of the collector starting. To confirm:

  1. Go to Metrics Explorer in SigNoz and search for temporal_cloud_v1_. You should see metrics like temporal_cloud_v1_service_request_count and temporal_cloud_v1_workflow_success_count.
  2. Open the imported dashboard and verify panels are populated.
The imported dashboard includes panels such as Temporal Actions Metrics, Temporal Worker Poll Metrics, and Temporal Service Requests Metrics.

Migrating from the Prometheus query endpoint (v0)

If you were previously using the Temporal Cloud Prometheus query endpoint (v0), note these breaking changes in the v1 OpenMetrics endpoint:

  • Metric names changed from temporal_cloud_v0_* to temporal_cloud_v1_*
  • No rate() needed: metrics are pre-computed per-second rates with delta temporality — do not wrap them with rate(), increase(), or irate()
  • Latency percentiles are now explicit metrics (e.g., temporal_cloud_v1_service_latency_p99) instead of histogram buckets — histogram_quantile() no longer applies
  • Authentication changed from mTLS certificates to API keys with the global endpoint https://metrics.temporal.io/v1/metrics
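For example, a p99 latency panel migrates from a histogram_quantile() expression to a direct metric reference (the v0 bucket metric name below is illustrative):

```promql
# v0: percentile computed from histogram buckets
histogram_quantile(0.99, sum by (le) (rate(temporal_cloud_v0_service_latency_bucket[5m])))

# v1: pre-computed percentile; no rate() or histogram_quantile() needed
temporal_cloud_v1_service_latency_p99
```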

See the full migration guide for the complete metric name mapping table.

Troubleshooting

No metrics appear after 3 minutes

  • Likely cause: invalid API key, misconfigured receiver, or prometheus missing from the pipeline receivers list.
  • Fix: run the curl command from Step 1 to confirm the key is valid. Check collector logs for errors from prometheusreceiver. Confirm prometheus is listed under receivers in the metrics pipeline.
  • Verify: metrics with the prefix temporal_cloud_v1_ appear in Metrics Explorer.

Dashboard shows no data after metrics are visible

  • Likely cause: dashboard queries may reference old v0 metric names if you imported an older dashboard version.
  • Fix: confirm the dashboard queries use temporal_cloud_v1_* names. Re-download the latest dashboard JSON from the link in Step 4.
  • Verify: individual metric panels populate in the dashboard.

Next Steps

Get Help

If you need help with the steps in this topic, please reach out to us on SigNoz Community Slack.

If you are a SigNoz Cloud user, please use in product chat support located at the bottom right corner of your SigNoz instance or contact us at cloud-support@signoz.io.

Last updated: May 7, 2026
