Hasura provides built-in OpenTelemetry support for distributed tracing, metrics, and logs on Hasura Cloud and Enterprise editions. Starting from version v2.18.0, you can configure Hasura to export telemetry data directly to SigNoz.
Prerequisites
- Hasura Cloud or Hasura Self-Hosted Enterprise Edition
- Hasura GraphQL Engine v2.35.0 or later (recommended for all signals)
- An instance of SigNoz (either Cloud or Self-Hosted)
Configure OpenTelemetry Export
You can configure OpenTelemetry export for Hasura using either the Console UI or the CLI.
Configure via Console
- Log into your Hasura Console and navigate to the Settings tab (⚙).
- Click on OpenTelemetry Exporter.
- Configure your connection based on the type of telemetry you want to export:
- For traces:
  - Endpoint: `https://ingest.<region>.signoz.cloud:443/v1/traces`
  - Connection Type: HTTP/Protobuf
  - Data Types: Select `traces`
- For metrics:
  - Endpoint: `https://ingest.<region>.signoz.cloud:443/v1/metrics`
  - Connection Type: HTTP/Protobuf
  - Data Types: Select `metrics`
- For logs:
  - Endpoint: `https://ingest.<region>.signoz.cloud:443/v1/logs`
  - Connection Type: HTTP/Protobuf
  - Data Types: Select `logs`

- Under Headers, add:
  - Name: `signoz-ingestion-key`
  - Value: Your SigNoz ingestion key
- Optionally, under Attributes, set `service.name` to identify your Hasura instance (e.g., `hasura-prod`).

If you run multiple Hasura applications, configure different service.name attributes for each to identify and filter metrics more easily.
- Click Update, then toggle the Status button to enable the integration.
Verify these values:
- `<region>`: Your SigNoz Cloud region
- `<your-ingestion-key>`: Your SigNoz ingestion key
Configure via CLI
Alternatively, you can apply this configuration via the Hasura CLI by updating your metadata/opentelemetry.yaml file:
```yaml
status: enabled
data_types:
  - traces
  - metrics
  - logs
exporter_otlp:
  headers:
    - name: signoz-ingestion-key
      value: <your-ingestion-key>
  resource_attributes:
    - name: service.name
      value: hasura-prod
  otlp_traces_endpoint: https://ingest.<region>.signoz.cloud:443/v1/traces
  otlp_metrics_endpoint: https://ingest.<region>.signoz.cloud:443/v1/metrics
  otlp_logs_endpoint: https://ingest.<region>.signoz.cloud:443/v1/logs
  protocol: http/protobuf
  traces_propagators:
    - tracecontext
batch_span_processor:
  max_export_batch_size: 512
```
Apply the metadata:
```bash
hasura metadata apply
```
Available Telemetry
Hasura exports the following telemetry data to SigNoz:
Hasura traces operations across:
- Metadata APIs (`/v1/metadata`)
- Schema APIs (`/v2/query`)
- GraphQL API (`/v1/graphql`)
- Event triggers
- Scheduled triggers
Multiple operations are linked together with the same trace ID, enabling end-to-end request tracing.
Hasura exports OpenTelemetry metrics covering:
- API request rates and latencies
- Query execution times
- Event trigger performance
- Connection pool statistics
The available metrics are the same as those available via Prometheus. See Hasura Metrics Documentation.
All logs printed to the output stream are exported to SigNoz. Log structure follows the OpenTelemetry Logs Data Model:
- `body` contains the log message
- `severity` contains the log level
- `attributes.type` contains the Hasura log type
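For illustration, a single exported Hasura log record maps onto the Logs Data Model roughly as sketched below; the field values shown (the message, log type, and service name) are hypothetical examples, not output copied from a real instance:

```yaml
# Hypothetical shape of one exported Hasura log record (illustrative values)
body: "GET /v1/graphql returned 200"
severity: "info"                 # the Hasura log level
attributes:
  type: "http-log"               # the Hasura log type
resource:
  service.name: "hasura-prod"    # the attribute configured in the exporter
```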
Trace Data Connectors
Hasura Data Connectors run as separate services alongside the GraphQL Engine. Each connector handles communication with a specific database. Data connectors only support traces. To visualize the full timeline of a GraphQL request (including the actual database execution), you must also configure OpenTelemetry for your data connectors using environment variables in their respective deployments.
The GraphQL Data Connectors (which power Athena, MariaDB, MySQL, Oracle, Redshift, Snowflake, etc.) are built on top of Quarkus. Set the following environment variables on the connector container:
```bash
QUARKUS_OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443"
QUARKUS_OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
QUARKUS_OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<your-ingestion-key>"
QUARKUS_OTEL_RESOURCE_ATTRIBUTES="service.name=hasura-graphql-connector"
```
The Mongo Data Connector uses standard OpenTelemetry environment variables:
```bash
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://ingest.<region>.signoz.cloud:443/v1/traces"
OTEL_EXPORTER_OTLP_TRACES_PROTOCOL="http/protobuf"
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<your-ingestion-key>"
OTEL_SERVICE_NAME="hasura-mongo-connector"
OTEL_PROPAGATORS="tracecontext"
```
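As a sketch, these variables can be wired into a container deployment like so; the service name and image tag below are placeholders for your own setup, not values from the Hasura docs:

```yaml
# Hypothetical docker-compose fragment; adjust the service name and image tag
# to match your actual Mongo Data Connector deployment.
services:
  mongo-connector:
    image: hasura/mongo-data-connector:<tag>   # placeholder tag
    environment:
      OTEL_EXPORTER_OTLP_TRACES_ENDPOINT: "https://ingest.<region>.signoz.cloud:443/v1/traces"
      OTEL_EXPORTER_OTLP_TRACES_PROTOCOL: "http/protobuf"
      OTEL_EXPORTER_OTLP_HEADERS: "signoz-ingestion-key=<your-ingestion-key>"
      OTEL_SERVICE_NAME: "hasura-mongo-connector"
      OTEL_PROPAGATORS: "tracecontext"
```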
Verify these values:
- `<region>`: Your SigNoz Cloud region
- `<your-ingestion-key>`: Your SigNoz ingestion key
Validate
Verify data is appearing in SigNoz:
- Go to the Services section in SigNoz to see your Hasura service listed.
- Navigate to Traces to view distributed traces for your GraphQL operations.
- Use Dashboards > Metrics Explorer to query Hasura metrics.
- Check the Logs tab and filter by `service.name` to see Hasura logs.
Data should start appearing within a few minutes of enabling the integration.
Troubleshooting
Data is not appearing in SigNoz
- Check endpoint format: Ensure the endpoint URL includes the correct path suffix (`/v1/traces`, `/v1/metrics`, `/v1/logs`). Hasura does not auto-append these.
- Verify ingestion key: Confirm your `signoz-ingestion-key` header value is correct and active.
- Check region: Ensure the region in your endpoint matches your SigNoz Cloud region.
- Verify Hasura version: OpenTelemetry traces require Hasura v2.18.0+, metrics require v2.31.0+, and logs require v2.35.0+.
Connection errors
- If using Hasura Cloud, ensure your SigNoz endpoint uses HTTPS with port 443.
- If running Hasura as a Docker container, you may need to use `http://host.docker.internal:4318` for local collectors.
Trace context not propagating
Hasura supports W3C Trace Context for trace propagation (from v2.35.0+). If upstream services aren't receiving trace context, ensure they also support W3C Trace Context headers.
Setup OpenTelemetry Collector (Optional)
What is the OpenTelemetry Collector?
Think of the OTel Collector as a middleman between your app and SigNoz. Instead of your application sending data directly to SigNoz, it sends everything to the Collector first, which then forwards it along.
Why use it?
- Cleaning up data — Filter out noisy traces you don't care about, or remove sensitive info before it leaves your servers.
- Keeping your app lightweight — Let the Collector handle batching, retries, and compression instead of your application code.
- Adding context automatically — The Collector can tag your data with useful info like which Kubernetes pod or cloud region it came from.
- Future flexibility — Want to send data to multiple backends later? The Collector makes that easy without changing your app.
See Switch from direct export to Collector for step-by-step instructions to convert your setup.
Configure your existing Collector to accept data and forward it to SigNoz: ensure your otel-collector-config.yaml has the otlp HTTP receiver enabled and included in your pipelines, then point Hasura at the Collector endpoint (e.g., http://host.docker.internal:4318/v1/traces) instead of SigNoz directly.
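A minimal Collector configuration for this setup might look like the following sketch; the `<region>` and `<your-ingestion-key>` values are placeholders, and you should merge these sections into your existing otel-collector-config.yaml rather than replace it:

```yaml
# Minimal otel-collector-config.yaml sketch: receive OTLP over HTTP from
# Hasura and forward all three signals to SigNoz Cloud.
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
processors:
  batch: {}
exporters:
  otlphttp:
    endpoint: https://ingest.<region>.signoz.cloud:443
    headers:
      signoz-ingestion-key: <your-ingestion-key>
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```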
For more details, see Why use the OpenTelemetry Collector? and the Collector configuration guide.
Next Steps
- Set up dashboards to visualize Hasura performance
- Create alerts for critical metrics like error rates or high latency
- Explore distributed tracing to identify performance bottlenecks
- Query logs to debug issues in your Hasura setup
Get Help
If you need help with the steps in this topic, please reach out to us on SigNoz Community Slack.
If you are a SigNoz Cloud user, please use in product chat support located at the bottom right corner of your SigNoz instance or contact us at cloud-support@signoz.io.