This page is relevant for both SigNoz Cloud and self-hosted SigNoz editions.

Haystack Observability & Monitoring with OpenTelemetry

Overview

This guide walks you through setting up observability and monitoring for Haystack using OpenTelemetry and exporting traces, logs, and metrics to SigNoz. With this integration, you can observe and track traces, logs, and metrics for your Haystack applications and their LLM usage.

Monitoring Haystack in your AI applications with telemetry ensures full observability across your AI and LLM workflows. By leveraging SigNoz, you can analyze correlated traces, logs, and metrics in unified dashboards, configure alerts, and gain actionable insights to continuously improve reliability, responsiveness, and user experience.

Prerequisites

  • A SigNoz Cloud account with an ingestion key, or a running self-hosted SigNoz instance
  • A working Python environment with pip
  • An OpenAI API key for the example application in this guide

Monitoring Haystack

For more information on getting started with Haystack in your Python environment, refer to the Haystack quickstart guide.

No-code auto-instrumentation is recommended for a quick setup with minimal code changes. It's ideal when you want to get observability up and running without modifying your application code and are leveraging standard instrumentor libraries.

Step 1: Install the necessary packages in your Python environment.

pip install \
  opentelemetry-distro \
  opentelemetry-exporter-otlp \
  httpx \
  opentelemetry-instrumentation-httpx \
  opentelemetry-instrumentation-system-metrics \
  haystack-ai \
  openinference-instrumentation-haystack

Step 2: Add Automatic Instrumentation

Run opentelemetry-bootstrap to detect the packages installed in your environment and install the matching instrumentation libraries:

opentelemetry-bootstrap --action=install

Step 3: Configure logging level

To ensure logs are properly captured and exported, configure the root logger to emit logs at the DEBUG level or higher:

import logging

logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger("httpx").setLevel(logging.DEBUG)

This sets the minimum log level for the root logger to DEBUG, which ensures that logger.debug() calls and higher severity logs (INFO, WARNING, ERROR, CRITICAL) are captured by the OpenTelemetry logging auto-instrumentation and sent to SigNoz.
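Once the root logger is at DEBUG, any standard logging call in your application code flows through the same handler. A minimal sketch (the logger name and messages are illustrative, not part of the example application):

import logging

logger = logging.getLogger(__name__)

# With the root logger at DEBUG, both of these records are captured by the
# OpenTelemetry logging auto-instrumentation and exported to SigNoz.
logger.debug("building Haystack agent")
logger.info("agent run finished")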

Step 4: Create an example Haystack application

main.py
from haystack.components.agents import Agent
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage

# An agent backed by an OpenAI chat model with a simple system prompt
agent = Agent(
    chat_generator=OpenAIChatGenerator(model='gpt-4o-mini'),
    system_prompt="You are a helpful assistant.",
)

# Run the agent on a single user message and print the final answer
result = agent.run(messages=[ChatMessage.from_user("What is SigNoz?")])

print(result['last_message'].text)
πŸ“ Note

Before running this code, ensure that you have set the environment variable OPENAI_API_KEY with your generated OpenAI API key.
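If you want the script to fail fast when the key is missing, you can add a small, optional guard at the top of main.py (an illustrative check, not part of the original example):

import os

# Hypothetical guard: stop early with a clear message if the key is not set
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("Set OPENAI_API_KEY before running main.py")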

Step 5: Run your application with auto-instrumentation

Run your application with the following environment variables set. This configures OpenTelemetry to export traces, logs, and metrics to SigNoz Cloud and enables automatic log correlation:

OTEL_RESOURCE_ATTRIBUTES="service.name=<service_name>" \
OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443" \
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<your-ingestion-key>" \
OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
OTEL_TRACES_EXPORTER=otlp \
OTEL_METRICS_EXPORTER=otlp \
OTEL_LOGS_EXPORTER=otlp \
OTEL_PYTHON_LOG_CORRELATION=true \
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
opentelemetry-instrument <your_run_command>
  • <service_name>: The name of your service
  • <region>: Your SigNoz Cloud region
  • <your-ingestion-key>: Your SigNoz ingestion key
  • <your_run_command>: The command you use to run your application; in this case, python main.py
βœ… Info

Using self-hosted SigNoz? Most steps are identical. To adapt this guide, update the endpoint and remove the ingestion key header as shown in Cloud β†’ Self-Hosted.

View Traces, Logs, and Metrics in SigNoz

Your instrumented Haystack application should now automatically emit traces, logs, and metrics.

You should be able to view traces in SigNoz Cloud under the traces tab:

Haystack Trace View

When you click on a trace in SigNoz, you'll see a detailed view of the trace, including all associated spans, along with their events and attributes.

Haystack Detailed Trace View

You should be able to view logs in SigNoz Cloud under the logs tab. You can also view logs by clicking on the β€œRelated Logs” button in the trace view to see correlated logs:

Related Logs button
Haystack Logs View

When you click on any of these logs in SigNoz, you'll see a detailed view of the log, including attributes:

Haystack Detailed Log View

You should be able to see Haystack-related metrics in SigNoz Cloud under the metrics tab:

Haystack Metrics View

When you click on any of these metrics in SigNoz, you'll see a detailed view of the metric, including attributes:

Haystack Detailed Metrics View

Troubleshooting

If you don't see your telemetry data:

  1. Verify network connectivity - Ensure your application can reach SigNoz Cloud endpoints
  2. Check ingestion key - Verify your SigNoz ingestion key is correct
  3. Wait for data - OpenTelemetry batches data before sending, so wait 10-30 seconds after making API calls
  4. Try a console exporter - Enable a console exporter locally to confirm that your application is generating telemetry data before it's sent to SigNoz (see the sketch below this list)
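
One way to do this, sketched below with the OpenTelemetry Python SDK, is a small standalone script that prints spans to stdout instead of exporting them; run it on its own (without opentelemetry-instrument) to confirm that spans are being produced. The span name is illustrative:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Print finished spans to stdout instead of sending them over OTLP
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

# If this span is printed as JSON when the script runs, the SDK is producing telemetry
with tracer.start_as_current_span("console-exporter-smoke-test"):
    print("span emitted")

If you are running under opentelemetry-instrument, you can get the same effect without extra code by setting OTEL_TRACES_EXPORTER=console (and similarly for logs and metrics) on the run command.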

Next Steps

You can also check out our custom Haystack dashboard here, which provides specialized visualizations for monitoring Haystack usage in your applications. The dashboard includes pre-built charts tailored for LLM usage, along with import instructions to get started quickly.

Haystack Dashboard Template

Setup OpenTelemetry Collector (Optional)

What is the OpenTelemetry Collector?

Think of the OTel Collector as a middleman between your app and SigNoz. Instead of your application sending data directly to SigNoz, it sends everything to the Collector first, which then forwards it along.

Why use it?

  • Cleaning up data β€” Filter out noisy traces you don't care about, or remove sensitive info before it leaves your servers.
  • Keeping your app lightweight β€” Let the Collector handle batching, retries, and compression instead of your application code.
  • Adding context automatically β€” The Collector can tag your data with useful info like which Kubernetes pod or cloud region it came from.
  • Future flexibility β€” Want to send data to multiple backends later? The Collector makes that easy without changing your app.

See Switch from direct export to Collector for step-by-step instructions to convert your setup.

For more details, see Why use the OpenTelemetry Collector? and the Collector configuration guide.
