This page applies to both SigNoz Cloud and self-hosted SigNoz editions.

AutoGen Observability with SigNoz

Overview

This guide walks you through setting up observability and monitoring for AutoGen using OpenTelemetry and exporting logs, traces, and metrics to SigNoz. With this integration, you can observe the performance of the models behind your agents, capture request/response details, and track system-level metrics in SigNoz, giving you real-time visibility into latency, error rates, and usage trends for your AutoGen applications.

Instrumenting AutoGen in your AI applications with telemetry ensures full observability across your AI workflows, making it easier to debug issues, optimize performance, and understand user interactions. By leveraging SigNoz, you can analyze correlated traces, logs, and metrics in unified dashboards, configure alerts, and gain actionable insights to continuously improve reliability, responsiveness, and user experience.

Prerequisites

  • A SigNoz Cloud account with an active ingestion key
  • Internet access to send telemetry data to SigNoz Cloud
  • Python 3.10+ with AutoGen installed (pip install autogen-agentchat autogen-ext autogen-core)
  • pip installed for managing Python packages and, optionally but recommended, a Python virtual environment to isolate dependencies

Monitoring AutoGen

For more detailed information on tracing your AutoGen applications, refer to the AutoGen tracing documentation; for logging, refer to the AutoGen logging documentation.

No-code auto-instrumentation is recommended for quick setup with minimal code changes. It's ideal when you want to get observability up and running without modifying your application code and are leveraging standard instrumentor libraries.

Step 1: Install the necessary packages in your Python environment.

pip install \
  opentelemetry-api \
  opentelemetry-distro \
  opentelemetry-exporter-otlp \
  httpx \
  "autogen-agentchat" \
  "autogen-ext[openai]" \
  "autogen-core"

Step 2: Add Automatic Instrumentation

Run opentelemetry-bootstrap to detect the libraries installed in your environment and install the matching OpenTelemetry instrumentation packages:

opentelemetry-bootstrap --action=install

Step 3: Instrument your AutoGen application

For Traces:

Configure AutoGen to use OpenTelemetry tracing by passing the tracer provider to the runtime:

from opentelemetry import trace
from autogen_core import SingleThreadedAgentRuntime

tracer_provider = trace.get_tracer_provider()
single_threaded_runtime = SingleThreadedAgentRuntime(tracer_provider=tracer_provider)
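
If you prefer to configure OpenTelemetry in code instead of relying on opentelemetry-instrument (Step 5), the following minimal sketch sets up a tracer provider that exports spans to SigNoz over OTLP/gRPC. The service name, region, and ingestion key are the same placeholders used elsewhere in this guide:

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from autogen_core import SingleThreadedAgentRuntime

# Build a tracer provider that batches spans and ships them to SigNoz over OTLP/gRPC.
resource = Resource.create({"service.name": "<service_name>"})
tracer_provider = TracerProvider(resource=resource)
tracer_provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://ingest.<region>.signoz.cloud:443",
            headers=(("signoz-ingestion-key", "<your_ingestion_key>"),),
        )
    )
)
trace.set_tracer_provider(tracer_provider)

# Pass the same provider to the AutoGen runtime so its spans are exported too.
single_threaded_runtime = SingleThreadedAgentRuntime(tracer_provider=tracer_provider)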

For Logs:

Configure AutoGen logging to capture trace logs:

import logging

from autogen_core import TRACE_LOGGER_NAME

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(TRACE_LOGGER_NAME)
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)

📌 Note: Ensure this is configured before initializing your AutoGen agents to properly instrument your application.
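
If you are setting things up manually rather than running under the auto-instrumentation agent (which attaches a log handler for you when OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true, as in Step 5), here is a minimal sketch for exporting AutoGen's trace logs to SigNoz as OpenTelemetry log records. Note that the logs SDK currently lives under the _logs module path, and the endpoint and ingestion key are the usual placeholders:

import logging

from autogen_core import TRACE_LOGGER_NAME
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter

# Logger provider that batches log records and ships them to SigNoz over OTLP/gRPC.
logger_provider = LoggerProvider()
logger_provider.add_log_record_processor(
    BatchLogRecordProcessor(
        OTLPLogExporter(
            endpoint="https://ingest.<region>.signoz.cloud:443",
            headers=(("signoz-ingestion-key", "<your_ingestion_key>"),),
        )
    )
)

# Route AutoGen's trace logger through the OpenTelemetry handler.
logger = logging.getLogger(TRACE_LOGGER_NAME)
logger.addHandler(LoggingHandler(level=logging.DEBUG, logger_provider=logger_provider))
logger.setLevel(logging.DEBUG)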

Step 4: Run an example

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
from opentelemetry import trace
from autogen_core import (
    SingleThreadedAgentRuntime,
    TRACE_LOGGER_NAME,
)
import logging
import asyncio


logging.basicConfig(level=logging.WARNING)


logger = logging.getLogger(TRACE_LOGGER_NAME)
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)

# Get the tracer provider from your application
tracer_provider = trace.get_tracer_provider()

# for single threaded runtime
single_threaded_runtime = SingleThreadedAgentRuntime(tracer_provider=tracer_provider)


# Define a model client.
model_client = OpenAIChatCompletionClient(
    model="gpt-4o-mini"
)


# Define a simple function tool that the agent can use.
# For this example, we use a fake weather tool for demonstration purposes.
async def get_weather(city: str) -> str:
    """Get the weather for a given city."""
    return f"The weather in {city} is 73 degrees and Sunny."


# Define an AssistantAgent with the model, tool, system message, and reflection enabled.
# The system message instructs the agent via natural language.
agent = AssistantAgent(
    name="weather_agent",
    model_client=model_client,
    tools=[get_weather],
    system_message="You are a helpful assistant.",
    reflect_on_tool_use=True,
    model_client_stream=True,  # Enable streaming tokens from the model client.
)


# Run the agent and stream the messages to the console.
async def main() -> None:
    await Console(agent.run_stream(task="What is the weather in New York?"))
    # Close the connection to the model client.
    await model_client.close()

asyncio.run(main())

📌 Note: Before running this code, ensure that you have set the environment variable OPENAI_API_KEY with your generated API key.
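
As a purely illustrative sanity check (not part of the AutoGen example itself), you can fail fast if the key is missing before creating the model client:

import os

# Stop early with a clear error if the OpenAI API key has not been exported.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set; export it before running this example.")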

Step 5: Run your application with auto-instrumentation

OTEL_RESOURCE_ATTRIBUTES="service.name=<service_name>" \
OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443" \
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<your_ingestion_key>" \
OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
OTEL_TRACES_EXPORTER=otlp \
OTEL_METRICS_EXPORTER=otlp \
OTEL_LOGS_EXPORTER=otlp \
OTEL_PYTHON_LOG_CORRELATION=true \
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
opentelemetry-instrument <your_run_command>
  • <service_name> is the name of your service
  • Set the <region> to match your SigNoz Cloud region
  • Replace <your_ingestion_key> with your SigNoz ingestion key
  • Replace <your_run_command> with the actual command you would use to run your application. For example: python main.py
✅ Info

Using self-hosted SigNoz? Most steps are identical. To adapt this guide, update the endpoint and remove the ingestion key header as shown in Cloud β†’ Self-Hosted.

View Traces, Logs, and Metrics in SigNoz

Your AutoGen application should now automatically emit traces, logs, and metrics.

You should be able to view traces in SigNoz Cloud under the traces tab:

AutoGen Trace View

When you click on a trace in SigNoz, you'll see a detailed view of the trace, including all associated spans, along with their events and attributes.

AutoGen Detailed Trace View

You should be able to view logs in SigNoz Cloud under the logs tab. You can also view correlated logs by clicking the "Related Logs" button in the trace view:

Related logs button
AutoGen Logs View

When you click on any of these logs in SigNoz, you'll see a detailed view of the log, including attributes:

AutoGen Detailed Logs View

You should be able to see AutoGen-related metrics in SigNoz Cloud under the metrics tab:

AutoGen Metrics View

When you click on any of these metrics in SigNoz, you'll see a detailed view of the metric, including attributes:

AutoGen Detailed Metrics View

Troubleshooting

If you don't see your telemetry data:

  1. Verify network connectivity - Ensure your application can reach SigNoz Cloud endpoints
  2. Check ingestion key - Verify your SigNoz ingestion key is correct
  3. Wait for data - OpenTelemetry batches data before sending, so wait 10-30 seconds after making API calls
  4. Try a console exporter - Enable a console exporter locally to confirm that your application is generating telemetry data before it is sent to SigNoz (see the sketch below)
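
For example, one local way to do this (a sketch, not tied to any SigNoz configuration) is to register a console span exporter on a fresh SDK tracer provider and pass that provider to your AutoGen runtime. Alternatively, set OTEL_TRACES_EXPORTER=console in the Step 5 command to have opentelemetry-instrument print spans to stdout.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# A throwaway provider that prints every finished span to stdout.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Pass `provider` as tracer_provider to SingleThreadedAgentRuntime to see
# AutoGen spans printed locally before wiring up the OTLP exporter.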

Next Steps

You can also check out our custom AutoGen dashboard here, which provides specialized visualizations for monitoring AutoGen usage in your applications. The dashboard includes pre-built charts specifically tailored for LLM usage, along with import instructions to get started quickly.

AutoGen Dashboard
AutoGen Dashboard Template

