SigNoz Cloud - This page is relevant for SigNoz Cloud editions.

Pydantic AI Observability with SigNoz

Overview

This guide walks you through setting up observability and monitoring for Pydantic AI using OpenTelemetry and exporting logs, traces, and metrics to SigNoz. With this integration, you can observe model and agent performance, capture request/response details, and track system-level metrics in SigNoz, giving you real-time visibility into latency, error rates, and usage trends for your Pydantic AI applications.

Instrumenting Pydantic AI in your AI applications with telemetry ensures full observability across your AI workflows, making it easier to debug issues, optimize performance, and understand user interactions. By leveraging SigNoz, you can analyze correlated traces, logs, and metrics in unified dashboards, configure alerts, and gain actionable insights to continuously improve reliability, responsiveness, and user experience.

Prerequisites

  • A SigNoz Cloud account with an active ingestion key
  • Internet access to send telemetry data to SigNoz Cloud
  • Pydantic AI integrated into your Python application
  • pip installed for managing Python packages, and (optional but recommended) a Python virtual environment to isolate dependencies

Monitoring Pydantic AI

For more detailed information on instrumenting your Pydantic AI applications, click here.

No-code auto-instrumentation is recommended for a quick setup with minimal code changes. It's ideal when you want to get observability up and running without modifying your application code and are relying on standard instrumentation libraries.

Step 1: Install the necessary packages in your Python environment.

pip install \
  opentelemetry-distro \
  opentelemetry-exporter-otlp \
  httpx \
  opentelemetry-instrumentation-httpx \
  pydantic-ai

Step 2: Add Automatic Instrumentation

Running opentelemetry-bootstrap with --action=install detects the instrumentable packages in your environment (such as httpx) and installs the matching OpenTelemetry instrumentation libraries.

opentelemetry-bootstrap --action=install

Step 3: Instrument your Pydantic AI application

After setting up the OpenTelemetry configurations for traces, logs, and metrics, initialize Pydantic AI instrumentation by calling Agent.instrument_all():

from pydantic_ai.agent import Agent

# Initialize Pydantic AI instrumentation
Agent.instrument_all()

This call enables automatic tracing, logs, and metrics collection for all Pydantic AI agents in your application.

📌 Note: Ensure this is called before any Pydantic AI-related calls so that instrumentation is configured properly for your application.
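
In this guide, the exporter configuration itself is supplied through the environment variables shown in Step 5. If you prefer to configure it in code instead, a minimal sketch for traces might look like the following (this assumes the OTLP gRPC exporter installed in Step 1; the service name, region, and ingestion key placeholders are the same ones used later in this guide):

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from pydantic_ai.agent import Agent

# Identify your service in SigNoz via resource attributes.
resource = Resource.create({"service.name": "<service_name>"})

# Export spans over OTLP/gRPC to SigNoz Cloud.
exporter = OTLPSpanExporter(
    endpoint="https://ingest.<region>.signoz.cloud:443",
    headers=(("signoz-ingestion-key", "<your_ingestion_key>"),),
)

provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Instrument Pydantic AI only after the tracer provider is in place.
Agent.instrument_all()

A similar pattern applies to logs and metrics with the corresponding SDK providers; the environment-variable approach shown below remains the simpler option for most setups.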

Step 4: Run an example

from pydantic_ai import Agent, RunContext
import asyncio

Agent.instrument_all()

roulette_agent = Agent(
    'openai:gpt-4o',
    deps_type=int,
    system_prompt=(
        'Use the `roulette_wheel` function to see if the '
        'customer has won based on the number they provide.'
    ),
    instrument=True
)

@roulette_agent.tool
async def roulette_wheel(ctx: RunContext[int], square: int) -> str:
    """check if the square is a winner"""
    return 'winner' if square == ctx.deps else 'loser'


async def main():
    success_number = 18
    result = await roulette_agent.run('Put my money on square eighteen', deps=success_number)
    print(result.output)

if __name__ == '__main__':
    asyncio.run(main())

📌 Note: Pydantic AI supports a variety of model providers for LLMs. In this example, we're using OpenAI. Before running this code, ensure that you have set the environment variable OPENAI_API_KEY with your generated API key.
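
If you want the example to fail with a clear message when the key is missing, you could add a small pre-flight check like this (a hypothetical guard, not part of the original example):

import os

# Hypothetical pre-flight check: abort early if OPENAI_API_KEY is not set.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("Set the OPENAI_API_KEY environment variable before running this example.")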

Step 5: Run your application with auto-instrumentation

OTEL_RESOURCE_ATTRIBUTES="service.name=<service_name>" \
OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443" \
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<your_ingestion_key>" \
OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
OTEL_TRACES_EXPORTER=otlp \
OTEL_METRICS_EXPORTER=otlp \
OTEL_LOGS_EXPORTER=otlp \
OTEL_PYTHON_LOG_CORRELATION=true \
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
opentelemetry-instrument <your_run_command>

  • <service_name> is the name of your service
  • Set the <region> to match your SigNoz Cloud region
  • Replace <your_ingestion_key> with your SigNoz ingestion key
  • Replace <your_run_command> with the actual command you would use to run your application. For example: python main.py
✅ Info

Using self-hosted SigNoz? Most steps are identical. To adapt this guide, update the endpoint and remove the ingestion key header as shown in Cloud β†’ Self-Hosted.

View Traces, Logs, and Metrics in SigNoz

Your Pydantic AI application should now automatically emit traces, logs, and metrics.
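
If you want several related agent runs to appear under a single trace in SigNoz, one option is to wrap them in a parent span using the standard OpenTelemetry tracer API. A minimal sketch (the tracer and span names are illustrative):

from opentelemetry import trace
import asyncio
from pydantic_ai import Agent

Agent.instrument_all()

# Illustrative tracer name; any string identifying your application works.
tracer = trace.get_tracer("roulette-app")

agent = Agent('openai:gpt-4o', system_prompt='Answer briefly.')

async def main():
    # Both runs become children of this span, so they show up as one trace in SigNoz.
    with tracer.start_as_current_span("roulette-session"):
        first = await agent.run('Say hello')
        second = await agent.run('Say goodbye')
        print(first.output, second.output)

if __name__ == '__main__':
    asyncio.run(main())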

You should be able to view traces in SigNoz Cloud under the traces tab:

Pydantic AI Trace View

When you click on a trace in SigNoz, you'll see a detailed view of the trace, including all associated spans, along with their events and attributes.

Pydantic AI Detailed Trace View

You should be able to view logs in SigNoz Cloud under the logs tab. You can also click the "Related Logs" button in the trace view to see correlated logs:

Related Logs button
Pydantic AI Logs View

When you click on any of these logs in SigNoz, you'll see a detailed view of the log, including attributes:

Pydantic AI Detailed Log View

You should be able to see Pydantic AI-related metrics in SigNoz Cloud under the metrics tab:

Pydantic AI Metrics View

When you click on any of these metrics in SigNoz, you'll see a detailed view of the metric, including attributes:

Pydantic AI Detailed Metrics View

Dashboard

You can also check out our custom Pydantic AI dashboard here, which provides specialized visualizations for monitoring Pydantic AI usage in your applications. The dashboard includes pre-built charts tailored for LLM usage, along with import instructions to get started quickly.

Pydantic AI Dashboard Template
