OpenAI OpenTelemetry Instrumentation
This document contains instructions on how to set up OpenTelemetry instrumentation for your OpenAI applications and view your application traces, logs, and metrics in SigNoz.
Requirements
- Python 3.8 or newer
- OpenAI Python library (`openai >= 1.0.0`)
- Valid OpenAI API key
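To confirm your environment meets these requirements, you can run a quick check before continuing (a minimal sketch; it assumes the `openai` package is already installed in the environment you will instrument):

```python
# check_env.py - optional sanity check for the requirements above
import os
import sys

import openai

print("Python:", sys.version.split()[0])            # expect 3.8 or newer
print("openai:", openai.__version__)                 # expect 1.0.0 or newer
print("OPENAI_API_KEY set:", "OPENAI_API_KEY" in os.environ)
```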
Setup
Step 1. Create a virtual environment
python3 -m venv .venv
source .venv/bin/activate
Step 2. Install the OpenTelemetry dependencies
pip install opentelemetry-distro~=0.51b0
pip install opentelemetry-exporter-otlp~=1.30.0
pip install opentelemetry-instrumentation-openai-v2
Step 3. Add automatic instrumentation
opentelemetry-bootstrap --action=install
Step 4. Run your application
OTEL_SERVICE_NAME=<service-name> \
OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443" \
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<your-ingestion-key>" \
OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true \
OPENAI_API_KEY=<your-openai-key> \
opentelemetry-instrument <your_run_command>
Environment Variables Explained:
- `OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED` - Enables automatic logging instrumentation
- `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` - Captures prompts and completions as logs
- Set the `<region>` to match your SigNoz Cloud region
- Replace `<your-ingestion-key>` with your SigNoz ingestion key
- Replace `<service-name>` with the name of your service
- Replace `<your-openai-key>` with your OpenAI API key
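For reference, `<your_run_command>` is whatever normally starts your application, for example `python app.py`. A minimal application you could instrument this way might look like the following (a hypothetical `app.py`; the model name and prompt are illustrative, and the client reads `OPENAI_API_KEY` from the environment):

```python
# app.py - hypothetical minimal application to run under opentelemetry-instrument
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment


def main():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(response.choices[0].message.content)


if __name__ == "__main__":
    main()
```

With this file, the run command above becomes `opentelemetry-instrument python app.py`.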
What data gets captured?
This instrumentation captures comprehensive telemetry data for your OpenAI applications:
Traces
- Spans for each OpenAI API call with timing information
- Operation details such as model name and max tokens
- Token usage including input and output
- Request and response metadata such as finish reason and model used
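These spans are emitted automatically for every OpenAI API call. If you also want them nested under your own application spans, you can open a parent span with the OpenTelemetry API before calling OpenAI (a minimal sketch; the tracer and span names are illustrative):

```python
# Wrap OpenAI calls in a custom parent span so they group under one trace
from openai import OpenAI
from opentelemetry import trace

client = OpenAI()
tracer = trace.get_tracer("my-openai-app")  # illustrative tracer name

with tracer.start_as_current_span("handle-user-request"):
    # Any OpenAI call made inside this block is recorded as a child span
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Summarize OpenTelemetry in one line."}],
    )
    print(response.choices[0].message.content)
```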
Logs
- Structured logs for each API call when `OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true` is set
- Message content logs (prompts and completions) when `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true` is enabled
- Error logs for failed API calls with detailed error information and stack traces
- Performance logs showing request duration and timing information
Log Levels and Content:
- `INFO` level logs for successful API calls with metadata
- `ERROR` level logs for failed requests with error details
- `DEBUG` level logs for detailed request/response information (when debug logging is enabled)
Metrics
- Duration metrics showing how long OpenAI calls take
- Token usage metrics tracking consumption over time
- Request rate metrics showing API call frequency
- Error rate metrics for monitoring API failures
More details about the metrics can be found here.
Validating instrumentation
Validate your traces, logs, and metrics in SigNoz:
- Trigger OpenAI API calls in your app. Make several API calls to generate some data, then wait for some time.
- In SigNoz, open the `Services` tab. Hit the `Refresh` button on the top right corner, and your application should appear in the list of `Applications`.
- Go to the `Traces` tab, and apply relevant filters to see your application's traces.
- Check the `Logs` tab to see captured logs from your OpenAI calls.
- Visit the `Metrics` tab to view token usage and performance metrics.



Capturing Message Content (Optional)
By default, message content such as prompts and completions is not captured. To capture message content as log events, set the environment variable:
export OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
Note: Be cautious when enabling this in production as it may capture sensitive user data. This feature is already included in the run command above.
Troubleshooting your installation
Spans are not being reported
If spans are not being reported to SigNoz, try enabling the console exporter, which writes JSON-formatted trace data to the console, by setting the environment variable `OTEL_TRACES_EXPORTER=console`.
OTEL_SERVICE_NAME=my-openai-app \
OTEL_TRACES_EXPORTER=console \
opentelemetry-instrument python app.py
You should see trace data in your console output that looks like:
{
"name": "chat_completions_create",
"context": {
"trace_id": "0xedb7caf0c8b082a9578460a201759193",
"span_id": "0x57cf7eee198e1fed",
"trace_state": "[]"
},
"kind": "SpanKind.CLIENT",
"parent_id": null,
"start_time": "2025-01-15T10:30:00.804758Z",
"end_time": "2025-01-15T10:30:01.204805Z",
"status": {
"status_code": "UNSET"
},
"attributes": {
"gen_ai.system": "openai",
"gen_ai.request.model": "gpt-4o-mini",
"gen_ai.usage.total_tokens": 150
},
"events": [],
"links": [],
"resource": {
"telemetry.sdk.language": "python",
"telemetry.sdk.name": "opentelemetry",
"telemetry.sdk.version": "1.30.0",
"service.name": "my-openai-app"
}
}
Common Issues
If you don't see your telemetry data:
- Check your OpenAI API key - Make sure the `OPENAI_API_KEY` environment variable is set
- Verify network connectivity - Ensure your application can reach SigNoz Cloud endpoints
- Check ingestion key - Verify your SigNoz ingestion key is correct
- Wait for data - OpenTelemetry batches data before sending, so wait 10-30 seconds after making API calls
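If the zero-code agent still produces no OpenAI spans, you can also try instrumenting programmatically at startup and printing spans to the console, which removes `opentelemetry-instrument` from the picture. A minimal sketch, assuming the package exposes `OpenAIInstrumentor` under `opentelemetry.instrumentation.openai_v2` (check your installed version if the import path differs):

```python
# manual_check.py - hypothetical standalone check that bypasses opentelemetry-instrument
from openai import OpenAI
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor  # assumed import path

# Set up a tracer provider that prints every finished span to stdout
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

# Instrument the OpenAI client library against the global tracer provider
OpenAIInstrumentor().instrument()

client = OpenAI()
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)
# If a span is printed here but nothing reaches SigNoz, the problem is more
# likely the endpoint, ingestion key, or network than the instrumentation itself.
```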