✅ Info
Before using this dashboard, instrument your Pipecat applications with OpenTelemetry and configure export to SigNoz. See the Pipecat observability guide for complete setup instructions.
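If you have not wired up the export yet, the following is a minimal sketch of sending traces to SigNoz over OTLP from a Python application. The endpoint, region, ingestion key, and service name are placeholders you would replace with your own values; the observability guide linked above remains the authoritative reference.

```python
# Minimal sketch: export OpenTelemetry traces to SigNoz over OTLP/gRPC.
# Endpoint, region, key, and service name below are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "pipecat-voice-agent"})  # hypothetical service name
)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://ingest.<region>.signoz.cloud:443",    # placeholder SigNoz endpoint
            headers=(("signoz-ingestion-key", "<your-key>"),),      # placeholder ingestion key
        )
    )
)
trace.set_tracer_provider(provider)
```

Pipecat's own tracing hooks then emit spans through this provider; the linked guide covers the framework-specific switches.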
This dashboard offers a clear view into Pipecat usage and performance. It highlights key metrics such as model distribution, error rates, request volumes, and latency trends. Teams can also review detailed error records to understand adoption patterns and optimize for reliability and efficiency.
Dashboard Preview

To import this dashboard into SigNoz, go to Dashboards → + New dashboard → Import JSON and import the dashboard JSON.
What This Dashboard Monitors
This dashboard uses OpenTelemetry data to track critical performance metrics for your Pipecat applications, helping you:
- Track Reliability: Monitor error rates to identify reliability issues and ensure applications maintain a smooth, dependable experience.
- Analyze Model Adoption: Understand which AI models (via providers like OpenAI, Azure OpenAI, etc.) are being used through Pipecat to track preferences and measure adoption of different models.
- Monitor Usage Patterns: Observe token consumption and request volume trends over time to spot adoption curves, peak cycles, and unusual spikes.
- Ensure Responsiveness: Track P95 latency to surface potential slowdowns, spikes, or regressions and maintain consistent user experience.
- Understand Service Distribution: See which services and programming languages are leveraging Pipecat across your stack.
Metrics Included
Token Usage Metrics
- Total Token Usage (Input & Output): Displays the split between input tokens (user prompts) and output tokens (model responses), showing exactly how much work the system is doing over time.
- Token Usage Over Time: Time series visualization showing token consumption trends to identify adoption patterns, peak cycles, and baseline activity.
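Both token panels read the token counts that the instrumentation attaches to LLM spans. As a rough illustration, and assuming the spans follow the OpenTelemetry GenAI semantic conventions (attribute names such as gen_ai.usage.input_tokens come from that convention, not from this dashboard), a manually created span would carry the same fields:

```python
# Illustrative only: attach GenAI-convention attributes to a span,
# assuming the dashboard aggregates gen_ai.* fields (placeholder values).
from opentelemetry import trace

tracer = trace.get_tracer("pipecat-demo")  # hypothetical tracer name

with tracer.start_as_current_span("llm_request") as span:
    span.set_attribute("gen_ai.request.model", "gpt-4o")     # model used for the call
    span.set_attribute("gen_ai.usage.input_tokens", 412)     # tokens in the prompt
    span.set_attribute("gen_ai.usage.output_tokens", 96)     # tokens in the completion
```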
Performance & Reliability
- Total Error Rate: Tracks the percentage of Pipecat calls that return errors, providing a quick way to identify reliability issues.
- Latency (P95 Over Time): Measures the 95th percentile latency of requests over time to surface potential slowdowns and ensure consistent responsiveness (see the sketch after this list for how P95 is derived).
- Average Latency of TTS Over Time for Voice Agent: Monitors the average text-to-speech latency trends for voice agents, helping identify performance degradation in speech synthesis that could impact user experience.
- Average Latency of STT Over Time: Tracks the average speech-to-text latency trends, enabling teams to monitor voice recognition performance and identify potential delays in processing user speech input.
- HTTP Request Duration: Monitors the duration of outbound HTTP requests made during LLM calls, helping identify network bottlenecks and API response time patterns that impact overall Pipecat performance.
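For reference, the P95 value shown in the latency panel is the duration below which 95% of requests fall. A quick way to sanity-check the panel against raw span durations (the sample values below are made up):

```python
# Worked example: 95th percentile of request durations (milliseconds).
# Sample values are made up; real data would come from exported spans.
durations_ms = sorted([120, 135, 150, 180, 210, 230, 260, 300, 410, 950])

rank = 0.95 * (len(durations_ms) - 1)                 # fractional index for P95
lower = durations_ms[int(rank)]
upper = durations_ms[min(int(rank) + 1, len(durations_ms) - 1)]
p95 = lower + (rank - int(rank)) * (upper - lower)    # linear interpolation

print(f"P95 latency: {p95:.0f} ms")                   # ~707 ms for this sample
```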
Usage Analysis
- LLM Model Distribution: Shows which AI models from various providers (OpenAI, Azure OpenAI, etc.) are being called through Pipecat, helping track preferences and measure adoption across different models.
- TTS Model Distribution: Displays the distribution of text-to-speech models being used across voice agents, helping track which TTS providers and models are most popular.
- STT Model Distribution: Shows the distribution of speech-to-text models being utilized, enabling teams to understand which voice recognition models are being adopted.
- Conversations Over Time: Captures the volume of conversations using Pipecat over time, revealing demand patterns and high-traffic windows.
- Average Number of Turns per Conversation: Tracks the mean number of back-and-forth exchanges in each conversation to understand engagement depth and interaction patterns.
- Number of Conversations Below Set Threshold Turns: Counts conversations that fall below a configured turn threshold, helping identify potential issues with conversation quality or premature terminations.
- List of Conversations Below Set Threshold Turns (Trace ID): Detailed table listing specific conversation trace IDs that fall below the turn threshold for deeper investigation and debugging.
- Services and Languages Using Pipecat: Breakdown showing where Pipecat is being adopted across different services and programming languages in your stack.
- Pipecat Logs: Comprehensive list of all generated logs for Pipecat applications associated with the given service name.
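The logs panel filters on the same service.name resource attribute as the traces. If your application's Python logs are not yet reaching SigNoz, a minimal sketch of forwarding them over OTLP looks like this (module paths are those of the OpenTelemetry Python SDK at the time of writing; endpoint and key are placeholders):

```python
# Minimal sketch: forward Python logging records to SigNoz over OTLP,
# tagged with the same service.name used for traces (placeholder values).
import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource

logger_provider = LoggerProvider(
    resource=Resource.create({"service.name": "pipecat-voice-agent"})  # match the trace resource
)
logger_provider.add_log_record_processor(
    BatchLogRecordProcessor(
        OTLPLogExporter(
            endpoint="https://ingest.<region>.signoz.cloud:443",       # placeholder endpoint
            headers=(("signoz-ingestion-key", "<your-key>"),),         # placeholder key
        )
    )
)
set_logger_provider(logger_provider)

# Route standard-library logging through OpenTelemetry.
logging.getLogger().addHandler(LoggingHandler(logger_provider=logger_provider))
logging.getLogger(__name__).info("voice agent started")
```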
Error Tracking
- Error Records: Table logging all recorded errors with clickable records that link to the originating trace for detailed error investigation.
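Errors surface in this table when spans carry an error status (and, typically, a recorded exception event), which is also what links each record back to its originating trace. A minimal hand-instrumented sketch, in case you want application-level failures to show up here as well (span and function names are illustrative):

```python
# Sketch: mark a span as errored so it appears in error records
# and links back to its trace (names here are illustrative).
from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer("pipecat-demo")

def risky_llm_call():
    raise RuntimeError("upstream provider timed out")  # stand-in failure

with tracer.start_as_current_span("llm_request") as span:
    try:
        risky_llm_call()
    except RuntimeError as exc:
        span.record_exception(exc)                            # attach the exception as a span event
        span.set_status(Status(StatusCode.ERROR, str(exc)))   # flag the span as an error
```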