This page applies to both SigNoz Cloud and self-hosted SigNoz editions.

Temporal AI Agents Dashboard

Info

Before using this dashboard, instrument your Temporal applications with OpenTelemetry and configure export to SigNoz. See the Temporal observability guide for complete setup instructions.
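
If you have not configured export yet, a minimal sketch of pointing a Temporal worker process's OpenTelemetry tracer at SigNoz might look like the following. The endpoint and header name are placeholders based on common SigNoz Cloud setups, not guaranteed values; take the exact endpoint, key, and any Temporal-specific wiring from the Temporal observability guide and your SigNoz account.

```python
# Minimal sketch: export traces from a Temporal worker process to SigNoz.
# Endpoint and header values below are placeholders; confirm them against
# the Temporal observability guide and your SigNoz ingestion settings.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "temporal-ai-agent"})
)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://ingest.<region>.signoz.cloud:443",        # placeholder endpoint
            headers={"signoz-ingestion-key": "<your-ingestion-key>"},   # placeholder header
        )
    )
)
trace.set_tracer_provider(provider)
```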

This dashboard provides visibility into AI agent workloads running on Temporal. It tracks LLM usage patterns, agent performance, application health, and Temporal Cloud infrastructure metrics to help teams monitor and optimize their agentic AI implementations.

Dashboard Preview

Temporal Dashboard (dashboard preview)

To use this dashboard, import the Temporal Dashboard Template JSON in SigNoz via Dashboards → + New dashboard → Import JSON.

What This Dashboard Monitors

This dashboard uses OpenTelemetry data to track critical performance metrics for your Temporal usage across four areas:

  • LLM Performance: Monitor token consumption, model distribution, request volumes, latency trends, and cache utilization to optimize AI workload efficiency and costs.
  • Application Health: Track service adoption, request patterns, HTTP performance, error rates, and logs to ensure reliability and identify issues across your Temporal applications.
  • Agent Activity: Understand agent distribution, usage patterns, and trace details to optimize agent-based workflows and troubleshoot agent execution.
  • Temporal Cloud Infrastructure: Monitor worker poll success rates, synchronization, and timeout patterns to maintain healthy task distribution and identify infrastructure issues.

Metrics Included

LLM

  • Total Token Usage (Input & Output): Tracks LLM token consumption across your AI agent workflows. By separating input tokens (prompts sent to models) from output tokens (model responses), you can monitor usage efficiency, identify cost drivers, and track adoption trends; the instrumentation sketch after this list shows the span attributes these panels typically read.
  • Model Calls: Shows which LLM models (GPT-4, Claude, Gemini, etc.) are being invoked most frequently, helping you track model preferences, measure adoption of newer releases, and align usage with performance or cost goals.
  • Token Distribution by Model: This breakdown reveals how token usage is spread across different model variants, helping you identify which models drive the most consumption and optimize your workload distribution for cost and performance.
  • Requests Over Time: Tracks the volume of agent requests made through your Temporal agent workflows over time, revealing demand patterns and peak usage windows so you can plan capacity and cost controls.
  • Latency (P95 Over Time): Monitors the 95th percentile latency of LLM API requests in your Temporal agent workflows, helping identify performance bottlenecks and ensure responsive AI interactions.
  • Cache Utilization Rate: Tracks how effectively LLM response caching (semantic or prompt caching) is being utilized. Higher cache hit rates reduce costs and latency by reusing previous LLM responses for similar requests.
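
The LLM panels above are generally driven by span attributes that follow the OpenTelemetry GenAI semantic conventions. As a rough illustration, assuming your instrumentation emits those attribute names (most LLM auto-instrumentation libraries do this for you), a manually recorded LLM call would look something like this:

```python
from opentelemetry import trace

tracer = trace.get_tracer("temporal-ai-agent")

# Sketch of one LLM call carrying GenAI semantic-convention attributes.
# The model name and token counts are illustrative values.
with tracer.start_as_current_span("llm.chat") as span:
    span.set_attribute("gen_ai.request.model", "gpt-4o")   # feeds the model panels
    span.set_attribute("gen_ai.usage.input_tokens", 812)   # prompt tokens
    span.set_attribute("gen_ai.usage.output_tokens", 264)  # completion tokens
```

If your instrumentation emits different attribute names, adjust the panel queries in the imported dashboard to match.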

App

  • Services and Languages Using Temporal: Lists all services and their programming languages that are running AI agent workloads on Temporal.
  • Number of Requests Over Time: Shows the total volume of application requests handled by your Temporal agent workflows over time.
  • HTTP Request Duration (Over Time): Tracks the average duration of outbound HTTP requests over time, providing insight into external API performance and network latency affecting your agent workflows.
  • Total Error Rate: Tracks the percentage of failed operations in your Temporal agent workflows. Monitor this to identify reliability issues and ensure your AI applications maintain a smooth, dependable experience; the sketch after this list shows how such failures are typically recorded.
  • Error Records: Lists all recorded errors with timestamps. Click on any entry to view the full trace where the error originated, enabling quick root cause analysis.
  • Logs: Lists all logs related to your Temporal agent workflows. Use this for troubleshooting, auditing usage patterns, and correlating issues with specific request flows. Click any row to view the full log entry details.
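
The error and log panels above surface spans that carry error status and exception events. A minimal sketch of how a failure inside an agent step could end up there, assuming standard OpenTelemetry error recording:

```python
from opentelemetry import trace
from opentelemetry.trace import StatusCode

tracer = trace.get_tracer("temporal-ai-agent")

# Sketch: a failed external call recorded so it appears in the
# Total Error Rate and Error Records panels.
with tracer.start_as_current_span("fetch-knowledge-base") as span:
    try:
        raise TimeoutError("upstream API did not respond")  # stand-in for a real failure
    except TimeoutError as exc:
        span.record_exception(exc)                   # attaches an exception event to the span
        span.set_status(StatusCode.ERROR, str(exc))  # marks the span as errored
```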

Agents

  • Agent Distribution: This panel shows how activity is distributed across the agents used within Temporal, helping you understand which agents are most active and how adoption spreads across agent types in your implementation (see the sketch after this list).
  • Agents List: Lists all agents invoked within your Temporal workflows, showing each agent's name, total number of invocations, and average execution latency.
  • Agent Traces: This panel displays all traces generated by selected agents, with each entry linking directly to the full trace.
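
The agent panels are typically driven by a span attribute that names the agent. A small sketch, assuming the GenAI convention attribute gen_ai.agent.name; if your instrumentation uses a different attribute, the dashboard queries should reference that one instead.

```python
from opentelemetry import trace

tracer = trace.get_tracer("temporal-ai-agent")

# Sketch: tagging an agent run so it shows up in the Agent Distribution
# and Agents List panels. The attribute name is an assumption.
with tracer.start_as_current_span("agent.run") as span:
    span.set_attribute("gen_ai.agent.name", "research-agent")
```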

Temporal Cloud

  • Temporal Poll Success Rate: Tracks the rate of successful polls made by Temporal workers to retrieve tasks from task queues. A declining success rate indicates connection issues or queue congestion that may delay task execution.
  • Worker Poll Success Sync Rate: Monitors the rate of successful synchronous polls over time, indicating how effectively workers are retrieving and processing available tasks without delays.
  • Worker Poll Timeout Rate: Tracks the frequency of poll timeouts when workers fail to receive responses within the expected timeframe. High timeout rates may indicate network issues, server congestion, or misconfigurations that could impact task execution reliability.
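
Read together, these three panels describe a single ratio: successful polls (including sync successes) divided by all polls, timeouts included. A falling ratio alongside steady request volume is the cue to investigate the connection, congestion, or configuration issues noted above.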

