Monitor LLM Apps and Agents, Correlate with Logs & Metrics
Track AI workflows, RAG pipelines, and agents alongside microservices. Get unified alerting,
dashboards, and correlation across your entire stack.

Everything You Need to Monitor LLM Applications
Correlate LLM Traces with System Logs
Jump from a slow LLM trace to application logs to infrastructure metrics in one click. Understand if latency is from model inference, database queries, or network issues. No context switching between tools.
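One way this kind of jump works is by stamping every log line with the active trace ID, so a slow trace can be looked up in application logs and vice versa. A stdlib-only sketch (in a real OpenTelemetry deployment the ID would come from the current span context rather than `uuid`; the `rag` logger name is illustrative):

```python
import logging
import uuid

# Stand-in for the active span's trace ID. Real code would read it from the
# current OpenTelemetry span context; a random hex ID keeps the sketch
# self-contained.
trace_id = uuid.uuid4().hex

# Include trace_id in every formatted log line so logs and traces correlate.
logging.basicConfig(
    format="%(asctime)s trace_id=%(trace_id)s %(levelname)s %(message)s",
    level=logging.INFO,
)
log = logging.LoggerAdapter(logging.getLogger("rag"), {"trace_id": trace_id})

log.info("retrieval finished in 42 ms")  # carries the same ID as the span
```

With the ID on both signals, filtering logs by `trace_id` answers "what did the app log while this slow trace was running" without leaving the trace view.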
Get Notified Before Issues Impact Users
Set alerts on any metric or trace attribute: token limits, error rates, P99 latency, or custom thresholds. Build dashboards that combine LLM metrics with infrastructure health.
Trace Every Step from User Input to Final Response
Visualize complete agent workflows with distributed tracing. See every model call, tool invocation, and reasoning step in waterfall views. Quickly identify loops, bottlenecks, and failed tool calls.
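The nesting behind a waterfall view can be sketched with a toy span type (pure Python; `AgentSpan` is illustrative, not a SigNoz or OpenTelemetry API — real code would create OpenTelemetry spans, which SigNoz ingests):

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentSpan:
    """Toy span: records wall-clock start/end and nested child spans."""
    name: str
    start: float = 0.0
    end: float = 0.0
    children: list = field(default_factory=list)

    def child(self, name):
        span = AgentSpan(name)
        self.children.append(span)
        return span

    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, *exc):
        self.end = time.perf_counter()
        return False

    def waterfall(self, indent=0):
        """Render the span tree as indented lines with durations in ms."""
        ms = (self.end - self.start) * 1000
        lines = [f"{'  ' * indent}{self.name}: {ms:.1f} ms"]
        for c in self.children:
            lines += c.waterfall(indent + 1)
        return lines

# One agent run: a model call followed by a tool invocation.
with AgentSpan("agent.run") as root:
    with root.child("llm.call"):     # model inference step
        time.sleep(0.02)
    with root.child("tool.search"):  # tool invocation step
        time.sleep(0.01)

print("\n".join(root.waterfall()))
```

Because child durations sum inside the parent, gaps between them show up immediately, which is how loops and slow tool calls become visible in a waterfall.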
Control Your LLM Costs with Granular Token Tracking
Track input/output tokens by model, operation, and user. Get cost breakdowns, prompt efficiency scores, and budget alerts to optimize spending without sacrificing quality.
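A minimal sketch of per-model, per-user token accounting with a budget check (the `TokenLedger` class and the prices are illustrative assumptions, not a SigNoz API; real prices come from your provider):

```python
from collections import defaultdict

# Illustrative price table: USD per 1K (input, output) tokens. Made-up numbers.
PRICES = {"gpt-4": (0.03, 0.06), "claude-3": (0.015, 0.075)}

class TokenLedger:
    """Accumulates input/output tokens per (model, user) and converts to cost."""

    def __init__(self, budget_usd):
        self.budget = budget_usd
        self.usage = defaultdict(lambda: [0, 0])  # (model, user) -> [in, out]

    def record(self, model, user, tokens_in, tokens_out):
        row = self.usage[(model, user)]
        row[0] += tokens_in
        row[1] += tokens_out

    def cost(self):
        total = 0.0
        for (model, _user), (tin, tout) in self.usage.items():
            p_in, p_out = PRICES[model]
            total += tin / 1000 * p_in + tout / 1000 * p_out
        return total

    def over_budget(self):
        return self.cost() > self.budget

ledger = TokenLedger(budget_usd=1.00)
ledger.record("gpt-4", "alice", tokens_in=1000, tokens_out=500)
# 1000/1000*0.03 + 500/1000*0.06 = 0.03 + 0.03 = 0.06 USD
```

In practice these counts would be recorded as span attributes on each model call, so the same breakdowns drive dashboards and budget alerts.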
Monitor LLMs Alongside Your Entire Stack
Track Kubernetes pods running your agents, database queries in your RAG pipeline, and API gateway traffic to your LLM endpoints. One platform for complete system observability.
Start Monitoring in Minutes, Not Days
Instant setup for LangChain, LlamaIndex, CrewAI. Automatic instrumentation for OpenAI, Anthropic, Bedrock. Framework-specific dashboards included.
How SigNoz Compares to LLM-Only Tools
| Feature | SigNoz | Langfuse | LangSmith | Braintrust |
|---|---|---|---|---|
| LLM Tracing | Full traces with OpenTelemetry | OpenTelemetry-based | Async distributed tracing | Request-level tracing |
| Production Alerts | Any metric | No alerting | LLM metrics only | LLM metrics only |
| Prompt Management | Via integrations | Version control with caching | A/B testing built-in | Side-by-side comparison |
| Evaluation/Scoring | Via integrations | LLM-as-judge, custom evals | Built-in evaluators | Dataset/task/scorer framework |
| Infra Correlation | Metrics, logs, traces together | LLM-only | LLM-only | LLM-only |
| Application Correlation | Cross-service tracing | — | — | — |
| Kubernetes/Docker Monitoring | Native support | — | — | — |
| Database Query Tracking | Built-in | — | — | — |
| Dashboards | Advanced query builder | Limited presets | Limited presets | Basic charts |
Works with Your Favorite LLM Tools
Automatic instrumentation for every part of your LLM stack. From model
providers to vector databases to agent frameworks, get instant visibility
without writing custom telemetry code.
LLM Frameworks
Capture full agent execution and chain tracing with LangChain
Monitor query engines and indexing pipelines in LlamaIndex
Track multi-agent orchestration and delegation using CrewAI
Observe complete RAG pipeline performance with Haystack
Trace conversational agent interactions in AutoGen
Monitor real-time voice AI pipelines with Pipecat
Model Providers
Monitor OpenAI GPT-4, GPT-3.5, and embedding calls
Track requests to Anthropic Claude 3 and Claude 2
Monitor all Amazon Bedrock models including Claude, Llama, and Titan
Observe Google Vertex AI Gemini and PaLM inference
Route and monitor any model through the LiteLLM proxy
Vector Stores & Databases
Trace vector search operations and latency in Pinecone
Monitor hybrid search queries and filters with Weaviate
Observe vector similarity search performance using Qdrant
Tools & APIs
Track real-time communication infrastructure with LiveKit
Monitor voice AI application flows in Vapi
Observe workflow automation and LLM chains in n8n
Validate data structures and responses with Pydantic
Start Monitoring Your LLM Apps in Minutes
Get started in three steps:
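As a sketch of what those steps typically look like for a Python service (the endpoint, region, and ingestion key below are placeholders; check the SigNoz docs for your exact values):

```shell
# 1. Install the OpenTelemetry distro and OTLP exporter, then pull in
#    auto-instrumentation for the libraries your app already uses.
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install

# 2. Point the exporter at SigNoz (placeholder values).
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-llm-app"
export OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443"
export OTEL_EXPORTER_OTLP_HEADERS="signoz-access-token=<your-ingestion-key>"

# 3. Run your app with automatic instrumentation.
opentelemetry-instrument python app.py
```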

Simple usage-based pricing
Tired of Datadog's unpredictable bills or New Relic's user-based pricing?
We're here for you.
Developers Love SigNoz
Cloud
Fully managed, SOC 2-compliant, ideal for teams who want to start quickly without managing infrastructure.
Self-Host
For tighter security and data-residency requirements. Apache 2.0 open source, built on open standards.
10 million+
OSS Downloads
25k+
GitHub Stars
Charlie Shen
Lead DevOps Engineer, Brainfish
I've studied more than 10 observability tools in the market. We eventually landed on SigNoz, which says a lot. Compared to Elastic Cloud, it's a breeze with SigNoz.
Niranjan Ravichandra
Co-founder & CTO, Cedana
Getting started with SigNoz was incredibly easy. We were able to set up the OpenTelemetry collector quickly and start monitoring our systems almost immediately.
Poonkuyilan V
IT Infrastructure Lead, The Hindu
Recently, we configured alerts for pod restarts and were able to quickly identify and resolve the root cause before it escalated. Additionally, SigNoz's tracing capabilities helped us spot unwanted calls to third-party systems, allowing us to optimize our applications.
Avneesh Kumar
VP of Engineering, Mailmodo
We have started saving almost six hours on a daily basis, which we can now invest in other tech debts and backlogs. The best thing about SigNoz is that it's open source. I can go into the source code and look at what's happening. That's a great confidence booster for long-term usage.
Khushhal Reddy
Senior Backend Engineer, Kiwi
SigNoz is something we use daily. If I have ten tabs open, six of them are SigNoz. We used traces and it helped us take 30 seconds down to 3 seconds.