Monitor LLM Apps and Agents, Correlate with Logs & Metrics

Track AI workflows, RAG pipelines, and agents alongside microservices. Get unified alerting, dashboards, and correlation across your entire stack.


Everything You Need to Monitor LLM Applications

UNIFIED OBSERVABILITY PLATFORM

Correlate LLM Traces with System Logs

Jump from a slow LLM trace to application logs to infrastructure metrics in one click. Understand if latency is from model inference, database queries, or network issues. No context switching between tools.

POWERFUL ALERTS AND CUSTOM DASHBOARDS

Get Notified Before Issues Impact Users

Set alerts on any metric or trace attribute - token limits, error rates, P99 latency, or custom thresholds. Build dashboards that combine LLM metrics with infrastructure health.

END-TO-END REQUEST TRACING

Trace Every Step from User Input to Final Response

Visualize complete agent workflows with distributed tracing. See every model call, tool invocation, and reasoning step in waterfall views. Quickly identify loops, bottlenecks, and failed tool calls.

TOKEN USAGE & COST ANALYTICS

Control Your LLM Costs with Granular Token Tracking

Track input/output tokens by model, operation, and user. Get cost breakdowns, prompt efficiency scores, and budget alerts to optimize spending without sacrificing quality.
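The arithmetic behind a per-request cost breakdown is simple once input and output tokens are tracked separately. A minimal sketch (the model name and per-1K-token prices below are placeholders, not real rates):

```python
# Sketch: per-request cost estimate from token counts.
# Prices are placeholder values for illustration only.
PRICE_PER_1K = {
    "model-a": {"input": 0.0025, "output": 0.01},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars: tokens / 1000 * price per 1K tokens, per direction."""
    p = PRICE_PER_1K[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

# 1,200 prompt tokens + 400 completion tokens:
print(round(estimate_cost("model-a", 1200, 400), 4))  # → 0.007
```

Aggregating this per model, operation, and user is what turns raw token counts into the budget alerts and efficiency scores described above.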

PRODUCTION-READY INFRASTRUCTURE MONITORING

Monitor LLMs Alongside Your Entire Stack

Track Kubernetes pods running your agents, database queries in your RAG pipeline, and API gateway traffic to your LLM endpoints. One platform for complete system observability.

PRE-BUILT FRAMEWORK INTEGRATIONS

Start Monitoring in Minutes, Not Days

Instant setup for LangChain, LlamaIndex, and CrewAI. Automatic instrumentation for OpenAI, Anthropic, and Bedrock. Framework-specific dashboards included.

How SigNoz Compares to LLM-Only Tools

| Feature | SigNoz | Langfuse | LangSmith | Braintrust |
|---|---|---|---|---|
| LLM Tracing | Full traces with OpenTelemetry | OpenTelemetry-based | Async distributed tracing | Request-level tracing |
| Production Alerts | Any metric | No alerting | LLM metrics only | LLM metrics only |
| Prompt Management | Via integrations | Version control with caching | A/B testing built-in | Side-by-side comparison |
| Evaluation/Scoring | Via integrations | LLM-as-judge, custom evals | Built-in evaluators | Dataset/task/scorer framework |
| Infra Correlation | Metrics, logs, traces together | LLM-only | LLM-only | LLM-only |
| Application Correlation | Cross-service tracing | – | – | – |
| Kubernetes/Docker Monitoring | Native support | – | – | – |
| Database Query Tracking | Built-in | – | – | – |
| Dashboards | Advanced query builder | Limited presets | Limited presets | Basic charts |

Works with Your Favorite LLM Tools

Automatic instrumentation for every part of your LLM stack. From model providers to vector databases to agent frameworks, get instant visibility without writing custom telemetry code.

See All Integrations

LLM Frameworks

Capture full agent execution and chain tracing with LangChain

Monitor query engines and indexing pipelines in LlamaIndex

Track multi-agent orchestration and delegation using CrewAI

Observe complete RAG pipeline performance with Haystack

Trace conversational agent interactions in AutoGen

Monitor real-time voice AI pipelines with Pipecat

Model Providers

Monitor OpenAI GPT-4, GPT-3.5, and embedding calls

Track requests to Anthropic Claude 3 and Claude 2

Monitor all Amazon Bedrock models, including Claude, Llama, and Titan

Observe Google Vertex AI Gemini and PaLM inference

Vector Stores & Databases

Trace vector search operations and latency in Pinecone

Monitor hybrid search queries and filters with Weaviate

Route and monitor any model through LiteLLM proxy

Observe vector similarity search performance using Qdrant

Tools & APIs

Track real-time communication infrastructure with LiveKit

Monitor voice AI application flows in Vapi

Observe workflow automation and LLM chains in n8n

Validate data structures and responses with Pydantic

Start Monitoring Your LLM Apps in Minutes

Get started in three steps:

1. Sign up for a free SigNoz Cloud account
2. Install your framework's instrumentation package
3. Add two lines to initialize tracing

Your existing application code remains untouched while traces start flowing to SigNoz in real time, giving you instant visibility into every aspect of your LLM operations.
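In practice, the tracing initialization is a standard OpenTelemetry setup pointed at a SigNoz OTLP endpoint. A config sketch, assuming the OTLP exporter package is installed; the endpoint and ingestion-key values below are placeholders, so copy the exact values from your SigNoz ingestion settings:

```python
# Config sketch: initialize OpenTelemetry tracing exported to SigNoz Cloud.
# Endpoint region and key are placeholders - take them from your SigNoz account.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="ingest.<region>.signoz.cloud:443",          # placeholder region
            headers=(("signoz-ingestion-key", "<your-key>"),),    # placeholder key
        )
    )
)
trace.set_tracer_provider(provider)
```

The same values can also be supplied via the standard `OTEL_EXPORTER_OTLP_ENDPOINT` and `OTEL_EXPORTER_OTLP_HEADERS` environment variables, keeping credentials out of application code.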
Start Monitoring

Simple usage-based pricing

Pricing you can trust

Tired of Datadog's unpredictable bills or New Relic's user-based pricing? We're here for you.

Pricing per unit:

| Signal | Price per unit |
|---|---|
| Traces | $0.30/GB |
| Logs | $0.30/GB |
| Metrics | $0.10/million samples |

Monthly estimate: $49

Calculate your exact monthly bill: Check Pricing

Developers Love SigNoz

Cloud

Fully managed, SOC 2-compliant, ideal for teams who want to start quickly without managing infrastructure.

Self-Host

For tighter security & data residency requirements. It is Apache 2.0 open source, built on open standards.

10 million+

OSS Downloads

25k+

GitHub Stars

"Every single time we have an issue, SigNoz is always the first place to check. It was super straightforward to migrate - just updating the exporter configuration, basically three lines of code."

Karl Lyons
Senior SRE, Shaped
Charlie Shen

Lead DevOps Engineer, Brainfish

I've studied more than 10 observability tools in the market. We eventually landed on SigNoz, which says a lot. Compared to Elastic Cloud, it's a breeze with SigNoz.

Niranjan Ravichandra

Co-founder & CTO, Cedana

Getting started with SigNoz was incredibly easy. We were able to set up the OpenTelemetry collector quickly and start monitoring our systems almost immediately.

Poonkuyilan V

IT Infrastructure Lead, The Hindu

Recently, we configured alerts for pod restarts and were able to quickly identify and resolve the root cause before it escalated. Additionally, SigNoz's tracing capabilities helped us spot unwanted calls to third-party systems, allowing us to optimize our applications.

Avneesh Kumar

VP of Engineering, Mailmodo

We have started saving almost six hours on a daily basis, which we can now invest in other tech debts and backlogs. The best thing about SigNoz is that it's open source. I can go into the source code and look at what's happening. That's a great confidence booster for long-term usage.

Khushhal Reddy

Senior Backend Engineer, Kiwi

SigNoz is something we use daily. If I have ten tabs open, six of them are SigNoz. We used traces and it helped us take 30 seconds down to 3 seconds.