Real-world workflows you can run with the SigNoz MCP Server and any MCP-compatible AI assistant.
Each guide walks through a specific scenario: the prompt to try, what to expect, and what the MCP server does under the hood.
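Before running any of these workflows, your AI assistant needs the SigNoz MCP Server registered in its MCP configuration. The exact launch command, package name, and environment variable names below are illustrative assumptions, not the official values - check the SigNoz MCP Server docs for your assistant's actual setup. A typical MCP client config entry looks like:

```json
{
  "mcpServers": {
    "signoz": {
      "command": "npx",
      "args": ["-y", "signoz-mcp-server"],
      "env": {
        "SIGNOZ_API_KEY": "<your-api-key>",
        "SIGNOZ_HOST": "https://your-signoz-instance.example.com"
      }
    }
  }
}
```

Once the server is registered, the assistant discovers its tools automatically and the prompts in each guide work as written.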
Search, filter, and analyze logs by asking questions in plain English - no query syntax required.
Ask 'why is this slow?' and get a full span breakdown identifying the bottleneck service.
Paste a trace ID from a support ticket and reconstruct the full request path, down to the root cause.
Find where errors originate in the call chain when error rates spike on a service.
When multiple services alert simultaneously, identify whether it's a cascade from one failure or separate incidents.
Compare key metrics before and after a deployment to detect performance regressions or unexpected changes.
Generate a handoff summary of recent incidents and ongoing issues for the next on-call engineer.
Identify noisy, flapping, and stale alerts by analyzing which alerts correlate with actual service degradation and which don't.
Profile request paths via traces while building features, catching overhead before it reaches production.
Debug a failed request from your IDE and get the full trace with span breakdown and error logs without opening a browser.
Create custom dashboards by describing what you want to visualize in plain English.
Instantly generate focused dashboards for active incidents with relevant metrics and traces.
Quickly create production-ready alerts for newly deployed services using plain English.
After an incident is resolved, compile a timeline of alerts, log events, trace anomalies, and metric changes into a clean evidence summary.