Your on-call shift is ending. The next engineer is about to take over, and they need to know what happened during your shift.
Instead of manually writing a handoff doc, you can ask your AI assistant, connected to the SigNoz MCP server, to generate a summary from recent alert history.
## Prerequisites
- Connect your AI assistant to SigNoz using the MCP Server guide.
- Make sure you have alerts configured in SigNoz. See Alerts if you haven't set this up.
## Generate the Handoff Summary
Ask your AI assistant to pull the alert history and format it as a handoff brief:
```
Get the alert history for the last 48 hours. For each alert that fired, tell me which service was affected, when it fired, peak severity, and whether it's resolved or still open. Format as a handoff summary.
```
Your assistant will use the SigNoz MCP server to fetch the alert history and generate a structured handoff. Here's an example response:
All 3 alerts currently firing (severity: error):

- **Frontend Service**: 65 flaps in 48h. Peak error rate 100%. OPEN.
- **Payment Service**: 119 flaps in 48h (one every 24 min). Peak error rate 50%. OPEN.
- **Checkout Service**: 44 flaps in 48h. Peak error rate 100%. OPEN.

All three alerts have been flapping continuously: error rates dip below the threshold briefly, then spike back up. None have cleanly resolved. The high flap counts suggest the underlying issue isn't being fixed, just intermittently masked by low traffic.
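The flap counts in a summary like this come from the alert's state-transition history: each switch between firing and resolved counts as one flap. A minimal sketch of that calculation, where the `count_flaps` helper and the sample history are illustrative rather than part of the SigNoz API:

```python
from datetime import datetime

def count_flaps(transitions):
    """Count state changes in a time-ordered list of
    (timestamp, state) tuples, e.g. 'firing' <-> 'resolved'."""
    flaps = 0
    prev = None
    for _, state in transitions:
        if prev is not None and state != prev:
            flaps += 1
        prev = state
    return flaps

# Hypothetical slice of one alert's 48-hour history
history = [
    (datetime(2024, 1, 1, 0, 0), "firing"),
    (datetime(2024, 1, 1, 0, 20), "resolved"),
    (datetime(2024, 1, 1, 0, 45), "firing"),
    (datetime(2024, 1, 1, 2, 10), "resolved"),
    (datetime(2024, 1, 1, 2, 30), "firing"),
]

print(count_flaps(history))  # → 4
```

The same arithmetic yields the "one every 24 min" figure above: 119 flaps across 48 hours is 48 × 60 / 119 ≈ 24 minutes per flap.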
This use case works well with Claude Desktop, which can generate an interactive handoff brief visualization that's easy to scan and share with the incoming on-call engineer.

## Under the Hood
To generate this summary, your AI assistant called these MCP tools:

| Step | MCP Tool | What It Did |
|---|---|---|
| 1 | `signoz_list_alerts` | Retrieved all currently firing alerts |
| 2 | `signoz_get_alert_history` | Fetched the complete alert state-transition history for the 48-hour window to calculate flap counts and identify patterns |
| 3 | `signoz_get_alert` | Retrieved detailed information for each alert, including severity, conditions, and current status |
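The sequence above can be sketched as a small orchestration loop. The tool names match the table; the `call_tool` wrapper and the response fields (`id`, `state`, `service`, `severity`) are assumptions for illustration, since a real MCP client handles the request/response framing for you:

```python
def build_handoff(call_tool, window_hours=48):
    """Assemble a handoff summary by chaining the three MCP tool calls.

    `call_tool(name, args)` is a hypothetical helper that forwards a
    tool invocation to the SigNoz MCP server and returns parsed JSON.
    """
    summary = []
    # Step 1: list currently firing alerts
    alerts = call_tool("signoz_list_alerts", {"state": "firing"})
    for alert in alerts:
        # Step 2: full state-transition history for the window
        history = call_tool("signoz_get_alert_history",
                            {"alert_id": alert["id"], "hours": window_hours})
        # Step 3: detailed info: severity, conditions, current status
        detail = call_tool("signoz_get_alert", {"alert_id": alert["id"]})
        # A flap is any state change between consecutive transitions
        flaps = sum(1 for prev, cur in zip(history, history[1:])
                    if prev["state"] != cur["state"])
        summary.append({
            "service": detail["service"],
            "severity": detail["severity"],
            "flaps": flaps,
            "open": alert["state"] == "firing",
        })
    return summary
```

The assistant then renders this structured data as the prose handoff brief shown earlier.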
## Related Use Cases
- Alert Correlation Analysis - When multiple services alert simultaneously, identify whether it's a cascade from one failure or separate incidents.
- Error Rate Spike Explainer - Investigate where errors originate in the call chain when a single service starts failing.
- Latency Spike Explainer - Trace the bottleneck when latency spikes on one service.
- Natural Language Log Exploration - Search, filter, and analyze logs in SigNoz by asking questions in plain English.
- Reconstruct a Bug from a Trace ID - Deep-dive into a specific failing request by pasting its trace ID.
If you need help with the steps in this topic, please reach out to us on SigNoz Community Slack.
If you are a SigNoz Cloud user, please use the in-product chat support located at the bottom right corner of your SigNoz instance, or contact us at cloud-support@signoz.io.