This page is relevant for both SigNoz Cloud and self-hosted SigNoz editions.

Optimize Performance During Development

You're building a new recommendations feature for your e-commerce API. The functionality works, but when you test it in staging, the response feels slow. You could ship it and optimize later, or you could catch the performance problem now, before it reaches production.

You open your AI assistant, connected to SigNoz via the MCP server, and start investigating.

Prerequisites

Before you start, you need an AI assistant connected to SigNoz via the MCP server and a service instrumented to send traces to SigNoz.

Step 1: Get the Span Breakdown

Show me the span breakdown for /api/recommendations in staging. Which spans take the most time? Is there unnecessary serialization or duplicate DB calls?

The assistant searches for recent traces from the /api/recommendations endpoint and pulls the detailed span tree.

Step 2: Analyze the Response

The assistant returns a detailed breakdown:

Span breakdown for /api/recommendations in staging

Average end-to-end latency: ~2.08 seconds. This is the same across samples — the endpoint is consistently slow, not spiky.

What the trace looks like:
The handler (recommendations.handler) runs these steps sequentially for a request with 30 recommended products:

1. db.fetch_recommendation_ids — one-time lookup, ~0.76 ms. Returns the list of product IDs for the user.
2. For every product ID, in a loop (30 iterations):
   - db.fetch_product_by_id — SELECT * FROM products WHERE id=?
   - db.fetch_user_preference_match — SELECT preferences FROM users WHERE id=?
   - serialization.deserialize_product_metadata
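From the span names alone, the handler presumably looks something like the sketch below. This is a hypothetical reconstruction using an in-memory SQLite database; the table, column, and function names are illustrative stand-ins, not the real service's code.

```python
import sqlite3

# Stand-in data store; the real service's schema is unknown.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, metadata TEXT)")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, preferences TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)",
                 [(i, f"product-{i}", "{}") for i in range(30)])
conn.execute("INSERT INTO users VALUES (1, '{}')")

def fetch_recommendation_ids(user_id):
    # Stands in for db.fetch_recommendation_ids: one cheap lookup.
    return list(range(30))

def recommendations_handler(user_id):
    results = []
    for pid in fetch_recommendation_ids(user_id):  # 30 iterations, all serial
        product = conn.execute(
            "SELECT * FROM products WHERE id=?", (pid,)).fetchone()  # N+1 pattern
        prefs = conn.execute(
            "SELECT preferences FROM users WHERE id=?",
            (user_id,)).fetchone()  # same user row fetched every iteration
        results.append((product, prefs))  # per-row deserialization happens here
    return results

rows = recommendations_handler(1)
# One request issues 1 + 30 + 30 = 61 sequential database round-trips.
```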

Top time consumers (averages across all staging requests, last 6h):

Span                                       | Avg duration | Calls per request | Total per request
-------------------------------------------|--------------|-------------------|------------------
db.fetch_product_by_id                     | ~44 ms       | 30                | ~1,320 ms (~64%)
db.fetch_user_preference_match             | ~18 ms       | 30                | ~550 ms (~27%)
serialization.deserialize_product_metadata | ~6 ms        | 30                | ~185 ms (~9%)
db.fetch_recommendation_ids                | ~0.9 ms      | 1                 | ~1 ms

The DB calls alone account for roughly 90% of the total request time (~1.87s out of 2.08s), all of it serial.

Yes — there are clear problems:

1. Classic N+1 on products. db.fetch_product_by_id runs 30 times with SELECT * FROM products WHERE id=?.
   The recommendation IDs are already known after the first query. Replace the loop with a single
   SELECT * FROM products WHERE id IN (...) (or a JOIN against the recommendations query).
   Expected savings: ~1.28s → ~50ms.

2. Duplicate DB call — db.fetch_user_preference_match is called 30 times with the same user_id.
   The SQL is SELECT preferences FROM users WHERE id=? and the user doesn't change within a single request.
   This is fetched once per product iteration instead of once per request. Hoist it out of the loop
   (or cache it in a request-scoped variable).
   Expected savings: ~530ms (~25% of total latency gone immediately).

3. Redundant serialization. serialization.deserialize_product_metadata fires 30 times at ~6 ms each.
   This is almost certainly re-deserializing the same blob repeatedly, or deserializing per-row when
   it could be done in bulk. Worth looking at whether it can be lifted out or memoized — ~185 ms to be recovered.

4. SELECT * on products. Pulling every column when the response likely only needs a subset wastes
   bandwidth and deserialization time — contributes to why fetch_product_by_id is ~44 ms per row
   rather than a few ms.
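Putting fixes 1 through 4 together, the refactored handler could look something like this. It reuses the same illustrative SQLite/JSON stand-ins as before; the real service's schema and serialization format are assumptions.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, metadata TEXT)")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, preferences TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)",
                 [(i, f"product-{i}", "{}") for i in range(30)])
conn.execute("INSERT INTO users VALUES (1, '{}')")

def recommendations_handler(user_id):
    ids = list(range(30))  # from db.fetch_recommendation_ids
    # Fix 2: fetch user preferences once per request, not once per product.
    prefs = conn.execute(
        "SELECT preferences FROM users WHERE id=?", (user_id,)).fetchone()
    # Fixes 1 and 4: one batched query with an explicit column list
    # instead of 30 SELECT * round-trips.
    placeholders = ",".join("?" * len(ids))
    rows = conn.execute(
        f"SELECT id, name, metadata FROM products WHERE id IN ({placeholders})",
        ids).fetchall()
    # Fix 3: deserialize metadata in one pass instead of per-iteration.
    products = [(pid, name, json.loads(meta)) for pid, name, meta in rows]
    return products, prefs

products, prefs = recommendations_handler(1)
# 2 database round-trips per request instead of 61.
```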

Rough optimization projection:
Batch product fetch + hoist user preference lookup + bulk deserialize → endpoint drops from ~2.08s to
~100-200ms, conservatively a 10x improvement.

The single highest-ROI fix is hoisting fetch_user_preference_match out of the loop — that's a one-line
change for ~25% latency reduction.

You now have a clear map of where the time is going and exactly what to fix. You caught the N+1 query, the duplicate user lookup, and the redundant serialization all before shipping to production.

Final Summary

Starting from a slow-feeling endpoint in staging, the trace analysis revealed:

  1. N+1 query on products: 30 individual database calls instead of one batched query, accounting for 64% of total request time.
  2. Duplicate user preference lookup: The same query running 30 times per request when it should run once, wasting 27% of request time.
  3. Redundant serialization: Deserializing product metadata 30 times when it could be done in bulk, adding 9% overhead.
  4. Inefficient column selection: Using SELECT * when only a subset of columns is needed, inflating per-row processing time.

The trace breakdown showed that a simple refactor — batching the product fetch, hoisting the user lookup, and optimizing serialization — could reduce latency significantly. You found all of this while building the feature, not after users complained.

Tips for Your Own Investigations

  • Profile while building, not after shipping. Traces in staging or local environments reveal inefficiencies before users feel them.
  • Check serialization and deserialization. Spans labeled deserialize, marshal, unmarshal, or parse that run repeatedly often point to unnecessary data processing.
  • Compare expected vs actual span counts. If you expect one database query but see 30, or expect one API call but see 10, investigate why the pattern doesn't match your mental model.
  • Ask for optimization suggestions. The AI can estimate potential savings and prioritize fixes based on the trace breakdown.
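The "expected vs actual span counts" check is easy to script against span data once you have it. Below is a hypothetical sketch using hard-coded spans shaped like the staging trace above; in practice the span list would come from your trace tooling.

```python
from collections import Counter

# Spans as (name, duration_ms) pairs, mimicking the trace from Step 2.
spans = (
    [("db.fetch_recommendation_ids", 0.9)]
    + [("db.fetch_product_by_id", 44.0)] * 30
    + [("db.fetch_user_preference_match", 18.0)] * 30
    + [("serialization.deserialize_product_metadata", 6.0)] * 30
)

# Your mental model of how often each operation should run per request.
expected = {
    "db.fetch_recommendation_ids": 1,
    "db.fetch_product_by_id": 1,  # one batched query
    "db.fetch_user_preference_match": 1,
    "serialization.deserialize_product_metadata": 1,
}

actual = Counter(name for name, _ in spans)
mismatches = {name: n for name, n in actual.items()
              if n > expected.get(name, 1)}
for name, n in sorted(mismatches.items()):
    print(f"{name}: expected {expected[name]}, saw {n}")
```

A report like this makes the N+1 pattern jump out immediately: three operations each run 30 times when the mental model says once.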

Under the Hood

During this investigation, the MCP server called these tools:

Step | MCP Tool                  | What It Did
-----|---------------------------|------------
1    | signoz_search_traces      | Found recent traces for /api/recommendations in the staging environment
1    | signoz_get_trace_details  | Retrieved the full span tree for a representative trace, showing the nested structure and timing of each operation
1    | signoz_aggregate_traces   | Computed average span durations across multiple requests to confirm the pattern is consistent, not an anomaly

If you need help with the steps in this topic, please reach out to us on SigNoz Community Slack.

If you are a SigNoz Cloud user, please use in product chat support located at the bottom right corner of your SigNoz instance or contact us at cloud-support@signoz.io.

Last updated: April 21, 2026
