This guide shows you how to send traces from a Google Cloud Function to SigNoz using OpenTelemetry. The default path uses zero-code auto-instrumentation through environment variables. If you need deeper control, use the optional code-based setup section.
Prerequisites
- A Google Cloud project with billing enabled.
- Cloud Functions API enabled in your project.
- A 2nd gen Cloud Function using Node.js 20.
- A SigNoz Cloud account or self-hosted SigNoz instance.
Send traces to SigNoz
Step 1. Create an HTTP Cloud Function
In Google Cloud Console:
- Open Cloud Functions and click Create function.
- Select 2nd gen.
- Set Runtime to Node.js 20.
- Set Trigger to HTTP.
You will add runtime environment variables and code in the next steps.
Step 2. Configure OpenTelemetry environment variables
In Runtime, build, connections and security settings -> Runtime environment variables, add:
OTEL_TRACES_EXPORTER=otlp
OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://ingest.<region>.signoz.cloud/v1/traces
OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=<your-ingestion-key>
OTEL_SERVICE_NAME=<your-service-name>
OTEL_PROPAGATORS=tracecontext
OTEL_RESOURCE_ATTRIBUTES=deployment.environment.name=production
Verify these values:
- `<region>`: Your SigNoz Cloud region.
- `<your-ingestion-key>`: Your SigNoz ingestion key.
- `<your-service-name>`: Name that will appear in SigNoz Services (for example, `gcp-fn-orders`).
- `OTEL_RESOURCE_ATTRIBUTES`: Comma-separated resource attributes. Keep `deployment.environment.name=production` or set your environment value.
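To make the `OTEL_RESOURCE_ATTRIBUTES` format concrete, the sketch below shows how a comma-separated attribute string breaks down into key/value pairs. This is plain Node.js with no OpenTelemetry packages; `parseResourceAttributes` is a hypothetical helper for illustration only, since the SDK parses this variable for you.

```javascript
// Illustrative only: how a comma-separated OTEL_RESOURCE_ATTRIBUTES
// string maps to resource attribute key/value pairs.
function parseResourceAttributes(raw) {
  const attributes = {};
  for (const pair of raw.split(',')) {
    const idx = pair.indexOf('=');
    if (idx > 0) {
      // Everything before the first '=' is the key; the rest is the value.
      attributes[pair.slice(0, idx).trim()] = pair.slice(idx + 1).trim();
    }
  }
  return attributes;
}

const parsed = parseResourceAttributes(
  'deployment.environment.name=production,team=checkout'
);
console.log(parsed['deployment.environment.name']); // prints "production"
```

Each pair becomes one resource attribute on every span your function emits, which is how SigNoz can later filter traces by environment.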
Step 3. Install the required packages
Run this in your Cloud Function project directory:
npm install --save \
@google-cloud/functions-framework \
@opentelemetry/api \
@opentelemetry/auto-instrumentations-node \
@opentelemetry/exporter-trace-otlp-http \
@opentelemetry/sdk-node
This creates or updates package.json and package-lock.json with the dependencies used by the examples in this guide.
Step 4. Update index.js
Add the `register` import at the top of your entry file. Your handler code stays unchanged; no manual tracing calls are needed.
'use strict';
require('@opentelemetry/auto-instrumentations-node/register');
const functions = require('@google-cloud/functions-framework');
functions.http('entryPoint', (req, res) => {
res.status(200).json({
ok: true,
message: req.query.message || 'Hello from Cloud Functions',
});
});
What this does:
- The `register` import loads OpenTelemetry auto-instrumentation before your function handler code runs.
- Common incoming/outgoing HTTP operations are traced automatically.
Step 5. Deploy and invoke the function
- Deploy the function.
- Copy the HTTPS trigger URL after deployment.
- Invoke it once:
curl "https://<your-cloud-function-url>?message=hello-signoz"
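If you prefer deploying from the CLI instead of the console, a sketch of the deploy command follows. The function name, region, and entry point here are placeholders to replace with your own; env vars can also be set in the console as in Step 2, and flags may vary with your `gcloud` version.

```sh
# Sketch only: replace the name, region, and env var values with your own.
gcloud functions deploy my-otel-fn \
  --gen2 \
  --runtime=nodejs20 \
  --region=us-central1 \
  --entry-point=entryPoint \
  --trigger-http \
  --allow-unauthenticated \
  --set-env-vars=OTEL_TRACES_EXPORTER=otlp,OTEL_EXPORTER_OTLP_PROTOCOL=http/json
```

`--entry-point` must match the name passed to `functions.http()` in `index.js` (`entryPoint` in this guide).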
Validate
After invoking the function, verify traces in SigNoz:
- Open SigNoz and go to Services.
- Find the service name you set in `OTEL_SERVICE_NAME`.
- Open Traces and filter by `service.name=<your-service-name>`.
- Open a trace and confirm request spans from your function are present.
Code-Based Setup (Optional)
Step 1. Install OpenTelemetry packages
npm install --save \
@google-cloud/functions-framework \
@google-cloud/opentelemetry-cloud-trace-propagator \
@opentelemetry/core \
@opentelemetry/exporter-trace-otlp-http \
@opentelemetry/resource-detector-gcp \
@opentelemetry/resources \
@opentelemetry/sdk-node \
@opentelemetry/semantic-conventions
Step 2. Create tracing.js
Create a new file named tracing.js in your function source.
'use strict';
const { NodeSDK } = require('@opentelemetry/sdk-node');
const {
CompositePropagator,
W3CBaggagePropagator,
W3CTraceContextPropagator,
} = require('@opentelemetry/core');
const { CloudPropagator } = require('@google-cloud/opentelemetry-cloud-trace-propagator');
const { gcpDetector } = require('@opentelemetry/resource-detector-gcp');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { resourceFromAttributes } = require('@opentelemetry/resources');
const {
ATTR_SERVICE_NAME,
ATTR_SERVICE_NAMESPACE,
ATTR_SERVICE_VERSION,
} = require('@opentelemetry/semantic-conventions');
const { ATTR_DEPLOYMENT_ENVIRONMENT_NAME } = require('@opentelemetry/semantic-conventions/incubating'); // incubating: may change in future versions
const functionId = process.env.FUNCTION_ID || 'A';
const deploymentEnvironment =
process.env.OTEL_DEPLOYMENT_ENVIRONMENT || process.env.DEPLOYMENT_ENVIRONMENT || 'local';
const resourceAttributes = {
[ATTR_SERVICE_NAME]: process.env.OTEL_SERVICE_NAME || `function-${functionId.toLowerCase()}`,
[ATTR_DEPLOYMENT_ENVIRONMENT_NAME]: deploymentEnvironment,
};
if (process.env.OTEL_SERVICE_NAMESPACE) {
resourceAttributes[ATTR_SERVICE_NAMESPACE] = process.env.OTEL_SERVICE_NAMESPACE;
}
if (process.env.OTEL_SERVICE_VERSION) {
resourceAttributes[ATTR_SERVICE_VERSION] = process.env.OTEL_SERVICE_VERSION;
}
const textMapPropagator = new CompositePropagator({
propagators: [
new W3CTraceContextPropagator(),
new CloudPropagator(),
new W3CBaggagePropagator(),
],
});
const sdk = new NodeSDK({
resource: resourceFromAttributes(resourceAttributes),
autoDetectResources: true,
resourceDetectors: [gcpDetector],
traceExporter: new OTLPTraceExporter(),
textMapPropagator,
});
(async () => {
try {
await sdk.start();
} catch (err) {
console.error('Error starting OTel SDK', err);
}
})();
async function shutdownAndExit(exitCode) {
try {
await sdk.shutdown();
process.exit(exitCode);
} catch (err) {
console.error('Error shutting down OTel SDK', err);
process.exit(1);
}
}
process.on('SIGTERM', () => shutdownAndExit(0));
process.on('SIGINT', () => shutdownAndExit(0));
module.exports = { sdk };
Step 3. Load tracing.js before your function handler
Import tracing.js first, then keep your function logic as normal:
'use strict';
require('./tracing');
const functions = require('@google-cloud/functions-framework');
functions.http('entryPoint', (req, res) => {
res.status(200).json({
ok: true,
message: req.query.message || 'Hello from Cloud Functions',
});
});
Set Up the OpenTelemetry Collector (Optional)
What is the OpenTelemetry Collector?
Think of the OTel Collector as a middleman between your app and SigNoz. Instead of your application sending data directly to SigNoz, it sends everything to the Collector first, which then forwards it along.
Why use it?
- Cleaning up data — Filter out noisy traces you don't care about, or remove sensitive info before it leaves your servers.
- Keeping your app lightweight — Let the Collector handle batching, retries, and compression instead of your application code.
- Adding context automatically — The Collector can tag your data with useful info like which Kubernetes pod or cloud region it came from.
- Future flexibility — Want to send data to multiple backends later? The Collector makes that easy without changing your app.
See Switch from direct export to Collector for step-by-step instructions to convert your setup.
For more details, see Why use the OpenTelemetry Collector? and the Collector configuration guide.
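If you route traces through a Collector, a minimal configuration sketch might look like the following. The endpoint and key are the same placeholders used earlier; the receiver port and exporter names follow common OpenTelemetry Collector conventions and may differ in your Collector version, so treat this as a starting point rather than a drop-in config.

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch: {}

exporters:
  otlp:
    endpoint: ingest.<region>.signoz.cloud:443
    headers:
      signoz-ingestion-key: <your-ingestion-key>

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```

With this setup, your function's `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` would point at the Collector instead of SigNoz, and the Collector handles batching and forwarding.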
Troubleshooting
Why don't traces appear in SigNoz?
- Check that `require('@opentelemetry/auto-instrumentations-node/register')` is the first import in your `index.js` for the zero-code setup.
- Check that `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` includes `/v1/traces`.
- Check that `OTEL_EXPORTER_OTLP_PROTOCOL` is `http/json`.
- Check that `OTEL_EXPORTER_OTLP_HEADERS` is exactly `signoz-ingestion-key=<your-ingestion-key>`.
- Invoke the function a few times; cold starts can delay the first export.
Why do I see Cannot find module '@opentelemetry/auto-instrumentations-node/register'?
The auto-instrumentation package is missing from your deployment artifact. Re-run the package install step, redeploy, and invoke again.
Why not use NODE_OPTIONS in this guide?
Cloud Functions can reject reserved runtime environment variable names. This guide uses an explicit register import to avoid runtime env var conflicts.
Why do I see 401/403 from the SigNoz endpoint?
Your ingestion key is incorrect or missing. Recopy it from the SigNoz ingestion key page and redeploy.
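One way to check the key independently of your function is to POST an empty trace payload straight to the ingestion endpoint (a sketch; substitute your region and key). A 401/403 response here confirms the key is the problem, while a 2xx means the key is accepted.

```sh
# Sketch: replace <region> and <your-ingestion-key> with your values.
curl -i -X POST \
  -H "signoz-ingestion-key: <your-ingestion-key>" \
  -H "Content-Type: application/json" \
  -d '{"resourceSpans":[]}' \
  "https://ingest.<region>.signoz.cloud/v1/traces"
```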
Why is my service missing from the Services list?
Set OTEL_SERVICE_NAME to a stable value, redeploy, and invoke again.
Next steps
- Correlate traces with logs to speed up debugging.
- Create trace-based alerts for latency and error spikes.
- Build dashboards for latency and error trends.