Overview
This guide walks you through migrating logs from New Relic to SigNoz. You will:
- Check your current log sources in New Relic
- Set up log collection with OpenTelemetry
- Configure log parsing to match your formats
- Validate logs are flowing correctly to SigNoz
Key Differences: New Relic vs SigNoz
| Aspect | New Relic | SigNoz |
|---|---|---|
| Collection | APM agents, Infrastructure agent, Fluentd, Log API | OpenTelemetry Collector receivers (OTLP, filelog, fluentforward for Fluentd/Fluent Bit) |
| Query Language | NRQL, Lucene-like | Query Builder, ClickHouse SQL |
| Storage | NRDB (proprietary) | ClickHouse (open source) |
| Log Processing | Parsing rules, drop filters | Log pipelines, OTel processors |
SigNoz uses the OpenTelemetry Collector for log collection, which provides flexible parsing and routing capabilities.
Prerequisites
Before starting, ensure you have:
- A SigNoz account (Cloud) or a running SigNoz instance (Self-Hosted)
- Access to your New Relic account to review current log sources
- Access to your log sources (servers, containers, applications)
Step 1: Assess Your Current Log Sources
Before migrating, inventory what's sending logs to New Relic.
List Your Log Sources
Run this NRQL query in New Relic's Query Builder:
SELECT uniques(entity.name), uniques(hostname), uniques(filePath)
FROM Log
SINCE 7 days ago
LIMIT MAX
For a breakdown by source type:
SELECT count(*)
FROM Log
SINCE 7 days ago
FACET instrumentation.provider, entity.name
LIMIT MAX
Categorize Your Log Sources
Group logs by how they're being collected:
| Source Type | How to Identify | Migration Path |
|---|---|---|
| Kubernetes | k8s.cluster.name, k8s.pod.name attributes | Kubernetes log collection |
| Docker | container.name, container.id attributes | Docker log collection |
| APM agent logs | instrumentation.provider contains agent name | Application logs via OTel SDK |
| Fluentd/Fluent Bit | You have Fluentd/Fluent Bit configured | Redirect to OTel Collector |
| Log API | Direct API calls in your code | Use OTLP exporter |
| Cloud Provider Logs | cloud.provider attribute | Cloud provider log collection |
| Infrastructure agent | hostname populated, file paths like /var/log/* | File logs via filelog receiver |
Save this inventory. You'll use it to validate your migration is complete.
Step 2: Set Up the OpenTelemetry Collector
Install the OpenTelemetry Collector in your environment if you haven't already.
Configure the OTLP exporter for logs:
exporters:
  otlp:
    endpoint: ingest.<region>.signoz.cloud:443
    headers:
      signoz-ingestion-key: <SIGNOZ_INGESTION_KEY>
    tls:
      insecure: false

service:
  pipelines:
    logs:
      exporters: [otlp]
Replace:
- `<region>`: Your SigNoz Cloud region (`us`, `eu`, or `in`)
- `<SIGNOZ_INGESTION_KEY>`: Your ingestion key from Settings → Ingestion Settings
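If you're running self-hosted SigNoz instead of Cloud, point the exporter at your own SigNoz OTel Collector rather than the cloud ingest endpoint. A minimal sketch, assuming the default gRPC port 4317 and plain (non-TLS) traffic inside a trusted network; the hostname is illustrative, so adjust it and the TLS settings to your deployment. The pipeline wiring stays the same.

exporters:
  otlp:
    # illustrative hostname; use the address of your SigNoz OTel Collector service
    endpoint: <signoz-otel-collector>:4317
    tls:
      insecure: true  # assumes no TLS inside your network; enable TLS if you terminate it here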
Step 3: Migrate Each Log Source
Work through each source type from your inventory.
From Kubernetes
For Kubernetes environments, the easiest approach is the SigNoz K8s Infra Helm chart, which automatically collects logs from every pod, including your application containers. If your applications write to stdout/stderr (standard practice in Kubernetes), their logs appear in SigNoz as soon as the chart is deployed; no separate application log configuration or additional instrumentation is needed.
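For reference, a minimal sketch of deploying the chart. The repository URL and value names follow the SigNoz Helm charts as documented; treat the exact keys as assumptions and verify them against the chart's values.yaml for your version.

# override-values.yaml for the k8s-infra chart
# Deploy with:
#   helm repo add signoz https://charts.signoz.io
#   helm install k8s-infra signoz/k8s-infra -f override-values.yaml
global:
  cloud: others
  clusterName: <cluster-name>
  deploymentEnvironment: <environment>
otelCollectorEndpoint: ingest.<region>.signoz.cloud:443
otelInsecure: false
signozApiKey: <SIGNOZ_INGESTION_KEY>
presets:
  logsCollection:
    enabled: true   # pod log collection is typically on by default; shown here for clarity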
Manual setup (if not using the Helm chart)
If you prefer manual setup, deploy the OpenTelemetry Collector as a DaemonSet and configure the filelog receiver for container logs.
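As a rough starting point, the receiver side of such a DaemonSet might look like the sketch below. It assumes the standard kubelet log layout under /var/log/pods and uses the `container` operator (available in recent collector-contrib releases) to parse CRI/Docker log lines and attach pod metadata from the file path; adjust the paths and the exclude pattern to your environment.

receivers:
  filelog:
    include:
      - /var/log/pods/*/*/*.log
    exclude:
      # avoid re-ingesting the collector's own logs (pod name pattern is an assumption)
      - /var/log/pods/*otel-collector*/*/*.log
    start_at: end
    include_file_path: true
    operators:
      - type: container   # parses containerd/CRI-O/Docker formats and extracts k8s.* attributes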
From Docker Containers
For standalone Docker environments, you can collect logs from all containers at once—including your application logs—using the OpenTelemetry Collector.
By mounting Docker's log directory, you automatically collect logs from every container, including application containers. If your apps write to stdout/stderr (Docker's default logging), no additional application-level log configuration is needed.
See Collect Docker Container Logs for complete setup instructions including:
- Running the OTel Collector as a Docker container
- Configuring the `filelog` receiver for `/var/lib/docker/containers/*/*.log`
- Parsing container metadata
- Filtering specific containers
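As an illustration, mounting /var/lib/docker/containers into the Collector container and pointing the filelog receiver at it could look like the sketch below. Docker's default json-file driver writes one JSON object per line, so a `json_parser` operator recovers the original message (the `log` field); the `stream` and `time` fields become attributes you can parse further if needed.

receivers:
  filelog/docker:
    include:
      - /var/lib/docker/containers/*/*.log
    start_at: end
    include_file_path: true
    operators:
      - type: json_parser     # each line is {"log": "...", "stream": "stdout", "time": "..."}
        parse_from: body
      - type: move            # promote the original message back into the log body
        from: attributes.log
        to: body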
From New Relic APM Agent Log Forwarding
If logs were forwarded via New Relic APM agents, you have several options depending on your language.
Java: Use the OpenTelemetry Java Agent
The OTel Java agent can collect logs automatically from Logback and Log4j without code changes:
OTEL_LOGS_EXPORTER=otlp \
OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443" \
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<SIGNOZ_INGESTION_KEY>" \
OTEL_RESOURCE_ATTRIBUTES=service.name=<service_name> \
java -javaagent:/path/opentelemetry-javaagent.jar -jar <myapp>.jar
See Collecting Application Logs Using OTEL Java Agent for Logback/Log4j configuration options.
Python: Enable logs auto-instrumentation
Python logs can be collected automatically without SDK code changes—just add an environment variable:
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.<region>.signoz.cloud:443 \
OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=<SIGNOZ_INGESTION_KEY> \
opentelemetry-instrument --logs_exporter otlp python app.py
See Python Logs Auto-Instrumentation for details.
Node.js: Use Pino or Winston
For Node.js applications, use OpenTelemetry-compatible logging libraries:
- Pino: Install `pino-opentelemetry-transport` and configure the transport. See Node.js Pino Logs.
- Winston: Install `@opentelemetry/winston-transport` and add the transport. See Node.js Winston Logs.
Other languages and methods
For all available log collection methods, see the Send Logs to SigNoz central guide which covers:
- File-based collection
- Syslog
- HTTP/OTLP direct send
- Language-specific integrations
From Fluentd/FluentBit
If you're using Fluentd or FluentBit, redirect output to the OpenTelemetry Collector.
FluentBit configuration:
[OUTPUT]
    Name  forward
    Match *
    Host  ${OTEL_COLLECTOR_HOST}
    Port  8006
OTel Collector configuration to receive:
receivers:
  fluentforward:
    endpoint: 0.0.0.0:8006

service:
  pipelines:
    logs:
      receivers: [fluentforward]
      exporters: [otlp]
See FluentBit to SigNoz for detailed setup.
From Log API
If you're sending logs directly to New Relic's Log API, you can send them to SigNoz over HTTP instead.
Before (New Relic Log API):
curl -X POST https://log-api.newrelic.com/log/v1 \
-H "Content-Type: application/json" \
-H "Api-Key: <NEW_RELIC_LICENSE_KEY>" \
-d '{"message": "Log message", "level": "info"}'
After (HTTP to SigNoz):
curl --location 'https://ingest.<REGION>.signoz.cloud:443/logs/json' \
--header 'Content-Type: application/json' \
--header 'signoz-ingestion-key: <SIGNOZ_INGESTION_KEY>' \
--data '[
  {
    "trace_id": "000000000000000018c51935df0b93b9",
    "span_id": "18c51935df0b93b9",
    "trace_flags": 0,
    "severity_text": "info",
    "severity_number": 4,
    "attributes": {
      "method": "GET",
      "path": "/api/users"
    },
    "resources": {
      "host": "myhost",
      "namespace": "prod"
    },
    "message": "This is a log line"
  }
]'
See Send Logs to SigNoz for detailed setup.
From Cloud Provider Logs
If you were using New Relic's cloud integrations for AWS, Azure, or GCP logs, SigNoz provides native integrations:
| Provider | Log Sources | Setup Guide |
|---|---|---|
| AWS | CloudWatch Logs, S3, Lambda | AWS Cloud Integrations |
| Azure | Azure Monitor, Event Hubs | Azure Monitoring |
| GCP | Cloud Logging, Pub/Sub | GCP Monitoring |
From Infrastructure Agent (File Logs)
Replace New Relic Infrastructure agent file log collection with the filelog receiver.
Step 1: Add the filelog receiver
receivers:
  filelog:
    include:
      - /var/log/myapp/*.log
      - /var/log/syslog
    start_at: end
    include_file_path: true
    include_file_name: true
Step 2: Enable in the logs pipeline
service:
  pipelines:
    logs:
      receivers: [otlp, filelog]
      processors: [batch]
      exporters: [otlp]
See Collect Logs from File for more information.
Finding More Log Sources
For a complete list of all supported log collection methods, see Send Logs to SigNoz - this central guide covers:
- Application logs (by language)
- Infrastructure logs (syslog, journald)
- Cloud provider logs
- Container and Kubernetes logs
- Third-party integrations
Step 4: Log Parsing
SigNoz provides two approaches for parsing logs:
Option 1: Log Pipelines (Recommended)
SigNoz includes a UI-based Log Pipelines feature that lets you parse and transform logs without modifying collector configuration. This is the recommended approach because:
- No collector restarts required
- Visual interface for building parsers
- Test patterns against sample logs before applying
- Easy to modify and maintain
Log Pipelines support multiple processors:
| Processor | Use Case |
|---|---|
| Regex | Extract fields using regular expressions |
| Grok | Use predefined patterns (like %{IP:client_ip}) for common formats |
| JSON Parser | Parse JSON log bodies into attributes |
| Timestamp Parser | Extract and normalize timestamps |
| Severity Parser | Map log levels to standard severity |
| Add/Remove/Move/Copy | Transform log attributes |
See Log Pipelines Processors for detailed documentation on each processor.
Option 2: Collector-Level Parsing
For high-volume parsing or when you need to reduce data before it reaches SigNoz, configure parsing in the OpenTelemetry Collector using operators in the filelog receiver.
Example for Apache/Nginx access logs:
operators:
  - type: regex_parser
    regex: '^(?P<remote_addr>[^ ]*) - (?P<remote_user>[^ ]*) \[(?P<time_local>[^\]]*)\] "(?P<request>[^"]*)" (?P<status>[^ ]*) (?P<body_bytes_sent>[^ ]*)'
    timestamp:
      parse_from: attributes.time_local
      layout: '%d/%b/%Y:%H:%M:%S %z'
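If your application already emits JSON logs, the collector-level equivalent is usually a `json_parser` followed by a `severity_parser`. A minimal sketch; the field names `level` and `msg` are assumptions, so match them to your actual log schema.

operators:
  - type: json_parser         # parse the JSON body into attributes
    parse_from: body
  - type: severity_parser     # map the level field onto OTel severity
    parse_from: attributes.level
  - type: move                # keep the human-readable message as the log body
    from: attributes.msg
    to: body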
For collector-level parsing patterns, see Parsing Logs with the OpenTelemetry Collector.
Validate
Verify logs are flowing correctly by comparing against your source inventory.
Check Logs Are Arriving
- In SigNoz, navigate to Logs in the left sidebar.
- Use the Logs Explorer to browse recent logs.
- Verify logs from each source in your inventory appear.
Verify Log Attributes
- Click on a log entry to expand it.
- Check that parsed fields (timestamp, severity, service name) are correct.
- Verify file paths or container names match your sources.
Test Search and Filtering
- Search for a known log message using full-text search.
- Filter by severity level (e.g., `severity_text = 'ERROR'`).
- Filter by service or source (e.g., `service.name = 'your-service'`).
Troubleshooting
Logs not appearing in SigNoz
- Check Collector status: Verify the OpenTelemetry Collector is running and check its logs for errors.
- Verify file permissions: Ensure the Collector has read access to log files.
- Check include paths: Confirm the `include` patterns in the `filelog` receiver match your log file paths.
- Test connectivity: Verify the Collector can reach `ingest.<region>.signoz.cloud:443`. To tell collection problems from export problems, see the debug-exporter sketch below.
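One way to isolate where logs are getting lost is to temporarily add the Collector's `debug` exporter alongside `otlp`, so everything it reads is also printed to its own stdout. A sketch, assuming the receivers from Step 3:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [otlp, filelog]
      processors: [batch]
      exporters: [otlp, debug]   # remove debug once you've confirmed logs are being read

If logs show up in the Collector's output but not in SigNoz, the issue is on the export side (endpoint, key, or network); if nothing is printed, the receivers aren't picking the logs up.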
Logs appear but are unparsed
If logs show as raw text without parsed fields:
- Check regex patterns: Test your regex against sample log lines using a regex tester.
- Verify operator order: Parsers must be in the correct sequence.
- Check timestamp format: Ensure the `layout` matches your actual timestamp format.
Missing logs from some sources
- Check start_at setting: Use `start_at: end` for new logs only, `start_at: beginning` to include existing logs.
- Verify exclude patterns: Ensure you're not accidentally excluding wanted log files.
- Check file rotation: The Collector handles rotation, but verify log files aren't being truncated.
Duplicate logs
If you see the same log multiple times:
- Check for multiple collectors: Ensure only one Collector instance is reading each log file.
- Review include patterns: Overlapping patterns can cause duplicates.
Timestamp parsing issues
If logs appear with wrong timestamps:
- Verify timezone: Ensure the `layout` includes the timezone when your logs contain one.
- Check layout format: the default `strptime` layout type uses `%`-style directives (as in the example above); with `layout_type: gotime`, layouts use the Go reference time `2006-01-02 15:04:05`.
- Test with sample: Parse a single log line first to verify the pattern (see the sketch below).
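For reference, the same timestamp can be described with either layout type inside a parser's `timestamp` block. A small sketch with an illustrative `ts` capture group:

operators:
  - type: regex_parser
    regex: '^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<message>.*)$'
    timestamp:
      parse_from: attributes.ts
      layout: '%Y-%m-%d %H:%M:%S'     # default layout_type (strptime) uses %-style directives
      # layout_type: gotime           # with gotime, use the Go reference time instead:
      # layout: '2006-01-02 15:04:05'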
Next Steps
Once your logs are flowing to SigNoz:
- Create dashboards with log-based panels
- Set up log-based alerts for error patterns
- Configure log pipelines for advanced processing
- Correlate logs with traces using trace IDs
- Explore Logs Explorer features for analysis