This page is relevant for both SigNoz Cloud and self-hosted SigNoz editions.

Migrate Logs from New Relic

Overview

This guide walks you through migrating logs from New Relic to SigNoz. You will:

  1. Assess your current log sources in New Relic
  2. Set up the OpenTelemetry Collector
  3. Migrate each log source
  4. Configure log parsing to match your formats
  5. Validate that logs are flowing correctly to SigNoz

Key Differences: New Relic vs SigNoz

| Aspect | New Relic | SigNoz |
|---|---|---|
| Collection | APM agents, Infrastructure agent, Fluentd, Log API | OpenTelemetry Collector (receivers for OTLP, files, Fluent Bit forward, and more) |
| Query Language | NRQL, Lucene-like search | Query Builder, ClickHouse SQL |
| Storage | NRDB (proprietary) | ClickHouse (open source) |
| Log Processing | Parsing rules, drop filters | Log pipelines, OTel processors |

SigNoz uses the OpenTelemetry Collector for log collection, which provides flexible parsing and routing capabilities.

Prerequisites

Before starting, ensure you have:

  • A SigNoz account (Cloud) or a running SigNoz instance (Self-Hosted)
  • Access to your New Relic account to review current log sources
  • Access to your log sources (servers, containers, applications)

Step 1: Assess Your Current Log Sources

Before migrating, inventory what's sending logs to New Relic.

List Your Log Sources

Run this NRQL query in New Relic's Query Builder:

SELECT uniques(entity.name), uniques(hostname), uniques(filePath)
FROM Log 
SINCE 7 days ago 
LIMIT MAX

For a breakdown by source type:

SELECT count(*) 
FROM Log
SINCE 7 days ago
FACET instrumentation.provider, entity.name
LIMIT MAX

Categorize Your Log Sources

Group logs by how they're being collected:

| Source Type | How to Identify | Migration Path |
|---|---|---|
| Kubernetes | k8s.cluster.name, k8s.pod.name attributes | Kubernetes log collection |
| Docker | container.name, container.id attributes | Docker log collection |
| APM agent logs | instrumentation.provider contains the agent name | Application logs via OTel SDK |
| Fluentd/Fluent Bit | You have Fluentd/Fluent Bit configured | Redirect to OTel Collector |
| Log API | Direct API calls in your code | Use OTLP exporter |
| Cloud Provider Logs | cloud.provider attribute | Cloud provider log collection |
| Infrastructure agent | hostname populated, file paths like /var/log/* | File logs via filelog receiver |

Save this inventory. You'll use it to validate your migration is complete.

Step 2: Set Up the OpenTelemetry Collector

If you haven't already:

  1. Install the OpenTelemetry Collector in your environment.

  2. Configure the OTLP exporter for logs:

otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlp:
    endpoint: ingest.<region>.signoz.cloud:443
    headers:
      signoz-ingestion-key: <SIGNOZ_INGESTION_KEY>
    tls:
      insecure: false

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlp]

Replace:

  • <region> with your SigNoz Cloud region (us, in, or eu)
  • <SIGNOZ_INGESTION_KEY> with your SigNoz ingestion key

Step 3: Migrate Each Log Source

Work through each source type from your inventory.

From Kubernetes

For Kubernetes environments, the easiest approach is the SigNoz K8s Infra Helm chart, which automatically collects logs from all pods, including your application logs.

💡 Tip

If you're using the SigNoz Kubernetes Infra Helm chart, all pod logs are automatically collected, including application logs. You don't need separate application log configuration—stdout/stderr from all containers will appear in SigNoz immediately after deploying the chart.

This means if your application writes logs to stdout (which is the standard practice in Kubernetes), they're already being collected. No additional instrumentation needed.

Manual setup (if not using the Helm chart)

If you prefer manual setup, deploy the OpenTelemetry Collector as a DaemonSet and configure the filelog receiver for container logs.
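
For orientation, a minimal filelog receiver sketch for a DaemonSet-deployed Collector might look like the following. The pod log path is the standard kubelet layout, and the container operator ships in recent collector-contrib releases; verify both against your cluster and Collector version:

```yaml
receivers:
  filelog:
    include:
      - /var/log/pods/*/*/*.log
    exclude:
      # avoid collecting the Collector's own logs
      - /var/log/pods/*otel-collector*/*/*.log
    start_at: end
    include_file_path: true
    operators:
      # recent collector-contrib releases ship a `container` operator that
      # handles docker/containerd/CRI-O log formats and attaches Kubernetes
      # metadata parsed from the file path
      - type: container

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlp]
```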

From Docker Containers

For standalone Docker environments, you can collect logs from all containers at once—including your application logs—using the OpenTelemetry Collector.

💡 Tip

By mounting Docker's log directory, you automatically collect logs from every container, including application containers. If your apps write to stdout/stderr (Docker's default logging), no additional application-level log configuration is needed.

See Collect Docker Container Logs for complete setup instructions including:

  • Running the OTel Collector as a Docker container
  • Configuring the filelog receiver for /var/lib/docker/containers/*/*.log
  • Parsing container metadata
  • Filtering specific containers
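
For orientation, Docker's default json-file driver writes one JSON object per line (with log, stream, and time fields), so the parsing step can be sketched as follows. The timestamp layout and field names assume the default driver and are worth verifying against your daemon configuration:

```yaml
receivers:
  filelog:
    include:
      - /var/lib/docker/containers/*/*.log
    start_at: end
    include_file_path: true
    operators:
      # lift "log", "stream", and "time" out of each JSON-encoded line
      - type: json_parser
        timestamp:
          parse_from: attributes.time
          layout_type: gotime
          layout: '2006-01-02T15:04:05.999999999Z07:00'
```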

From New Relic APM Agent Log Forwarding

If logs were forwarded via New Relic APM agents, you have several options depending on your language.

Java: Use the OpenTelemetry Java Agent

The OTel Java agent can collect logs automatically from Logback and Log4j without code changes:

OTEL_LOGS_EXPORTER=otlp \
OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443" \
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<SIGNOZ_INGESTION_KEY>" \
OTEL_RESOURCE_ATTRIBUTES=service.name=<service_name> \
java -javaagent:/path/opentelemetry-javaagent.jar -jar <myapp>.jar

See Collecting Application Logs Using OTEL Java Agent for Logback/Log4j configuration options.

Python: Enable logs auto-instrumentation

Python logs can be collected automatically without SDK code changes—just add an environment variable:

OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.<region>.signoz.cloud:443 \
OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=<SIGNOZ_INGESTION_KEY> \
opentelemetry-instrument --logs_exporter otlp python app.py

See Python Logs Auto-Instrumentation for details.

Node.js: Use Pino or Winston

For Node.js applications, use OpenTelemetry-compatible logging libraries:

  • Pino: Install pino-opentelemetry-transport and configure the transport. See Node.js Pino Logs.
  • Winston: Install @opentelemetry/winston-transport and add the transport. See Node.js Winston Logs.

Other languages and methods

For all available log collection methods, see the Send Logs to SigNoz central guide which covers:

  • File-based collection
  • Syslog
  • HTTP/OTLP direct send
  • Language-specific integrations

From Fluentd/FluentBit

If you're using Fluentd or FluentBit, redirect output to the OpenTelemetry Collector.

FluentBit configuration:

[OUTPUT]
  Name        forward
  Match       *
  Host        ${OTEL_COLLECTOR_HOST}
  Port        8006

OTel Collector configuration to receive:

otel-collector-config.yaml
receivers:
  fluentforward:
    endpoint: 0.0.0.0:8006

service:
  pipelines:
    logs:
      receivers: [fluentforward]
      exporters: [otlp]

See FluentBit to SigNoz for detailed setup.

From Log API

If you're sending logs directly to New Relic's Log API, you can instead send logs to SigNoz over HTTP.

Before (New Relic Log API):

curl -X POST https://log-api.newrelic.com/log/v1 \
  -H "Content-Type: application/json" \
  -H "Api-Key: <NEW_RELIC_LICENSE_KEY>" \
  -d '{"message": "Log message", "level": "info"}'

After (HTTP to SigNoz):

curl --location 'https://ingest.<REGION>.signoz.cloud:443/logs/json' \
--header 'Content-Type: application/json' \
--header 'signoz-ingestion-key: <SIGNOZ_INGESTION_KEY>' \
--data '[
    {
        "trace_id": "000000000000000018c51935df0b93b9",
        "span_id": "18c51935df0b93b9",
        "trace_flags": 0,
        "severity_text": "info",
        "severity_number": 4,
        "attributes": {
            "method": "GET",
            "path": "/api/users"
        },
        "resources": {
            "host": "myhost",
            "namespace": "prod"
        },
        "message": "This is a log line"
    }
]'

See Send Logs to SigNoz for detailed setup.
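
If those API calls live in application code, you can swap them for a small HTTP client. A minimal Python sketch using only the standard library (the build_log_record helper and the placeholder endpoint/key are illustrative, not a SigNoz SDK):

```python
import json
import urllib.request

SIGNOZ_ENDPOINT = "https://ingest.<REGION>.signoz.cloud:443/logs/json"
INGESTION_KEY = "<SIGNOZ_INGESTION_KEY>"

def build_log_record(message, severity="info", attributes=None, resources=None):
    """Build one record in the JSON shape shown in the curl example above."""
    return {
        "severity_text": severity,
        "attributes": attributes or {},
        "resources": resources or {},
        "message": message,
    }

def send_logs(records):
    """POST a batch of log records to the SigNoz /logs/json endpoint."""
    req = urllib.request.Request(
        SIGNOZ_ENDPOINT,
        data=json.dumps(records).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "signoz-ingestion-key": INGESTION_KEY,
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Build (but don't send) a sample batch
batch = [
    build_log_record(
        "This is a log line",
        attributes={"method": "GET", "path": "/api/users"},
        resources={"host": "myhost", "namespace": "prod"},
    )
]
```

Batching records into one request, as in the curl example, keeps request overhead low for high-volume sources.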

From Cloud Provider Logs

If you were using New Relic's cloud integrations for AWS, Azure, or GCP logs, SigNoz provides native integrations:

| Provider | Log Sources | Setup Guide |
|---|---|---|
| AWS | CloudWatch Logs, S3, Lambda | AWS Cloud Integrations |
| Azure | Azure Monitor, Event Hubs | Azure Monitoring |
| GCP | Cloud Logging, Pub/Sub | GCP Monitoring |

From Infrastructure Agent (File Logs)

Replace New Relic Infrastructure agent file log collection with the filelog receiver.

Step 1: Add the filelog receiver

otel-collector-config.yaml
receivers:
  filelog:
    include:
      - /var/log/myapp/*.log
      - /var/log/syslog
    start_at: end
    include_file_path: true
    include_file_name: true

Step 2: Enable in the logs pipeline

otel-collector-config.yaml
service:
  pipelines:
    logs:
      receivers: [otlp, filelog]
      processors: [batch]
      exporters: [otlp]

See Collect Logs from File for more information.

Finding More Log Sources

For a complete list of all supported log collection methods, see Send Logs to SigNoz - this central guide covers:

  • Application logs (by language)
  • Infrastructure logs (syslog, journald)
  • Cloud provider logs
  • Container and Kubernetes logs
  • Third-party integrations

Step 4: Log Parsing

SigNoz provides two approaches for parsing logs.

Option 1: Log Pipelines (Recommended)

SigNoz includes a UI-based Log Pipelines feature that lets you parse and transform logs without modifying collector configuration. This is the recommended approach because:

  • No collector restarts required
  • Visual interface for building parsers
  • Test patterns against sample logs before applying
  • Easy to modify and maintain

Log Pipelines support multiple processors:

| Processor | Use Case |
|---|---|
| Regex | Extract fields using regular expressions |
| Grok | Use predefined patterns (like %{IP:client_ip}) for common formats |
| JSON Parser | Parse JSON log bodies into attributes |
| Timestamp Parser | Extract and normalize timestamps |
| Severity Parser | Map log levels to standard severity |
| Add/Remove/Move/Copy | Transform log attributes |

See Log Pipelines Processors for detailed documentation on each processor.
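
To make the JSON Parser row concrete, its effect can be illustrated in a few lines of Python (a conceptual sketch of the transformation, not SigNoz internals):

```python
import json

def parse_json_body(log_body):
    """Sketch of what a JSON Parser processor does: lift fields from a JSON
    log body into top-level attributes; non-JSON bodies pass through unparsed."""
    try:
        fields = json.loads(log_body)
    except ValueError:
        return {}
    if not isinstance(fields, dict):
        return {}
    return fields

attrs = parse_json_body('{"level": "error", "user_id": 42, "msg": "payment failed"}')
# attrs now exposes level, user_id, and msg as individually queryable fields
```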

Option 2: Collector-Level Parsing

For high-volume parsing or when you need to reduce data before it reaches SigNoz, configure parsing in the OpenTelemetry Collector using operators in the filelog receiver.

Example for Apache/Nginx access logs:

operators:
  - type: regex_parser
    regex: '^(?P<remote_addr>[^ ]*) - (?P<remote_user>[^ ]*) \[(?P<time_local>[^\]]*)\] "(?P<request>[^"]*)" (?P<status>[^ ]*) (?P<body_bytes_sent>[^ ]*)'
    timestamp:
      parse_from: attributes.time_local
      layout: '%d/%b/%Y:%H:%M:%S %z'

For collector-level parsing patterns, see Parsing Logs with the OpenTelemetry Collector.
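
Before deploying a pattern like this, it helps to test it against a real log line. Python uses the same (?P<name>...) named-group syntax, so a quick sanity check is easy (the sample line is made up):

```python
import re

# same pattern as the regex_parser operator above
PATTERN = re.compile(
    r'^(?P<remote_addr>[^ ]*) - (?P<remote_user>[^ ]*) '
    r'\[(?P<time_local>[^\]]*)\] "(?P<request>[^"]*)" '
    r'(?P<status>[^ ]*) (?P<body_bytes_sent>[^ ]*)'
)

sample = '203.0.113.7 - alice [10/Oct/2025:13:55:36 +0000] "GET /api/users HTTP/1.1" 200 512'
match = PATTERN.match(sample)
fields = match.groupdict() if match else {}
# fields maps each named group (remote_addr, status, ...) to its extracted value
```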

Step 5: Validate

Verify logs are flowing correctly by comparing against your source inventory.

Check Logs Are Arriving

  1. In SigNoz, navigate to Logs in the left sidebar.
  2. Use the Logs Explorer to browse recent logs.
  3. Verify logs from each source in your inventory appear.

Verify Log Attributes

  1. Click on a log entry to expand it.
  2. Check that parsed fields (timestamp, severity, service name) are correct.
  3. Verify file paths or container names match your sources.

Test Search and Filtering

  1. Search for a known log message using full-text search.
  2. Filter by severity level (e.g., severity_text = 'ERROR').
  3. Filter by service or source (e.g., service.name = 'your-service').

Troubleshooting

Logs not appearing in SigNoz

  1. Check Collector status: Verify the OpenTelemetry Collector is running and check its logs for errors.
  2. Verify file permissions: Ensure the Collector has read access to log files.
  3. Check include paths: Confirm the include patterns in filelog receiver match your log file paths.
  4. Test connectivity: Verify the Collector can reach ingest.<region>.signoz.cloud:443.

Logs appear but are unparsed

If logs show as raw text without parsed fields:

  1. Check regex patterns: Test your regex against sample log lines using a regex tester.
  2. Verify operator order: Parsers must be in the correct sequence.
  3. Check timestamp format: Ensure the layout matches your actual timestamp format.

Missing logs from some sources

  1. Check start_at setting: Use start_at: end for new logs only, start_at: beginning to include existing logs.
  2. Verify exclude patterns: Ensure you're not accidentally excluding wanted log files.
  3. Check file rotation: The Collector handles rotation, but verify log files aren't being truncated.

Duplicate logs

If you see the same log multiple times:

  1. Check for multiple collectors: Ensure only one Collector instance is reading each log file.
  2. Review include patterns: Overlapping patterns can cause duplicates.

Timestamp parsing issues

If logs appear with wrong timestamps:

  1. Verify timezone: Ensure the layout includes timezone if present in logs.
  2. Check layout format: The timestamp parser defaults to strptime-style layouts (e.g. %d/%b/%Y:%H:%M:%S %z); Go reference-time layouts (2006-01-02 15:04:05) apply only when layout_type is set to gotime.
  3. Test with sample: Parse a single log line first to verify the pattern.

Next Steps

Once your logs are flowing to SigNoz, explore the Logs Explorer, build log dashboards, and set up log-based alerts.

Last updated: November 30, 2025
