This page is relevant for both SigNoz Cloud and self-hosted SigNoz editions.

Migrate Logs from ELK Stack

Overview

This guide walks you through migrating logs from the ELK Stack (Elasticsearch, Logstash, Kibana) and Beats to SigNoz. You will:

  1. Assess your current log collection (Filebeat, Logstash)
  2. Set up the OpenTelemetry Collector
  3. Migrate each log source to an OpenTelemetry receiver
  4. Configure log parsing to match your formats
  5. Validate logs are flowing correctly to SigNoz

Key Differences: ELK vs SigNoz

| Aspect | ELK Stack | SigNoz |
| --- | --- | --- |
| Collection | Filebeat, Logstash, Elastic Agent | OpenTelemetry Collector, FluentBit |
| Query Language | KQL, Lucene | Query Builder, ClickHouse SQL |
| Storage | Elasticsearch (indices) | ClickHouse (columnar) |
| Log Processing | Logstash pipelines, Ingest Node pipelines | Log Pipelines, OTel processors |

SigNoz uses the OpenTelemetry Collector for log collection, which provides flexible parsing and routing capabilities similar to Logstash but with a unified agent approach.
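
To make the comparison concrete, here is a minimal sketch of a single Collector config that covers both collection (Filebeat's role) and forwarding (Logstash's role). The file path matches the example used later in this guide, and the Cloud endpoint placeholder is configured properly in Step 2:

otel-collector-config.yaml
receivers:
  filelog:            # tails log files, replacing Filebeat inputs
    include:
      - /var/log/myapp/*.log
processors:
  batch: {}           # batches log records before export
exporters:
  otlp:
    endpoint: ingest.<region>.signoz.cloud:443   # SigNoz Cloud; see Step 2
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlp]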

Prerequisites

Before starting, ensure you have:

  • A SigNoz account (Cloud) or a running SigNoz instance (Self-Hosted)
  • Access to your ELK configuration (filebeat.yml, logstash.conf)
  • Access to your log sources (servers, containers)

Step 1: Assess Your Current Log Sources

Before migrating, inventory what's sending logs to Elasticsearch.

Identify Log Shippers

Check your infrastructure for:

  • Filebeat: Look for filebeat.yml configurations. Note the paths (inputs) and processors.
  • Logstash: Look for logstash.conf pipelines. Note the input plugins (beats, tcp, file) and filter sections.
  • Elastic Agent: Check for Fleet policies.

Categorize Your Log Sources

Group logs by how they're being collected:

| Source Type | ELK Component | Migration Path |
| --- | --- | --- |
| File Logs | Filebeat (log input) | Use the filelog receiver |
| Kubernetes | Filebeat (DaemonSet) | Kubernetes log collection |
| Docker | Filebeat (docker input) | Docker log collection |
| Logstash Pipelines | Logstash | Forward to the OTel Collector |
| Syslog | Logstash/Filebeat syslog input | Use the syslog receiver |

Step 2: Set Up the OpenTelemetry Collector

If you haven't already:

  1. Install the OpenTelemetry Collector in your environment.

  2. Configure the OTLP exporter so logs are sent to SigNoz, as shown in the sketch below.
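
A minimal sketch of that exporter block for SigNoz Cloud follows; the <region> placeholder and ingestion key come from your SigNoz account, and self-hosted installs point at their own Collector endpoint instead:

otel-collector-config.yaml
exporters:
  otlp:
    endpoint: ingest.<region>.signoz.cloud:443
    tls:
      insecure: false
    headers:
      # Ingestion key from your SigNoz Cloud account
      signoz-ingestion-key: <SIGNOZ_INGESTION_KEY>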

Step 3: Migrate Each Log Source

Work through each source type from your inventory. For all available log collection methods, see the Send Logs to SigNoz central guide.

From Filebeat (File Logs)

Replace Filebeat's file input with the OpenTelemetry Collector's filelog receiver.

Before (Filebeat):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/myapp/*.log

After (OTel Collector):

otel-collector-config.yaml
receivers:
  filelog:
    include:
      - /var/log/myapp/*.log
    start_at: end
    include_file_path: true
    include_file_name: true
    operators:
      # Optional: Parse JSON logs if applicable
      - type: json_parser
        if: 'body matches "^\\{"'

Enable it in the pipeline:

otel-collector-config.yaml
service:
  pipelines:
    logs:
      receivers: [otlp, filelog]
      processors: [batch]
      exporters: [otlp]

See Collect Logs from File for more details.
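
If your Filebeat input used multiline settings (for example, to keep stack traces together), the filelog receiver has an equivalent multiline option. A sketch, assuming each log record starts with an ISO date:

otel-collector-config.yaml
receivers:
  filelog:
    include:
      - /var/log/myapp/*.log
    multiline:
      # Lines not matching this pattern are appended to the previous record
      line_start_pattern: '^\d{4}-\d{2}-\d{2}'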

From Kubernetes

If you were using Filebeat as a DaemonSet, replace it with the SigNoz K8s Infra Helm chart.

💡 Tip

If you're using the SigNoz Kubernetes Infra Helm chart, all pod logs are automatically collected, including application logs. You don't need separate application log configuration—stdout/stderr from all containers will appear in SigNoz immediately after deploying the chart.
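
For reference, a minimal override-values.yaml for the k8s-infra chart pointing at SigNoz Cloud might look like the sketch below; the value names follow the chart's documented settings, and the cluster name, region, and key placeholders are yours to fill in:

override-values.yaml
global:
  clusterName: <cluster-name>
otelCollectorEndpoint: ingest.<region>.signoz.cloud:443
otelInsecure: false
signozApiKey: <SIGNOZ_INGESTION_KEY>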

From Docker Containers

If you were using Filebeat's Docker input, use the OTel Collector to collect logs from all containers.

💡 Tip

By mounting Docker's log directory, you automatically collect logs from every container. See Collect Docker Container Logs.
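
As a sketch of what that looks like with the filelog receiver, assuming Docker's default json-file logging driver:

otel-collector-config.yaml
receivers:
  filelog:
    include:
      - /var/lib/docker/containers/*/*-json.log
    start_at: end
    operators:
      # The json-file driver wraps each line as {"log": ..., "stream": ..., "time": ...}
      - type: json_parser
      # Promote the original log line to the record body
      - type: move
        from: attributes.log
        to: body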

From Logstash

If you have complex Logstash pipelines you wish to retain temporarily, you can configure Logstash to forward logs to the OpenTelemetry Collector.

Logstash Configuration:

output {
  http {
    url => "http://<otel-collector-host>:4318/v1/logs"
    http_method => "post"
    format => "json_batch"
  }
}

OTel Collector Configuration: Ensure the otlp receiver is enabled on HTTP (port 4318).

otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

Enable it in the pipeline:

otel-collector-config.yaml
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]

See Logstash to SigNoz for detailed setup.

From Syslog

Replace Logstash/Filebeat syslog inputs with the syslog receiver in your OpenTelemetry Collector configuration.

otel-collector-config.yaml
receivers:
  syslog:
    tcp:
      listen_address: '0.0.0.0:54527'
    protocol: rfc3164
    location: UTC
    operators:
      - type: move
        from: attributes.message
        to: body

Enable it in the pipeline:

otel-collector-config.yaml
service:
  pipelines:
    logs:
      receivers: [otlp, syslog]
      processors: [batch]
      exporters: [otlp]

See Syslog to SigNoz for detailed setup.

Step 4: Log Parsing

SigNoz provides two approaches for parsing logs, similar to Logstash filters or Ingest Node pipelines.

Option 1: Log Pipelines (Recommended)

SigNoz includes a UI-based Log Pipelines feature that lets you parse and transform logs without modifying collector configuration. This is the recommended approach. Its processors map to familiar ELK concepts:

| Processor | ELK Equivalent | Use Case |
| --- | --- | --- |
| Regex | Grok / Dissect | Extract fields using regex |
| Grok | Grok | Use predefined patterns |
| JSON Parser | JSON filter | Parse JSON log bodies |
| Timestamp Parser | Date filter | Extract and normalize timestamps |
| Add/Remove | Mutate filter | Transform log attributes |

See Log Pipelines Processors for details.

Option 2: Collector-Level Parsing

For high-volume parsing, configure parsing in the OpenTelemetry Collector using operators in the filelog receiver or processors.

otel-collector-config.yaml
receivers:
  filelog:
    # ...
    operators:
      - type: regex_parser
        regex: '^(?P<time>[^ ]*) (?P<severity>[^ ]*) (?P<msg>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'
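
For Logstash mutate-style changes that apply across all receivers, the transform processor is another option. A sketch, using hypothetical attribute names:

otel-collector-config.yaml
processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          # Add a static attribute (hypothetical example)
          - set(attributes["deployment.environment"], "production")
          # Drop an attribute carried over from Beats (hypothetical name)
          - delete_key(attributes, "agent.type")

To take effect, transform/logs must also be added to the processors list of the logs pipeline.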

Step 5: Validate

Verify logs are flowing correctly by comparing against your source inventory.

Check Logs Are Arriving

  1. In SigNoz, navigate to Logs in the left sidebar.
  2. Use the Logs Explorer to browse recent logs.
  3. Verify logs from each source in your inventory appear.

Verify Log Attributes

  1. Click on a log entry to expand it.
  2. Check that parsed fields (timestamp, severity, service name) are correct.
  3. Verify file paths or container names match your sources.

Troubleshooting

Logs not appearing in SigNoz

  1. Check Collector status: Verify the OpenTelemetry Collector is running and check its logs for errors.
  2. Verify file permissions: Ensure the Collector has read access to log files.
  3. Check include paths: Confirm the include patterns in filelog receiver match your log file paths.
  4. Test connectivity: Verify the Collector can reach your SigNoz endpoint (for Cloud, ingest.<region>.signoz.cloud:443).

Logs appear but are unparsed

If logs show as raw text without parsed fields:

  1. Check pipeline configuration: Ensure your Log Pipeline processors are correctly configured in SigNoz UI.
  2. Verify regex/grok: Test your patterns against sample log lines.
  3. Check timestamp format: Ensure the layout matches your actual timestamp format.

Duplicate logs

If you see the same log multiple times:

  1. Check for multiple collectors: Ensure only one Collector instance is reading each log file.
  2. Review include patterns: Overlapping patterns can cause duplicates.

Next Steps

Once your logs are flowing to SigNoz, you can explore them in the Logs Explorer, build dashboards and alerts on log data, and begin decommissioning your Filebeat and Logstash agents.
