Overview
This guide walks you through migrating logs from the ELK Stack (Elasticsearch, Logstash, Kibana) and Beats to SigNoz. You will:
- Assess your current log collection (Filebeat, Logstash)
- Set up log collection with OpenTelemetry
- Configure log parsing to match your formats
- Validate logs are flowing correctly to SigNoz
Key Differences: ELK vs SigNoz
| Aspect | ELK Stack | SigNoz |
|---|---|---|
| Collection | Filebeat, Logstash, Elastic Agent | OpenTelemetry Collector, FluentBit |
| Query Language | KQL, Lucene | Query Builder, ClickHouse SQL |
| Storage | Elasticsearch (Indices) | ClickHouse (Columnar) |
| Log Processing | Logstash pipelines, Ingest Node pipelines | Log pipelines, OTel processors |
SigNoz uses the OpenTelemetry Collector for log collection, which provides flexible parsing and routing capabilities similar to Logstash but with a unified agent approach.
Prerequisites
Before starting, ensure you have:
- A SigNoz account (Cloud) or a running SigNoz instance (Self-Hosted)
- Access to your ELK configuration (`filebeat.yml`, `logstash.conf`)
- Access to your log sources (servers, containers)
Step 1: Assess Your Current Log Sources
Before migrating, inventory what's sending logs to Elasticsearch.
Identify Log Shippers
Check your infrastructure for:
- Filebeat: Look for `filebeat.yml` configurations. Note the `paths` (inputs) and `processors`.
- Logstash: Look for `logstash.conf` pipelines. Note the `input` plugins (beats, tcp, file) and `filter` sections.
- Elastic Agent: Check for Fleet policies.
Categorize Your Log Sources
Group logs by how they're being collected:
| Source Type | ELK Component | Migration Path |
|---|---|---|
| File Logs | Filebeat (log input) | Use filelog receiver |
| Kubernetes | Filebeat (DaemonSet) | Kubernetes log collection |
| Docker | Filebeat (docker input) | Docker log collection |
| Logstash Pipelines | Logstash | Forward to OTel Collector |
| Syslog | Logstash/Filebeat syslog input | Use syslog receiver |
Step 2: Set Up the OpenTelemetry Collector
If you haven't already, install the OpenTelemetry Collector in your environment and configure its OTLP exporter to send logs to SigNoz.
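As a concrete sketch, the exporter section below assumes SigNoz Cloud; `<region>` and the ingestion key are placeholders you must fill in. For self-hosted SigNoz, point the endpoint at your own SigNoz OTel Collector instead.

```yaml
exporters:
  otlp:
    endpoint: "ingest.<region>.signoz.cloud:443"
    tls:
      insecure: false
    headers:
      # Found under Settings -> Ingestion Settings in SigNoz Cloud
      "signoz-ingestion-key": "<your-ingestion-key>"
```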
Step 3: Migrate Each Log Source
Work through each source type from your inventory. For all available log collection methods, see the Send Logs to SigNoz central guide.
From Filebeat (File Logs)
Replace Filebeat's file input with the OpenTelemetry Collector's filelog receiver.
Before (Filebeat):
```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/myapp/*.log
```
After (OTel Collector):
```yaml
receivers:
  filelog:
    include:
      - /var/log/myapp/*.log
    start_at: end
    include_file_path: true
    include_file_name: true
    operators:
      # Optional: Parse JSON logs if applicable
      - type: json_parser
        if: 'body matches "^\\{"'
```
Enable it in the pipeline:
```yaml
service:
  pipelines:
    logs:
      receivers: [otlp, filelog]
      processors: [batch]
      exporters: [otlp]
```
See Collect Logs from File for more details.
From Kubernetes
If you were using Filebeat as a DaemonSet, replace it with the SigNoz K8s-Infra Helm chart. Once the chart is deployed, stdout/stderr from all containers is collected automatically, application logs included, so no separate per-application log configuration is needed.
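A minimal values override for the `signoz/k8s-infra` chart might look like the sketch below, assuming SigNoz Cloud; the cluster name and environment label are hypothetical, and you should verify the exact value names against your chart version's values reference.

```yaml
# values.yaml for the signoz/k8s-infra Helm chart (verify field names
# against your chart version before deploying)
global:
  clusterName: "my-cluster"        # hypothetical cluster name
  deploymentEnvironment: "prod"    # hypothetical environment label
otelCollectorEndpoint: ingest.<region>.signoz.cloud:443
otelInsecure: false
signozApiKey: "<your-ingestion-key>"
presets:
  logsCollection:
    enabled: true
```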
From Docker Containers
If you were using Filebeat's Docker input, use the OTel Collector to collect logs from all containers.
By mounting Docker's log directory, you automatically collect logs from every container. See Collect Docker Container Logs.
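A minimal receiver sketch for this setup, assuming containers use Docker's default `json-file` logging driver (which writes one JSON object per line with `log`, `stream`, and `time` fields) and that `/var/lib/docker/containers` is mounted into the Collector:

```yaml
receivers:
  filelog/docker:
    include:
      - /var/lib/docker/containers/*/*.log  # default json-file driver path
    start_at: end
    operators:
      # Each line looks like {"log": "...", "stream": "stdout", "time": "..."}
      - type: json_parser
        timestamp:
          parse_from: attributes.time
          layout_type: gotime
          layout: '2006-01-02T15:04:05.999999999Z07:00'
      # Promote the container's actual log line to the record body
      - type: move
        from: attributes.log
        to: body
```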
From Logstash
If you have complex Logstash pipelines you wish to retain temporarily, you can configure Logstash to forward logs to the OpenTelemetry Collector.
Logstash Configuration:
```
output {
  http {
    url => "http://<otel-collector-host>:4318/v1/logs"
    http_method => "post"
    format => "json_batch"
  }
}
```
OTel Collector Configuration: Ensure the otlp receiver is enabled on HTTP (port 4318).
```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
```
Enable it in the pipeline:
```yaml
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```
See Logstash to SigNoz for detailed setup.
From Syslog
Replace Logstash/Filebeat syslog inputs with the syslog receiver in your OpenTelemetry Collector configuration.
```yaml
receivers:
  syslog:
    tcp:
      listen_address: '0.0.0.0:54527'
    protocol: rfc3164
    location: UTC
    operators:
      - type: move
        from: attributes.message
        to: body
```
Enable it in the pipeline:
```yaml
service:
  pipelines:
    logs:
      receivers: [otlp, syslog]
      processors: [batch]
      exporters: [otlp]
```
See Syslog to SigNoz for detailed setup.
Step 4: Log Parsing
SigNoz provides two approaches for parsing logs, similar to Logstash filters or Ingest Node pipelines.
Option 1: Log Pipelines (Recommended)
SigNoz includes a UI-based Log Pipelines feature that lets you parse and transform logs without modifying collector configuration. This is the recommended approach.
| Processor | ELK Equivalent | Use Case |
|---|---|---|
| Regex | Grok / Dissect | Extract fields using regex |
| Grok | Grok | Use predefined patterns |
| JSON Parser | JSON filter | Parse JSON log bodies |
| Timestamp Parser | Date filter | Extract and normalize timestamps |
| Add/Remove | Mutate filter | Transform log attributes |
See Log Pipelines Processors for details.
Option 2: Collector-Level Parsing
For high-volume parsing, configure parsing in the OpenTelemetry Collector using operators in the filelog receiver or processors.
```yaml
operators:
  - type: regex_parser
    regex: '^(?P<time>[^ ]*) (?P<severity>[^ ]*) (?P<msg>.*)$'
    timestamp:
      parse_from: attributes.time
      layout: '%Y-%m-%d %H:%M:%S'
```
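The same operator can also map the extracted level onto the log record's severity field, the rough equivalent of Logstash's mutate-plus-level handling. A sketch, assuming levels such as `INFO` or `ERROR` appear in the second space-separated field:

```yaml
operators:
  - type: regex_parser
    regex: '^(?P<time>[^ ]*) (?P<severity>[^ ]*) (?P<msg>.*)$'
    timestamp:
      parse_from: attributes.time
      layout: '%Y-%m-%d %H:%M:%S'
    severity:
      # Sets the record's severity from the captured field
      parse_from: attributes.severity
```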
Step 5: Validate
Verify logs are flowing correctly by comparing against your source inventory.
Check Logs Are Arriving
- In SigNoz, navigate to Logs in the left sidebar.
- Use the Logs Explorer to browse recent logs.
- Verify logs from each source in your inventory appear.
Verify Log Attributes
- Click on a log entry to expand it.
- Check that parsed fields (timestamp, severity, service name) are correct.
- Verify file paths or container names match your sources.
Troubleshooting
Logs not appearing in SigNoz
- Check Collector status: Verify the OpenTelemetry Collector is running and check its logs for errors.
- Verify file permissions: Ensure the Collector has read access to log files.
- Check include paths: Confirm the
includepatterns infilelogreceiver match your log file paths. - Test connectivity: Verify the Collector can reach
ingest.<region>.signoz.cloud:443.
Logs appear but are unparsed
If logs show as raw text without parsed fields:
- Check pipeline configuration: Ensure your Log Pipeline processors are correctly configured in SigNoz UI.
- Verify regex/grok: Test your patterns against sample log lines.
- Check timestamp format: Ensure the `layout` matches your actual timestamp format.
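A common gotcha is a timestamp that is a Unix epoch rather than a formatted string; in that case the parser needs an explicit `layout_type`. A sketch, assuming a hypothetical `attributes.ts` field holding epoch seconds:

```yaml
timestamp:
  parse_from: attributes.ts
  layout_type: epoch
  layout: s    # seconds since the Unix epoch; ms, us, ns also exist
```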
Duplicate logs
If you see the same log multiple times:
- Check for multiple collectors: Ensure only one Collector instance is reading each log file.
- Review include patterns: Overlapping patterns can cause duplicates.
Next Steps
Once your logs are flowing to SigNoz:
- Create dashboards with log-based panels
- Set up log-based alerts for error patterns
- Configure log pipelines for advanced processing
- Correlate logs with traces using trace IDs