Overview
This guide walks you through migrating logs from Loki (LGTM Stack) to SigNoz. You will:
- Inventory your current log sources
- Set up the OpenTelemetry Collector
- Configure log collection to replace Promtail/FluentBit
- Validate logs are flowing correctly
SigNoz uses the OpenTelemetry Collector for log collection, which provides a vendor-neutral, standards-based alternative to Promtail.
Prerequisites
Before starting, ensure you have:
- A SigNoz account (Cloud) or a running SigNoz instance (Self-Hosted)
- Access to your existing log collection configuration (Promtail config, FluentBit config)
- Administrative access to deploy the OpenTelemetry Collector
Step 1: Assess Your Current Log Sources
Before migrating, list what you are currently collecting in Loki.
List Your Log Streams
Run this LogQL query in Grafana (against your Loki datasource) to see log volume by job:
```logql
sum by (job) (count_over_time({job=~".+"}[1h]))
```
This returns the log volume for each job (e.g., varlogs, kubernetes-pods, systemd) that is currently sending logs.
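To break the volume down further within a job (for example, to spot which files or pods dominate), you can group by additional labels. The `filename` label below assumes Promtail's default file-based labeling and may differ in your setup:

```logql
sum by (job, filename) (count_over_time({job=~".+"}[1h]))
```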
Categorize Your Sources
Group your logs by how they are collected:
| Source Type | Current Agent | Migration Path |
|---|---|---|
| File Logs | Promtail (static_configs) | Use Filelog Receiver |
| Kubernetes Pods | Promtail (kubernetes_sd_configs) | Use K8s Infra Chart |
| Systemd/Journald | Promtail (journal) | Use Journald Receiver |
| Syslog | Promtail (syslog) | Use Syslog Receiver |
| FluentBit | FluentBit | Forward to OTel Collector |
Step 2: Set Up the OpenTelemetry Collector
Most migration paths require the OpenTelemetry Collector.
- Install the OpenTelemetry Collector in your environment.
- Configure the OTLP exporter to send logs to SigNoz (Cloud or Self-Hosted); a minimal configuration sketch follows below.
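The sketch below shows a minimal Collector configuration for this step. The endpoint and header are placeholders, not exact values: copy the ingestion endpoint and key from your SigNoz Cloud settings (or point the exporter at your self-hosted SigNoz Collector), and note that the ingestion header name has varied across SigNoz versions.

```yaml
receivers:
  otlp:                     # placeholder receiver; Step 3 adds source-specific receivers
    protocols:
      grpc:
      http:

exporters:
  otlp:
    endpoint: "ingest.<region>.signoz.cloud:443"   # placeholder -- use the endpoint from your SigNoz account
    tls:
      insecure: false
    headers:
      "signoz-ingestion-key": "<YOUR_INGESTION_KEY>"   # placeholder -- confirm the header name in the SigNoz docs

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlp]
```

As you work through Step 3, add each new receiver (filelog, journald, syslog, fluentforward) to the `receivers` list of this `logs` pipeline.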
Step 3: Migrate Each Log Source
Work through each source type from your inventory.
From Promtail (File Logs)
If you use Promtail to tail static files (e.g., /var/log/*.log), use the filelog receiver in the OpenTelemetry Collector.
Promtail Config:
```yaml
scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*.log
```
OTel Collector Config:
```yaml
receivers:
  filelog:
    include:
      - /var/log/*.log
    start_at: end
    include_file_path: true
    include_file_name: true
```
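If dashboards or alerts key off Promtail's `job` label, you can attach an equivalent resource attribute on the receiver. The sketch below assumes the filelog receiver's `resource` field (part of its stanza-based configuration); verify it against your Collector version, and remember to add `filelog` to the `receivers` list of your logs pipeline from Step 2.

```yaml
receivers:
  filelog:
    include:
      - /var/log/*.log
    start_at: end
    include_file_path: true
    include_file_name: true
    # Carry the old Promtail label forward so existing queries keep working
    resource:
      job: varlogs
```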
Refer to File Log Collection for more details.
From Kubernetes Pods
If you use Promtail to collect Kubernetes pod logs, we recommend switching to the SigNoz K8s Infra Helm Chart. It automatically deploys an OpenTelemetry Collector DaemonSet configured to collect logs from all pods, enrich them with K8s metadata, and send them to SigNoz.
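As a rough sketch, installing the chart looks like the commands below. The repository URL and chart name come from SigNoz's Helm setup, while the value names (`otelCollectorEndpoint`, `signozApiKey`) may differ between chart versions, so check the chart's values file before running this.

```bash
# Add the SigNoz Helm repository and install the k8s-infra chart (DaemonSet log collector)
helm repo add signoz https://charts.signoz.io
helm repo update
helm install my-k8s-infra signoz/k8s-infra \
  --set otelCollectorEndpoint="ingest.<region>.signoz.cloud:443" \
  --set signozApiKey="<YOUR_INGESTION_KEY>"
```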
See Kubernetes Log Collection for details.
From Systemd (Journald)
If you collect logs from systemd journal:
Promtail Config:
```yaml
scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h
      labels:
        job: systemd-journal
```
OTel Collector Config:
```yaml
receivers:
  journald:
    directory: /var/log/journal
    start_at: end
```
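If your Promtail setup filtered the journal (for example, via relabeling on the systemd unit), the journald receiver can restrict collection at the source. `units` and `priority` are options of the stanza-based journald receiver, but confirm them against your Collector version; also note the receiver shells out to `journalctl`, so that binary must be available to the Collector.

```yaml
receivers:
  journald:
    directory: /var/log/journal
    start_at: end
    # Only collect from specific units, at "info" priority and above (unit names are examples)
    units:
      - ssh
      - kubelet
    priority: info
```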
Refer to Systemd Log Collection for more details.
From Syslog
If you collect syslog messages:
OTel Collector Config:
```yaml
receivers:
  syslog:
    tcp:
      listen_address: '0.0.0.0:54527'
    protocol: rfc3164
    location: UTC
    operators:
      - type: move
        from: attributes.message
        to: body
```
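On the sending side, whatever currently forwards syslog to Promtail needs to point at the Collector instead. With rsyslog, for example, a legacy-syntax forwarding rule might look like the line below (matching the TCP listener on port 54527 configured above; `@@` forwards over TCP, a single `@` over UDP):

```
# Example rsyslog rule: forward all facilities/severities to the OTel Collector over TCP
*.* @@<otel-collector-host>:54527
```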
Refer to Syslog Log Collection for more details.
From FluentBit
If you are already using FluentBit, you can reconfigure it to forward logs to the OpenTelemetry Collector instead of Loki.
- Update FluentBit Output:
```
[OUTPUT]
    Name   forward
    Match  *
    Host   ${OTEL_COLLECTOR_HOST}
    Port   24224
```
- Configure Collector Receiver:
```yaml
receivers:
  fluentforward:
    endpoint: 0.0.0.0:24224
```
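As with the other receivers, `fluentforward` only takes effect once a logs pipeline references it; combined with the OTLP exporter from Step 2, the wiring looks roughly like this:

```yaml
service:
  pipelines:
    logs:
      receivers: [fluentforward]
      exporters: [otlp]
```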
Refer to FluentBit Log Collection for more details.
Finding More Log Sources
For a complete list of all supported log collection methods, see Send Logs to SigNoz.
Step 4: Log Parsing
With Loki, parsing typically happens at query time using LogQL parsers such as `| json` or `| logfmt`. SigNoz instead supports ingestion-time parsing, so structured attributes are extracted once and stored with each log.
Ingestion-Time Parsing (Log Pipelines)
We recommend parsing logs to extract structured attributes. This makes queries faster and allows for aggregation.
Use SigNoz Log Pipelines in the UI to build parsers (JSON, Regex, Grok) to extract structured attributes.
See Log Pipelines.
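If you prefer to keep parsing in the Collector configuration rather than the UI, the stanza operators on the log receivers can do similar work. The sketch below parses JSON log lines and promotes a `level` field to the log's severity; the operator types come from the Collector's stanza library, and the `level` field name is just an example.

```yaml
receivers:
  filelog:
    include:
      - /var/log/app/*.log   # example path
    operators:
      # Parse the JSON body into log attributes
      - type: json_parser
        parse_from: body
      # Map the (example) "level" attribute onto the log's severity
      - type: severity_parser
        parse_from: attributes.level
```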
Validate
Verify logs are flowing correctly.
Check Logs Are Arriving
- In SigNoz, navigate to Logs in the left sidebar.
- Use the Logs Explorer to browse recent logs.
- Verify logs from each source in your inventory appear.
Verify Attributes
- Click on a log entry to expand it.
- Check that attributes like `job`, `service.name`, and `k8s.pod.name` are present and correct.
Troubleshooting
Logs not appearing
- Check Collector status: Verify the OpenTelemetry Collector is running (the debug exporter sketch below can confirm it is actually reading logs).
- Check file permissions: Ensure the Collector has read access to log files (/var/log/...).
- Check include paths: Verify glob patterns match your files.
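A quick way to confirm the Collector is reading and exporting log records is to temporarily add the debug exporter to the logs pipeline and watch the Collector's own output (older Collector releases call this exporter `logging` instead of `debug`); adjust the receiver list to match your setup.

```yaml
exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    logs:
      receivers: [filelog]   # use whichever receivers you configured
      exporters: [otlp, debug]
```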
Unparsed logs
If logs appear as raw text:
- Check Log Pipelines: Ensure you have a pipeline configured to parse the logs.
- Check JSON parsing: If logs are JSON, use the JSON parser processor in Log Pipelines.
Next Steps
Once your logs are flowing to SigNoz:
- Create dashboards with log-based panels
- Set up log-based alerts for error patterns
- Correlate logs with traces using trace IDs