This page applies to both SigNoz Cloud and self-hosted SigNoz editions.

Migrate Logs from LGTM Stack

Overview

This guide walks you through migrating logs from Loki (LGTM Stack) to SigNoz. You will:

  1. Inventory your current log sources
  2. Set up the OpenTelemetry Collector
  3. Configure log collection to replace Promtail/FluentBit
  4. Validate logs are flowing correctly

SigNoz uses the OpenTelemetry Collector for log collection, which provides a more robust, standards-compliant alternative to Promtail.

Prerequisites

Before starting, ensure you have:

  • A SigNoz account (Cloud) or a running SigNoz instance (Self-Hosted)
  • Access to your existing log collection configuration (Promtail config, FluentBit config)
  • Administrative access to deploy the OpenTelemetry Collector

Step 1: Assess Your Current Log Sources

Before migrating, list what you are currently collecting in Loki.

List Your Log Streams

Run this LogQL query in Grafana (against your Loki datasource) to see log volume by job over the last hour:

sum by (job) (count_over_time({job=~".+"}[1h]))

This returns the jobs (e.g., varlogs, kubernetes-pods, systemd) that are sending logs, along with their line counts.
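If you prefer scripting the inventory, Loki's label-values HTTP API returns the same list of jobs. A minimal Python sketch, assuming Loki is reachable at a local base URL (adjust to your environment):

```python
# Sketch: inventory Loki jobs via the label-values HTTP API.
import json
import urllib.request


def label_values_url(base_url: str, label: str) -> str:
    """Build the Loki HTTP API URL that lists all values of a label."""
    return f"{base_url.rstrip('/')}/loki/api/v1/label/{label}/values"


def list_jobs(base_url: str) -> list:
    """Return every value of the `job` label currently known to Loki."""
    with urllib.request.urlopen(label_values_url(base_url, "job")) as resp:
        payload = json.load(resp)
    return payload.get("data", [])


if __name__ == "__main__":
    # Hypothetical local Loki; point this at your instance.
    print(list_jobs("http://localhost:3100"))
```

Each job returned here should map to one row in the migration table below.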

Categorize Your Sources

Group your logs by how they are collected:

| Source Type | Current Agent | Migration Path |
|---|---|---|
| File Logs | Promtail (static_configs) | Use Filelog Receiver |
| Kubernetes Pods | Promtail (kubernetes_sd_configs) | Use K8s Infra Chart |
| Systemd/Journald | Promtail (journal) | Use Journald Receiver |
| Syslog | Promtail (syslog) | Use Syslog Receiver |
| FluentBit | FluentBit | Forward to OTel Collector |

Step 2: Set Up the OpenTelemetry Collector

Most migration paths require the OpenTelemetry Collector.

  1. Install the OpenTelemetry Collector in your environment.
  2. Configure the OTLP exporter to send logs to SigNoz Cloud.
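A minimal exporter sketch for step 2; the region placeholder and ingestion key are values you substitute from your own SigNoz account, and the exact endpoint for your edition is listed in the SigNoz setup docs:

```yaml
exporters:
  otlp:
    endpoint: ingest.<region>.signoz.cloud:443   # replace <region> with your SigNoz Cloud region
    tls:
      insecure: false
    headers:
      signoz-ingestion-key: <your-ingestion-key>  # from your SigNoz Cloud settings
```

For self-hosted SigNoz, point the endpoint at your own SigNoz OTel Collector instead and omit the ingestion key.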

Step 3: Migrate Each Log Source

Work through each source type from your inventory.

From Promtail (File Logs)

If you use Promtail to tail static files (e.g., /var/log/*.log), use the filelog receiver in the OpenTelemetry Collector.

Promtail Config:

scrape_configs:
- job_name: system
  static_configs:
  - targets:
      - localhost
    labels:
      job: varlogs
      __path__: /var/log/*.log

OTel Collector Config:

otel-collector-config.yaml
receivers:
  filelog:
    include:
      - /var/log/*.log
    start_at: end
    include_file_path: true
    include_file_name: true

Refer to File Log Collection for more details.
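Note that a receiver only collects logs once it is wired into a logs pipeline together with an exporter. A minimal sketch, assuming the OTLP exporter from Step 2 is named otlp:

```yaml
service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [otlp]
```

The same pipeline wiring applies to the journald, syslog, and fluentforward receivers in the sections below.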

From Kubernetes Pods

If you use Promtail to collect Kubernetes pod logs, we recommend switching to the SigNoz K8s Infra Helm Chart. It automatically deploys an OpenTelemetry Collector DaemonSet configured to collect logs from all pods, enrich them with K8s metadata, and send them to SigNoz.

See Kubernetes Log Collection for details.
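A sketch of the chart values involved; the key names below are assumptions that may differ by chart version, so check the k8s-infra chart's values reference before deploying:

```yaml
# values.yaml for the signoz/k8s-infra chart (key names may vary by chart version)
otelCollectorEndpoint: ingest.<region>.signoz.cloud:443
signozApiKey: <your-ingestion-key>
presets:
  logsCollection:
    enabled: true   # tail all pod logs and enrich with K8s metadata
```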

From Systemd (Journald)

If you collect logs from systemd journal:

Promtail Config:

scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h
      labels:
        job: systemd-journal

OTel Collector Config:

otel-collector-config.yaml
receivers:
  journald:
    directory: /var/log/journal
    start_at: end

Refer to Systemd Log Collection for more details.
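If your Promtail journal scrape was limited to specific units, the journald receiver can filter similarly. A sketch, with the unit names as illustrative examples:

```yaml
receivers:
  journald:
    directory: /var/log/journal
    start_at: end
    units:            # collect only these systemd units (example names)
      - ssh
      - kubelet
    priority: info    # drop entries below this priority
```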

From Syslog

If you collect syslog messages:

OTel Collector Config:

otel-collector-config.yaml
receivers:
  syslog:
    tcp:
      listen_address: '0.0.0.0:54527'
    protocol: rfc3164
    location: UTC
    operators:
      - type: move
        from: attributes.message
        to: body

Refer to Syslog Log Collection for more details.
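On the client side, point your syslog daemon at the Collector's listener. A minimal rsyslog sketch, assuming a hypothetical Collector host collector.example.com and the port configured above:

```
# /etc/rsyslog.d/50-signoz.conf (hypothetical path)
# Forward all facilities/severities to the OTel Collector's syslog receiver over TCP.
*.* action(type="omfwd" target="collector.example.com" port="54527" protocol="tcp")
```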

From FluentBit

If you are already using FluentBit, you can reconfigure it to forward logs to the OpenTelemetry Collector instead of Loki.

  1. Update FluentBit Output:
[OUTPUT]
    Name        forward
    Match       *
    Host        ${OTEL_COLLECTOR_HOST}
    Port        24224
  2. Configure Collector Receiver:
otel-collector-config.yaml
receivers:
  fluentforward:
    endpoint: 0.0.0.0:24224

Refer to FluentBit Log Collection for more details.

Finding More Log Sources

For a complete list of all supported log collection methods, see Send Logs to SigNoz.

Step 4: Log Parsing

Loki typically parses logs at query time (via LogQL parsers such as json and regexp), which repeats the work on every search. SigNoz favors ingestion-time parsing, extracting attributes once so they can be filtered and aggregated efficiently.
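To illustrate the difference, here is a query-time parse in LogQL; the job and level names are illustrative:

```
{job="app"} | json | level="error"
```

In SigNoz, a Log Pipeline extracts level once at ingestion, so the equivalent search becomes a plain attribute filter (level = 'error') in the Logs Explorer, with no per-query parsing.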

Ingestion-Time Parsing (Log Pipelines)

We recommend parsing logs to extract structured attributes. This makes queries faster and allows for aggregation.

Use SigNoz Log Pipelines in the UI to build parsers (JSON, Regex, Grok) to extract structured attributes.

See Log Pipelines.

Validate

Verify logs are flowing correctly.

Check Logs Are Arriving

  1. In SigNoz, navigate to Logs in the left sidebar.
  2. Use the Logs Explorer to browse recent logs.
  3. Verify logs from each source in your inventory appear.

Verify Attributes

  1. Click on a log entry to expand it.
  2. Check that attributes like job, service.name, k8s.pod.name are present and correct.

Troubleshooting

Logs not appearing

  1. Check Collector status: Verify the OpenTelemetry Collector is running.
  2. Check file permissions: Ensure the Collector has read access to log files (/var/log/...).
  3. Check include paths: Verify glob patterns match your files.

Unparsed logs

If logs appear as raw text:

  1. Check Log Pipelines: Ensure you have a pipeline configured to parse the logs.
  2. Check JSON parsing: If logs are JSON, use the JSON parser processor in Log Pipelines.

Next Steps

Once your logs are flowing to SigNoz, set up Log Pipelines to parse them, then build dashboards and alerts on the extracted attributes.

Last updated: November 30, 2025
