Overview
This guide walks you through migrating logs from Datadog to SigNoz. You will:
- Check your current log sources in Datadog
- Set up log collection with OpenTelemetry
- Configure log parsing to match your formats
- Validate logs are flowing correctly to SigNoz
Key Differences: Datadog vs SigNoz
| Aspect | Datadog | SigNoz |
|---|---|---|
| Collection | Datadog Agent, Log Forwarders, Log API | OpenTelemetry Collector, OTel receivers |
| Query Language | Datadog Log Query Syntax | Query Builder, ClickHouse SQL |
| Storage | Proprietary | ClickHouse (open source) |
| Log Processing | Pipelines, Grok Parser, Remapper | Log Pipelines, OTel processors |
| Log Forwarding | Datadog Agent integrations | FluentBit, Fluentd, OTel receivers |
Prerequisites
Before starting, ensure you have:
- A SigNoz account (Cloud) or a running SigNoz instance (Self-Hosted)
- Access to your Datadog account to review current log sources
- Access to your log sources (servers, containers, applications)
Step 1: Assess Your Current Log Sources
Before migrating, inventory what's sending logs to Datadog.
List Your Log Sources
- Navigate to Logs → Configuration in Datadog.
- Review the Pipelines section to see which log sources are configured.
- Check Indexes to see log volume by source.
Alternatively, run a query in Log Explorer:
* | count by service, source, host
Categorize Your Log Sources
Group logs by how they're being collected:
| Source Type | How to Identify | Migration Path |
|---|---|---|
| Kubernetes | kube_* tags, pod/container metadata | Kubernetes log collection |
| Docker | docker.* source, container tags | Docker log collection |
| Application logs | Service name tags, source: values | Application logs via OTel SDK |
| Datadog Agent (file tailing) | source:file, file path tags | File logs via filelog receiver |
| Syslog | source:syslog | Syslog receiver |
| AWS CloudWatch | source:cloudwatch, AWS tags | AWS integration |
| Log API | Direct API calls in your code | HTTP/OTLP direct send |
Save this inventory. You'll use it to validate your migration is complete.
Step 2: Set Up the OpenTelemetry Collector
If you haven't already, install the OpenTelemetry Collector in your environment and configure its OTLP exporter to send logs to SigNoz.
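As a minimal sketch (assuming SigNoz Cloud; replace the region and ingestion-key placeholders with your own values), the relevant parts of otel-collector-config.yaml might look like this:
# Sketch of otel-collector-config.yaml for sending logs to SigNoz Cloud.
receivers:
  otlp:
    protocols:
      grpc:
      http:
processors:
  batch:
exporters:
  otlp:
    endpoint: ingest.<region>.signoz.cloud:443
    tls:
      insecure: false
    headers:
      signoz-ingestion-key: <SIGNOZ_INGESTION_KEY>
service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]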
Step 3: Migrate Each Log Source
From Kubernetes
For Kubernetes environments, the easiest approach is the SigNoz K8s Infra Helm chart, which automatically collects logs from all pods. Once the chart is deployed, stdout/stderr from every container, including your application logs, appears in SigNoz without any separate application-level log configuration.
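If you take the Helm route, a minimal values override might look roughly like the sketch below; the key names (otelCollectorEndpoint, signozApiKey, clusterName) are illustrative and should be confirmed against the chart's documentation before use:
# Hypothetical values.yaml for the k8s-infra chart; key names are illustrative.
global:
  clusterName: my-cluster              # attached to logs as a resource attribute
otelCollectorEndpoint: ingest.<region>.signoz.cloud:443
otelInsecure: false
signozApiKey: <SIGNOZ_INGESTION_KEY>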
Manual setup (if not using the Helm chart)
Deploy the OpenTelemetry Collector as a DaemonSet and configure the filelog receiver for container logs.
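A minimal sketch of that receiver, assuming the Collector runs as a DaemonSet with the host's /var/log/pods directory mounted in (paths and parsing operators vary by container runtime and cluster setup):
# Sketch: tail pod logs from the node. /var/log/pods must be mounted from the host.
receivers:
  filelog:
    include:
      - /var/log/pods/*/*/*.log
    exclude:
      # avoid ingesting the Collector's own logs (container name is illustrative)
      - /var/log/pods/*/otel-collector/*.log
    start_at: end
    include_file_path: true
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [otlp]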
See Kubernetes Logging for complete setup instructions.
From Docker Containers
For standalone Docker environments, the OpenTelemetry Collector can collect logs from every container by mounting Docker's log directory. If your apps write to stdout/stderr (Docker's default logging), no additional application-level log configuration is needed; a minimal receiver sketch follows the list below.
See Collect Docker Container Logs for complete setup instructions including:
- Running the OTel Collector as a Docker container
- Configuring the filelog receiver for /var/lib/docker/containers/*/*.log
- Parsing container metadata
- Filtering specific containers
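As a minimal sketch, assuming Docker's default json-file logging driver and the log directory mounted read-only into the Collector container:
receivers:
  filelog:
    include:
      - /var/lib/docker/containers/*/*.log
    start_at: end
    include_file_path: true
    operators:
      # each line is a JSON object like {"log": "...", "stream": "stdout", "time": "..."}
      - type: json_parser
        parse_from: body
      - type: move
        from: attributes.log
        to: body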
From Application Logs
If you were using Datadog APM automatic log injection or dd-trace log forwarding, migrate to OpenTelemetry log collection.
Java: Use the OpenTelemetry Java Agent
The OTel Java agent can collect logs automatically from Logback and Log4j:
OTEL_LOGS_EXPORTER=otlp \
OTEL_EXPORTER_OTLP_ENDPOINT="https://ingest.<region>.signoz.cloud:443" \
OTEL_EXPORTER_OTLP_HEADERS="signoz-ingestion-key=<SIGNOZ_INGESTION_KEY>" \
OTEL_RESOURCE_ATTRIBUTES=service.name=<service_name> \
java -javaagent:/path/opentelemetry-javaagent.jar -jar <myapp>.jar
See Collecting Application Logs Using OTEL Java Agent for Logback/Log4j configuration options.
Python: Enable logs auto-instrumentation
OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true \
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingest.<region>.signoz.cloud:443 \
OTEL_EXPORTER_OTLP_HEADERS=signoz-ingestion-key=<SIGNOZ_INGESTION_KEY> \
opentelemetry-instrument --logs_exporter otlp python app.py
See Python Logs Auto-Instrumentation for details.
Node.js: Use Pino or Winston
For Node.js applications, use OpenTelemetry-compatible logging libraries:
- Pino: Install pino-opentelemetry-transport and configure the transport. See Node.js Pino Logs.
- Winston: Install @opentelemetry/winston-transport and add the transport. See Node.js Winston Logs.
Other languages and methods
For all available log collection methods, see the Send Logs to SigNoz guide.
From Datadog Agent (File Tailing)
Replace Datadog Agent file log collection with the filelog receiver.
Step 1: Add the filelog receiver
receivers:
  filelog:
    include:
      - /var/log/myapp/*.log
      - /var/log/syslog
    start_at: end
    include_file_path: true
    include_file_name: true
Step 2: Enable it in the logs pipeline
service:
  pipelines:
    logs:
      receivers: [otlp, filelog]
      processors: [batch]
      exporters: [otlp]
See Collect Logs from File for more configuration options.
From Syslog
If you were forwarding syslog to Datadog, configure the OTel Collector's syslog receiver:
receivers:
  syslog:
    tcp:
      listen_address: '0.0.0.0:54527'
    protocol: rfc3164
    location: UTC
    operators:
      - type: move
        from: attributes.message
        to: body
Enable it in your logs pipeline by modifying the otel-collector-config.yaml file:
service:
  ...
  pipelines:
    logs:
      receivers: [otlp, syslog]
      processors: [batch]
      exporters: [otlp]
See Collect Syslogs for detailed setup.
From Cloud Provider Logs
If you were using Datadog's cloud integrations for AWS, Azure, or GCP logs, SigNoz provides native integrations:
| Provider | Log Sources | Setup Guide |
|---|---|---|
| AWS | CloudWatch Logs, S3, Lambda | AWS Cloud Integrations |
| Azure | Azure Monitor, Event Hubs | Azure Monitoring |
| GCP | Cloud Logging, Pub/Sub | GCP Monitoring |
From Log API
If you were sending logs directly to Datadog's Log API, switch to SigNoz's HTTP endpoint.
Before (Datadog Log API):
curl -X POST "https://http-intake.logs.datadoghq.com/v1/input" \
-H "Content-Type: application/json" \
-H "DD-API-KEY: <DD_API_KEY>" \
-d '{"message": "Log message", "ddsource": "myapp", "service": "my-service"}'
After (HTTP to SigNoz):
curl --location 'https://ingest.<REGION>.signoz.cloud:443/logs/json' \
--header 'Content-Type: application/json' \
--header 'signoz-ingestion-key: <SIGNOZ_INGESTION_KEY>' \
--data '[
{
"severity_text": "info",
"severity_number": 9,
"attributes": {
"source": "myapp"
},
"resources": {
"service.name": "my-service"
},
"message": "Log message"
}
]'
Replace <REGION> with your SigNoz Cloud region (us, eu, or in) and <SIGNOZ_INGESTION_KEY> with your ingestion key from Settings → Ingestion Settings.
See Send Logs via HTTP for detailed setup.
Step 4: Migrate Log Parsing
If you had Datadog log pipelines with parsing rules, migrate them to SigNoz.
Option 1: Log Pipelines (Recommended)
SigNoz includes a UI-based Log Pipelines feature that lets you parse and transform logs without modifying collector configuration:
- No collector restarts required
- Visual interface for building parsers
- Test patterns against sample logs before applying
Log Pipelines support multiple processors:
| Processor | Use Case |
|---|---|
| Regex | Extract fields using regular expressions |
| Grok | Use predefined patterns for common formats |
| JSON Parser | Parse JSON log bodies into attributes |
| Timestamp Parser | Extract and normalize timestamps |
| Severity Parser | Map log levels to standard severity |
| Add/Remove/Move/Copy | Transform log attributes |
See Log Pipelines Processors for detailed documentation.
Option 2: Collector-Level Parsing
You can also configure parsing in the OpenTelemetry Collector using operators in the filelog receiver.
Example: Parsing Apache access logs
Datadog Grok pattern:
%{IP:client_ip} - %{USERNAME:remote_user} \[%{HTTPDATE:time_local}\] "%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:http_version}" %{NUMBER:status} %{NUMBER:body_bytes_sent}
OTel Collector equivalent:
receivers:
  filelog:
    include:
      - /var/log/apache2/access.log
    operators:
      - type: regex_parser
        regex: '^(?P<client_ip>[^ ]*) - (?P<remote_user>[^ ]*) \[(?P<time_local>[^\]]*)\] "(?P<method>[^ ]*) (?P<request>[^ ]*) HTTP/(?P<http_version>[^"]*)" (?P<status>[^ ]*) (?P<body_bytes_sent>[^ ]*)'
        timestamp:
          parse_from: attributes.time_local
          layout: '%d/%b/%Y:%H:%M:%S %z'
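Similarly, a Datadog JSON remapper maps roughly to the json_parser operator. A minimal sketch for a service that writes one JSON object per line (the file path and the level field name are illustrative):
receivers:
  filelog:
    include:
      - /var/log/myapp/app.json.log   # illustrative path
    operators:
      - type: json_parser
        parse_from: body
        severity:
          parse_from: attributes.level   # assumes the JSON has a "level" field
      # a timestamp block can be added to the parser as in the Apache example above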
Validate
Verify logs are flowing correctly by comparing against your source inventory.
Check Logs Are Arriving
- In SigNoz, navigate to Logs in the left sidebar.
- Use the Logs Explorer to browse recent logs.
- Verify logs from each source in your inventory appear.
Verify Log Attributes
- Click on a log entry to expand it.
- Check that parsed fields (timestamp, severity, service name) are correct.
- Verify file paths or container names match your sources.
Troubleshooting
Logs not appearing in SigNoz
- Check Collector status: Verify the OpenTelemetry Collector is running and check its logs for errors.
- Verify file permissions: Ensure the Collector has read access to log files.
- Check include paths: Confirm the include patterns in the filelog receiver match your log file paths.
- Test connectivity: Verify the Collector can reach ingest.<region>.signoz.cloud:443.
Logs appear but are unparsed
If logs show as raw text without parsed fields:
- Use Log Pipelines: Configure parsing in SigNoz UI instead of at the collector level.
- Verify operator order: Parsers must be in the correct sequence.
- Check regex patterns: Test your regex against sample log lines using a regex tester.
Missing logs from some sources
- Check start_at setting: Use start_at: end for new logs only, or start_at: beginning to include existing logs.
- Verify exclude patterns: Ensure you're not accidentally excluding wanted log files.
- Check file rotation: The Collector handles rotation, but verify log files aren't being truncated.
Duplicate logs
If you see the same log multiple times:
- Check for multiple collectors: Ensure only one Collector instance is reading each log file.
- Disable Datadog Agent: Make sure the Datadog Agent isn't still running alongside the OTel Collector.
- Review include patterns: Overlapping patterns can cause duplicates.
Missing trace correlation
If logs don't link to traces:
- Verify trace context injection: Ensure your application is injecting trace/span IDs into logs.
- Check attribute names: SigNoz expects trace_id and span_id attributes.
- Review Log Pipelines: Add processors to extract/rename trace context fields, as sketched below.
Read more about trace correlation here.
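If your logging framework already writes trace and span IDs into each log line, a trace_parser operator at the collector level (or the equivalent Log Pipelines processor) can promote them to proper trace context. A minimal sketch, assuming an earlier regex_parser or json_parser has left trace_id and span_id as attributes:
receivers:
  filelog:
    include:
      - /var/log/myapp/*.log
    operators:
      # ... a regex_parser or json_parser producing attributes.trace_id / attributes.span_id ...
      - type: trace_parser
        trace_id:
          parse_from: attributes.trace_id
        span_id:
          parse_from: attributes.span_id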
Next Steps
Once your logs are flowing to SigNoz:
- Create dashboards with log-based panels
- Set up log-based alerts for error patterns
- Configure log pipelines for advanced processing
- Correlate logs with traces using trace IDs
- Explore Logs Explorer features for analysis