Overview
By default, the filelog receiver treats each newline as the end of a log entry. This breaks stack traces, multi-line error messages, and other logs that span multiple lines into separate entries.
Here is an example of multiline logs:
2024-06-20 18:58:05,898 ERROR:Exception on main handler
Traceback (most recent call last):
  File "python-logger.py", line 9, in make_log
    return area[10]
IndexError: string index out of range
2024-06-20 18:58:05,898 DEBUG:Query Started
The example above contains two log entries spread over multiple lines. Because each newline is treated as the end of an entry by default, the collector splits them into six separate entries, one per line.
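To make the failure mode concrete, here is a small Python sketch (not collector code) that applies the same one-entry-per-newline rule to the sample above:

```python
# Sketch: how newline-delimited ingestion fragments the two logical
# entries from the example into six separate records.
raw = """2024-06-20 18:58:05,898 ERROR:Exception on main handler
Traceback (most recent call last):
  File "python-logger.py", line 9, in make_log
    return area[10]
IndexError: string index out of range
2024-06-20 18:58:05,898 DEBUG:Query Started"""

entries = raw.splitlines()  # one entry per line: the default behavior
print(len(entries))  # 6 records instead of the 2 logical entries
```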

Parse Multiline Logs at Receiver
The filelog receiver has a built-in multiline configuration block. This is the recommended approach because it recombines logs before they enter the pipeline.
Step 1: Identify the start or end pattern
Look at your log format and find a regex that matches the beginning (or end) of each log entry.
2024-06-20 18:58:05,898 ERROR:Exception on main handler
Traceback (most recent call last):
  File "python-logger.py", line 9, in make_log
    return area[10]
IndexError: string index out of range
2024-06-20 18:58:05,898 DEBUG:Query Started
For the above log lines, each new entry starts with a date. The line_start_pattern would be:
^\d{4}-\d{2}-\d{2}
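As a quick sanity check, the pattern can be tested against the sample lines in plain Python (a sketch for validation only, not part of the collector configuration):

```python
# Sketch: verify the proposed line_start_pattern matches only the
# first line of each log entry in the sample above.
import re

LINE_START = re.compile(r"^\d{4}-\d{2}-\d{2}")

lines = [
    "2024-06-20 18:58:05,898 ERROR:Exception on main handler",
    "Traceback (most recent call last):",
    '  File "python-logger.py", line 9, in make_log',
    "    return area[10]",
    "IndexError: string index out of range",
    "2024-06-20 18:58:05,898 DEBUG:Query Started",
]

starts = [bool(LINE_START.match(line)) for line in lines]
print(starts)  # only the first and last lines begin a new entry
```

If the list shows `True` for any continuation line (or `False` for a genuine first line), adjust the regex before deploying it.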
Step 2: Add the multiline configuration to your receiver
receivers:
  filelog:
    include:
      - /var/log/example/multiline.log
    multiline:
      line_start_pattern: ^\d{4}-\d{2}-\d{2}
    force_flush_period: 5s
The multiline block must contain exactly one of line_start_pattern or line_end_pattern. You can also set omit_pattern: true to exclude the matched pattern from the combined entry.
The filelog receiver waits for more content before flushing the last multiline entry. If your last log line never appears in SigNoz, set force_flush_period at the receiver level (default is 500ms). Example: force_flush_period: 5s.
Step 3: Enable the receiver in your logs pipeline
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: []
      exporters: [otlp]
Using multiline in the SigNoz k8s-infra Helm chart
If you are running SigNoz on Kubernetes with the k8s-infra Helm chart, use the presets.logsCollection.multiline key in your Helm values instead of editing the raw receiver config:
presets:
  logsCollection:
    multiline:
      line_start_pattern: '^\d{4}-\d{2}-\d{2}'
The preset propagates this directly to the filelog receiver's multiline block on the otelAgent DaemonSet. You do not need to touch otelAgent.config directly.
Use Recombine Operator to Combine Multiline Logs
If you cannot use the receiver-level multiline configuration, you can use the recombine operator inside a logstransform processor.
The logstransform processor has development stability and is not included in the official otelcol-contrib distribution or the SigNoz k8s-infra Helm chart. Using it on a standard k8s-infra installation results in a collector startup error (unknown type: logstransform). If you are on k8s-infra, use the presets.logsCollection.multiline approach above instead.
Step 1: Define the recombine processor
Using the same example logs:
2024-06-20 18:58:05,898 ERROR:Exception on main handler
Traceback (most recent call last):
  File "python-logger.py", line 9, in make_log
    return area[10]
IndexError: string index out of range
2024-06-20 18:58:05,898 DEBUG:Query Started
Since each new log line starts with a date, configure is_first_entry to match that pattern:
processors:
  logstransform/multiline:
    operators:
      - type: recombine
        combine_field: body
        is_first_entry: body matches '^\d{4}-\d{2}-\d{2}'
        source_identifier: attributes["log.file.path"]
This matches the first line of each multiline entry and combines the body field for each unique log.file.path value.
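The grouping behavior can be sketched in a few lines of Python. This is a simplified illustration of the recombine logic, not the operator's actual implementation (which also handles source_identifier, timeouts, and batching):

```python
# Sketch: toy version of recombine's grouping logic, assuming
# is_first_entry matches a leading date as in the example above.
import re

IS_FIRST = re.compile(r"^\d{4}-\d{2}-\d{2}")

def recombine(lines):
    """Merge continuation lines into the entry that started them."""
    entries = []
    for line in lines:
        if IS_FIRST.match(line) or not entries:
            entries.append(line)        # line begins a new entry
        else:
            entries[-1] += "\n" + line  # continuation: append to body
    return entries

sample = [
    "2024-06-20 18:58:05,898 ERROR:Exception on main handler",
    "Traceback (most recent call last):",
    "IndexError: string index out of range",
    "2024-06-20 18:58:05,898 DEBUG:Query Started",
]
combined = recombine(sample)
print(len(combined))  # 2 logical entries
```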
Step 2: Add the processor to your logs pipeline
The processor must be enabled in the service.pipelines.logs.processors list:
service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [logstransform/multiline]
      exporters: [otlp]
After deployment, multiline logs appear as single entries in SigNoz.

Validate
After applying either approach, verify the result in SigNoz:
- Navigate to the Logs tab.
- Search for logs from the service you configured.
- Confirm that stack traces and multi-line entries appear as single log entries instead of fragmented lines.
Troubleshooting
All logs combined into one entry — Check that is_first_entry or line_start_pattern matches the correct pattern. If the regex is too broad or too narrow, entries will not split correctly.
logstransform processor causes collector startup error in k8s-infra — The logstransform processor is not included in the official otelcol-contrib image used by the k8s-infra chart. You will see unknown type: logstransform in the collector logs. Use presets.logsCollection.multiline in your Helm values instead (see the receiver section above).
Processor defined but not applied — Ensure the processor is listed in service.pipelines.logs.processors. Defining a processor alone does not enable it.
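To see why pattern breadth matters for the first item above, here is a toy Python comparison (a simplified grouping loop, not the collector's internals) of a correct, an overly broad, and an overly narrow start pattern:

```python
# Sketch: how the breadth of the start pattern changes entry splitting.
import re

lines = [
    "2024-06-20 18:58:05,898 ERROR:Exception on main handler",
    "Traceback (most recent call last):",
    "2024-06-20 18:58:05,898 DEBUG:Query Started",
]

def entry_count(pattern):
    """Count how many entries a given start pattern would produce."""
    first = re.compile(pattern)
    count = 0
    for line in lines:
        if first.match(line) or count == 0:
            count += 1  # this line starts a new entry
    return count

print(entry_count(r"^\d{4}-\d{2}-\d{2}"))  # 2: correct split
print(entry_count(r"^."))                  # 3: too broad, every line splits
print(entry_count(r"^NEVER"))              # 1: too narrow, one giant entry
```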
Next steps
Now that multiline logs are recombined properly, explore related log management features:
- Logs management overview - Query, filter, and analyze your logs in SigNoz
- Log-based alerts - Create alerts based on log patterns
- Collect Kubernetes pod logs - Collect logs from Kubernetes workloads
Get Help
If you need help with the steps in this topic, please reach out to us on SigNoz Community Slack.
If you are a SigNoz Cloud user, please use in product chat support located at the bottom right corner of your SigNoz instance or contact us at cloud-support@signoz.io.