If you use Logstash to collect logs in your stack, you can forward them to SigNoz via the OpenTelemetry Collector using the TCP protocol.
Prerequisites
- A running Logstash instance
- An instance of SigNoz (either Cloud or Self-Hosted)
Send Logs to SigNoz
Step 1: Add OpenTelemetry Collector Binary
Add the OpenTelemetry Collector binary to your VM by following the OTel binary setup guide.
Step 2: Configure the tcplog Receiver
Merge the following into your existing config.yaml. If your config already has a processors: block, add the new entries under it rather than replacing it. For more on receivers, see the OTel Collector configuration guide.
receivers:
  tcplog/logstash:
    # max_log_size: 1MiB # default; increase if your logs exceed this size
    listen_address: '0.0.0.0:2256'
    operators:
      # Parse the JSON that Logstash sends via codec => json_lines
      # Non-JSON logs pass through unparsed (on_error: send_quiet)
      - type: json_parser
        timestamp:
          parse_from: attributes["@timestamp"]
          layout_type: gotime
          layout: '2006-01-02T15:04:05.999Z07:00'
        on_error: send_quiet
      # Move Logstash's `message` field to the OTel log body
      - type: move
        from: attributes.message
        to: body
processors:
  batch:
  resourcedetection:
    detectors: [system, env]
    timeout: 5s
  transform:
    log_statements:
      - context: log
        statements:
          # Promote service.name to a resource attribute for consistent querying
          - set(resource.attributes["service.name"], log.attributes["service.name"]) where log.attributes["service.name"] != nil
          - delete_key(log.attributes, "service.name") where log.attributes["service.name"] != nil
exporters:
  otlp:
    endpoint: "https://ingest.<region>.signoz.cloud:443"
    headers:
      signoz-ingestion-key: "<your-ingestion-key>"
service:
  pipelines:
    logs:
      receivers: [tcplog/logstash]
      processors: [resourcedetection, transform, batch]
      exporters: [otlp]
Verify these values:
- `<region>`: Your SigNoz Cloud region.
- `<your-ingestion-key>`: Your SigNoz ingestion key.
- Port `2256` is used here, but you can use any available port.
- `on_error: send_quiet` lets non-JSON logs pass through unparsed rather than dropping them, which is useful when Logstash emits plain-text startup messages alongside JSON logs.
- The `system` detector populates `host.name` (OS hostname, DNS-resolved first) and `os.type`. If a configured detector is unavailable, the collector will refuse to start. Full detector options: Resource Attributes for Logs.
- For more configuration options for the `tcplog` receiver, see the tcplog receiver docs.
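To build intuition for what the operator chain above does to each event, here is a rough Python sketch (an illustration only, not collector code, and the event fields are assumptions matching the config): parse the json_lines payload into attributes, lift `@timestamp` into the record timestamp, move `message` into the body, and pass non-JSON lines through untouched, mirroring `on_error: send_quiet`.

```python
import json
from datetime import datetime

def process_line(line: str) -> dict:
    """Sketch of the tcplog operator chain applied to one json_lines event."""
    record = {"body": line, "attributes": {}, "timestamp": None}
    try:
        attrs = json.loads(line)
    except json.JSONDecodeError:
        # on_error: send_quiet -> non-JSON lines pass through unparsed
        return record
    # json_parser: parsed JSON fields become log attributes
    record["attributes"] = attrs
    # timestamp block: parse @timestamp (gotime layout 2006-01-02T15:04:05.999Z07:00)
    ts = attrs.get("@timestamp")
    if ts:
        record["timestamp"] = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    # move operator: attributes.message -> body
    if "message" in record["attributes"]:
        record["body"] = record["attributes"].pop("message")
    return record

event = '{"@timestamp": "2024-05-01T12:00:00.123Z", "message": "user login", "service.name": "auth"}'
rec = process_line(event)
print(rec["body"])  # user login
print(rec["attributes"])
```

After this, the transform processor would promote the remaining `service.name` attribute onto the resource.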
Step 3: Update Logstash Configuration
Add the following output block to your Logstash configuration:
output {
  tcp {
    codec => json_lines # Ensures logs are sent in JSON format line-by-line
    host => "localhost"
    port => 2256
  }
}
- This config assumes Logstash is running on the same host as the Collector. Set the `host` value if Logstash is running elsewhere.
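You can smoke-test the receiver without Logstash by pushing one newline-delimited JSON event at the same port yourself. The sketch below stands up a throwaway local listener in place of the collector so it runs anywhere; against a real collector you would call `send_event("localhost", 2256, {...})` instead. The event fields are assumptions matching the config above.

```python
import json
import socket
import threading

def send_event(host: str, port: int, event: dict) -> None:
    """Send one event exactly as Logstash's codec => json_lines would:
    a JSON document followed by a newline, over a plain TCP connection."""
    payload = json.dumps(event).encode() + b"\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(payload)

# Stand-in for the collector's tcplog listener, so this sketch is self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))  # any free port
server.listen(1)
port = server.getsockname()[1]

received = []
def accept_one():
    conn, _ = server.accept()
    with conn:
        received.append(conn.recv(4096))

t = threading.Thread(target=accept_one)
t.start()
send_event("127.0.0.1", port, {"@timestamp": "2024-05-01T12:00:00.123Z",
                               "message": "smoke test"})
t.join()
server.close()
print(received[0])
```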
Step 4: Start the Services
Restart the OpenTelemetry Collector and Logstash:
sudo systemctl restart otelcol-contrib
sudo systemctl restart logstash
Change the service name if you installed the Collector under a different name (e.g., otelcol).
Step 1: Create the OTel Collector Config
Create a file named otel-collector-config.yaml with the tcplog receiver:
receivers:
  tcplog/logstash:
    # max_log_size: 1MiB # default; increase if your logs exceed this size
    listen_address: '0.0.0.0:2256'
    operators:
      # Parse the JSON that Logstash sends via codec => json_lines
      # Non-JSON logs pass through unparsed (on_error: send_quiet)
      - type: json_parser
        timestamp:
          parse_from: attributes["@timestamp"]
          layout_type: gotime
          layout: '2006-01-02T15:04:05.999Z07:00'
        on_error: send_quiet
      # Move Logstash's `message` field to the OTel log body
      - type: move
        from: attributes.message
        to: body
processors:
  batch:
  resourcedetection:
    detectors: [system, env]
    timeout: 5s
  transform:
    log_statements:
      - context: log
        statements:
          # Promote service.name to a resource attribute for consistent querying
          - set(resource.attributes["service.name"], log.attributes["service.name"]) where log.attributes["service.name"] != nil
          - delete_key(log.attributes, "service.name") where log.attributes["service.name"] != nil
exporters:
  otlp:
    endpoint: "https://ingest.<region>.signoz.cloud:443"
    headers:
      signoz-ingestion-key: "<your-ingestion-key>"
service:
  pipelines:
    logs:
      receivers: [tcplog/logstash]
      processors: [resourcedetection, transform, batch]
      exporters: [otlp]
Verify these values:
- `<region>`: Your SigNoz Cloud region.
- `<your-ingestion-key>`: Your SigNoz ingestion key.
- `on_error: send_quiet` lets non-JSON logs pass through unparsed rather than dropping them, which is useful when Logstash emits plain-text startup messages alongside JSON logs.
- The `system` detector populates `host.name` (OS hostname, DNS-resolved first) and `os.type`. In Docker, `host.name` is the container's hostname, a random ID by default. To use your machine's hostname instead, add `environment: [OTEL_RESOURCE_ATTRIBUTES=host.name=<your-machine-name>]` to the `otel-collector` service. If a configured detector is unavailable, the collector will refuse to start. Full detector options: Resource Attributes for Logs.
- For more configuration options for the `tcplog` receiver, see the tcplog receiver docs.
Step 2: Create the Logstash Config
Create a logstash.conf file with an input to receive logs and an output to forward them to the OTel Collector:
input {
  gelf {
    port => 12201
  }
}

output {
  tcp {
    codec => json_lines
    host => "otel-collector"
    port => 2256
  }
}
- The `gelf` input above listens on port `12201` for incoming logs.
- `otel-collector` is the Docker service name defined in the next step. Docker networking resolves this to the correct container automatically.
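If you want to exercise the `gelf` input without wiring up an application container, note that a minimal GELF 1.1 message is just a JSON document sent over UDP, with `version`, `host`, and `short_message` as the required fields. The sketch below sends one uncompressed datagram to a throwaway local UDP socket so it is runnable on its own; swap in your Logstash host and port `12201` to test the real input. (Docker's gelf logging driver may also compress or chunk payloads; the input accepts plain uncompressed datagrams like this one.)

```python
import json
import socket

def send_gelf(host: str, port: int, message: str) -> bytes:
    """Send one uncompressed GELF 1.1 datagram and return the raw payload."""
    payload = json.dumps({
        "version": "1.1",              # required by the GELF spec
        "host": socket.gethostname(),  # required: source of the message
        "short_message": message,      # required: the log line itself
    }).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return payload

# Stand-in UDP listener so the sketch runs without Logstash;
# use ("localhost", 12201) against the real gelf input instead.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 0))
port = listener.getsockname()[1]

sent = send_gelf("127.0.0.1", port, "hello from GELF")
data, _ = listener.recvfrom(4096)
listener.close()
print(json.loads(data)["short_message"])  # hello from GELF
```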
Step 3: Create/Update the Docker Compose file
x-logging: &default-logging
  driver: "gelf"
  options:
    gelf-address: "udp://localhost:12201"

services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.148.0
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
  logstash:
    image: docker.elastic.co/logstash/logstash:8.17.4
    ports:
      - "12201:12201/udp" # GELF input port
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - otel-collector
  # Example application sending logs to Logstash
  example-app:
    image: busybox
    command: sh -c "while true; do echo 'Hello from example app!'; sleep 5; done"
    logging: *default-logging
    depends_on:
      - logstash
Do not apply the `*default-logging` anchor to the `logstash` or `otel-collector` services if they share the same Compose file. Doing so will cause an infinite logging loop where Logstash attempts to ingest its own logs.
Step 4: Start the Services
docker compose up -d
Validate
Once the services are running:
- Open SigNoz and navigate to Logs > Logs Explorer.
- Filter by `resource.host.name` to confirm logs are arriving. The `system` detector populates this attribute on every log. If your Logstash pipeline sets a `service.name` field on events, filter by `service.name = '<service_name>'` to narrow by service.
- Click on a log entry and verify it contains:
  - `body`: the `message` value from the Logstash event
  - `attributes`: remaining Logstash JSON fields (varies by your Logstash pipeline config)
  - `resource.host.name`: the OTel Collector's OS hostname
  - `resource.service.name`: promoted from the Logstash event if your pipeline sets it

Transform Logs (Optional)
The config above parses Logstash's JSON output into structured attributes and populates host.name via resource detection. To enrich logs further, see SigNoz Log Pipelines.
Troubleshooting
Logs are not appearing in SigNoz
- Verify that the OpenTelemetry Collector is running and the `tcplog/logstash` receiver is enabled in both the `receivers` section and the `service.pipelines.logs` pipeline.
- Confirm that Logstash is sending logs to the correct host and port (e.g., `localhost:2256`).
- Check the OpenTelemetry Collector logs for errors related to the `tcplog` receiver.
- After fixing, restart both services and check Logs > Logs Explorer in SigNoz to confirm logs appear.
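A quick way to check the "correct host and port" item is a plain TCP connect. The helper below (a hypothetical name, not a SigNoz tool) returns whether anything is listening; the demo runs it against a throwaway local listener so it is self-contained, but `port_open("localhost", 2256)` is the real check against your collector.

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener (stand-in for the collector).
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
print(port_open("127.0.0.1", port))  # True: something is listening
srv.close()
print(port_open("127.0.0.1", port))  # False: nothing listening anymore
```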
Port conflict
- If port `2256` is already in use, choose a different port. Update the port in both the Collector `config.yaml` and the Logstash output configuration.
- After updating, restart both services and verify logs appear in Logs > Logs Explorer.
Logs appear but are not parsed correctly
- Ensure the Logstash `output` block uses `codec => json_lines` so logs are sent as newline-delimited JSON.
- If logs arrive with raw JSON in `body` and empty attributes, the `json_parser` operator may be failing silently. Check the Collector logs; note that the `on_error: send_quiet` option suppresses parse errors so non-JSON logs pass through unparsed.
- Use Log Pipelines in SigNoz to parse and transform log fields further.
Next Steps
Get Help
If you need help with the steps in this topic, please reach out to us on SigNoz Community Slack.
If you are a SigNoz Cloud user, please use in product chat support located at the bottom right corner of your SigNoz instance or contact us at cloud-support@signoz.io.