Overview
Vector collects, transforms, and ships logs from various sources (syslog, files, Docker, Kubernetes) to SigNoz. SigNoz Cloud uses Vector's http sink; self-hosted SigNoz uses the opentelemetry sink.
Prerequisites
- Vector installed and running on your system
- An instance of SigNoz (Cloud or Self-Hosted)
Configure Vector
To send logs from Vector to SigNoz Cloud, you need to use the http sink.
VM
Step 1: Install Vector
Install Vector on your VM or bare metal server. See the official Vector installation guide for your platform.
Step 2: Configure Vector
Open your Vector configuration file (usually /etc/vector/vector.yaml):
sudo nano /etc/vector/vector.yaml
Add the following configuration to collect logs from system log files and send them to SigNoz:
sources:
  system_logs:
    type: file
    include:
      - /var/log/syslog
      - /var/log/messages
      - /var/log/*.log
    read_from: end

sinks:
  signoz_sink:
    type: http
    inputs:
      - system_logs
    uri: "https://ingest.<region>.signoz.cloud/logs/vector"
    encoding:
      codec: json
    request:
      headers:
        signoz-ingestion-key: "<your-ingestion-key>"
Verify these values:
- <region>: Your SigNoz Cloud region.
- <your-ingestion-key>: Your SigNoz ingestion key.
The http sink sends log events as JSON to SigNoz Cloud's ingestion endpoint. The signoz-ingestion-key header authenticates each request. Vector batches events before sending.
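To sanity-check the endpoint and ingestion key outside of Vector, the same request shape can be reproduced in a few lines of Python. This is an illustrative sketch using only the standard library, not how Vector itself sends data; the endpoint and key placeholders are the same ones used in the config above:

```python
import json
import urllib.request

# A hypothetical one-off test event; Vector sends batches of such events.
event = {"timestamp": "2024-01-01T00:00:00Z", "message": "test log", "host": "myhost"}

req = urllib.request.Request(
    "https://ingest.<region>.signoz.cloud/logs/vector",  # placeholder region
    data=json.dumps([event]).encode(),  # events are batched into one JSON body
    headers={
        "signoz-ingestion-key": "<your-ingestion-key>",  # placeholder key
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment (with real values) to actually send
print(req.get_method(), req.full_url)
```

With real values substituted, a 2xx response confirms the endpoint and key are valid before you touch the Vector config.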
Step 3: Restart Vector
sudo systemctl restart vector
Verify Vector is running:
sudo systemctl status vector
Kubernetes
Step 1: Install the Vector Helm chart
Deploy Vector using the official Vector Helm chart. Set role: Agent to run Vector as a DaemonSet so every node collects logs. Define the full pipeline in customConfig.
Add the Vector Helm repo and install the chart:
helm repo add vector https://helm.vector.dev
helm repo update
Step 2: Configure Vector
Create a values.yaml file with customConfig for the Vector configuration:
role: Agent
customConfig:
  data_dir: /vector-data-dir
  sources:
    kubernetes_logs:
      type: kubernetes_logs
  sinks:
    signoz_sink:
      type: http
      inputs:
        - kubernetes_logs
      uri: "https://ingest.<region>.signoz.cloud/logs/vector"
      encoding:
        codec: json
      request:
        headers:
          signoz-ingestion-key: "<your-ingestion-key>"
service:
  enabled: false
serviceHeadless:
  enabled: false
Verify these values:
- <region>: Your SigNoz Cloud region.
- <your-ingestion-key>: Your SigNoz ingestion key.
Step 3: Apply Configuration
helm install vector vector/vector \
--namespace vector \
--create-namespace \
--values values.yaml
Docker
Step 1: Create Vector Configuration
Create a vector.yaml file:
sources:
  docker_logs:
    type: docker_logs

sinks:
  signoz_sink:
    type: http
    inputs:
      - docker_logs
    uri: "https://ingest.<region>.signoz.cloud/logs/vector"
    encoding:
      codec: json
    request:
      headers:
        signoz-ingestion-key: "<your-ingestion-key>"
Verify these values:
- <region>: Your SigNoz Cloud region.
- <your-ingestion-key>: Your SigNoz ingestion key.
Step 2: Run Vector Container
docker run -d \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/docker:/var/lib/docker:ro \
-v $(pwd)/vector.yaml:/etc/vector/vector.yaml \
--name vector \
timberio/vector:latest
Self-Hosted
The opentelemetry sink used here is in beta. The configuration schema may change before it reaches stable.
This documentation was tested with Vector version 0.54.0.
Vector's opentelemetry sink requires events to be pre-structured as OTLP — raw log events from sources like file or docker_logs are not. The remap transform below builds the required OTLP structure and maps source metadata to OTel semantic conventions.
VM
Step 1: Install Vector
Install Vector on your VM or bare metal server. See the official Vector installation guide for your platform.
Step 2: Configure Vector
Open your Vector configuration file (usually /etc/vector/vector.yaml):
sudo nano /etc/vector/vector.yaml
Add the following configuration:
sources:
  system_logs:
    type: file
    include:
      - /var/log/syslog
      - /var/log/messages
      - /var/log/*.log
    read_from: end

transforms:
  to_otlp:
    type: remap
    inputs:
      - system_logs
    source: |
      msg = del(.message)
      ts = del(.timestamp)
      . = {
        "resourceLogs": [{
          "resource": {
            "attributes": [
              {"key": "host.name", "value": {"stringValue": del(.host) || ""}},
              {"key": "log.file.path", "value": {"stringValue": del(.file) || ""}}
            ]
          },
          "scopeLogs": [{
            "logRecords": [{
              "timeUnixNano": to_string(to_unix_timestamp!(ts, unit: "nanoseconds")),
              "body": {"stringValue": msg},
              "attributes": [
                {"key": "source_type", "value": {"stringValue": del(.source_type) || ""}}
              ]
            }]
          }]
        }]
      }

sinks:
  signoz_sink:
    type: opentelemetry
    inputs:
      - to_otlp
    protocol:
      type: http
      uri: http://<signoz-host>:4318/v1/logs
      method: post
      encoding:
        codec: otlp
Replace <signoz-host> with the hostname or IP of your SigNoz instance.
The opentelemetry sink sends logs over OTLP HTTP to your SigNoz collector on port 4318. The remap transform above shapes each log event into the OTLP resourceLogs structure before the sink serializes it.
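To make the mapping concrete, here is a rough Python equivalent of the remap transform. This is illustrative only (Vector executes the VRL program above, not this code), and the sample event fields are assumptions about what the file source emits:

```python
from datetime import datetime, timezone

def to_otlp(event: dict) -> dict:
    """Sketch: wrap one Vector file-source event into the OTLP
    resourceLogs structure that the opentelemetry sink expects."""
    msg = event.pop("message")
    ts = event.pop("timestamp")
    # Nanoseconds since the Unix epoch, serialized as a string.
    nanos = int(ts.timestamp()) * 1_000_000_000 + ts.microsecond * 1_000
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "host.name", "value": {"stringValue": event.pop("host", "")}},
                {"key": "log.file.path", "value": {"stringValue": event.pop("file", "")}},
            ]},
            "scopeLogs": [{"logRecords": [{
                "timeUnixNano": str(nanos),
                "body": {"stringValue": msg},
                "attributes": [
                    {"key": "source_type", "value": {"stringValue": event.pop("source_type", "")}},
                ],
            }]}],
        }]
    }

event = {
    "timestamp": datetime(2024, 1, 1, tzinfo=timezone.utc),
    "message": "kernel: boot",
    "host": "web-1",
    "file": "/var/log/syslog",
    "source_type": "file",
}
otlp = to_otlp(event)
print(otlp["resourceLogs"][0]["scopeLogs"][0]["logRecords"][0]["timeUnixNano"])
```

The key point is the `resourceLogs` top-level field: without it, the sink cannot serialize the event (see the Troubleshooting section).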
Step 3: Restart Vector
sudo systemctl restart vector
Verify Vector is running:
sudo systemctl status vector
Kubernetes
Step 1: Install the Vector Helm chart
Deploy Vector using the official Vector Helm chart. Set role: Agent to run Vector as a DaemonSet so every node collects logs. Define the full pipeline in customConfig.
Add the Vector Helm repo and install the chart:
helm repo add vector https://helm.vector.dev
helm repo update
Step 2: Configure Vector
Create a values.yaml file with customConfig for the Vector configuration:
role: Agent
customConfig:
data_dir: /vector-data-dir
api:
enabled: true
address: 127.0.0.1:8686
playground: false
sources:
kubernetes_logs:
type: kubernetes_logs
transforms:
to_otlp:
type: remap
inputs:
- kubernetes_logs
source: |
msg = del(.message)
ts = del(.timestamp)
. = {
"resourceLogs": [{
"resource": {
"attributes": [
{"key": "k8s.pod.name", "value": {"stringValue": del(.kubernetes.pod_name) || ""}},
{"key": "k8s.namespace.name", "value": {"stringValue": del(.kubernetes.pod_namespace) || ""}},
{"key": "k8s.container.name", "value": {"stringValue": del(.kubernetes.container_name) || ""}},
{"key": "k8s.node.name", "value": {"stringValue": del(.kubernetes.pod_node_name) || ""}}
]
},
"scopeLogs": [{
"logRecords": [{
"timeUnixNano": to_string(to_unix_timestamp!(ts, unit: "nanoseconds")),
"body": {"stringValue": msg},
"attributes": [
{"key": "stream", "value": {"stringValue": del(.stream) || ""}},
{"key": "source_type", "value": {"stringValue": del(.source_type) || ""}}
]
}]
}]
}]
}
sinks:
signoz_sink:
type: opentelemetry
inputs:
- to_otlp
protocol:
type: http
uri: http://<signoz-host>:4318/v1/logs
method: post
encoding:
codec: otlp
service:
enabled: false
serviceHeadless:
enabled: false
Replace <signoz-host> with the hostname or IP of your SigNoz instance.
Step 3: Apply Configuration
helm install vector vector/vector \
--namespace vector \
--create-namespace \
--values values.yaml
Docker
Step 1: Create Vector Configuration
Create a vector.yaml file:
sources:
  docker_logs:
    type: docker_logs

transforms:
  to_otlp:
    type: remap
    inputs:
      - docker_logs
    source: |
      msg = del(.message)
      ts = del(.timestamp)
      . = {
        "resourceLogs": [{
          "resource": {
            "attributes": [
              {"key": "container.name", "value": {"stringValue": del(.container_name) || ""}},
              {"key": "container.image.name", "value": {"stringValue": del(.image) || ""}},
              {"key": "container.id", "value": {"stringValue": del(.container_id) || ""}}
            ]
          },
          "scopeLogs": [{
            "logRecords": [{
              "timeUnixNano": to_string(to_unix_timestamp!(ts, unit: "nanoseconds")),
              "body": {"stringValue": msg},
              "attributes": [
                {"key": "stream", "value": {"stringValue": del(.stream) || ""}},
                {"key": "source_type", "value": {"stringValue": del(.source_type) || ""}}
              ]
            }]
          }]
        }]
      }

sinks:
  signoz_sink:
    type: opentelemetry
    inputs:
      - to_otlp
    protocol:
      type: http
      uri: http://<signoz-host>:4318/v1/logs
      method: post
      encoding:
        codec: otlp
Replace <signoz-host> with the hostname or IP of your SigNoz instance.
Step 2: Run Vector Container
docker run -d \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/lib/docker:/var/lib/docker:ro \
-v $(pwd)/vector.yaml:/etc/vector/vector.yaml \
--name vector \
timberio/vector:0.54.0-debian
This documentation was tested with Vector 0.54.0-debian. Because the opentelemetry sink is in beta, other versions may work but are not actively tested.
Log Mapping for SigNoz
SigNoz expects logs in a specific JSON format. You can either transform logs at the Vector level or use SigNoz pipelines later.
Supported Fields
| Field | Type | Description |
|---|---|---|
| timestamp | int64 | Nanoseconds since Unix epoch (e.g., 1704061984975797000) |
| body | string | Log message (also accepts message) |
| trace_id | string | 32-character hex string |
| span_id | string | 16-character hex string |
| trace_flags | int | Integer (0 or 1 for sampled) |
| severity_text | string | Log level: TRACE, DEBUG, INFO, WARN, ERROR, FATAL |
| severity_number | int | OTel severity number: TRACE=1-4, DEBUG=5-8, INFO=9-12, WARN=13-16, ERROR=17-20, FATAL=21-24 |
| attributes | object | Custom attributes |
| resources | object | Resource attributes |
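The severity ranges can be collapsed into a small lookup. The sketch below is illustrative (names are ours, not part of SigNoz): it maps each level to the base of its four-value range, which is a safe default when you only have a textual log level:

```python
# Base OTel severity number for each level; each level owns a range of four
# (e.g., INFO spans 9-12), so the base value is a reasonable default.
SEVERITY_NUMBER = {"TRACE": 1, "DEBUG": 5, "INFO": 9, "WARN": 13, "ERROR": 17, "FATAL": 21}

def severity_number(severity_text: str) -> int:
    """Return the base OTel severity_number for a textual log level."""
    return SEVERITY_NUMBER[severity_text.upper()]

print(severity_number("info"))   # 9
print(severity_number("ERROR"))  # 17
```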
Example
{
  "timestamp": 1704061984975797000,
  "body": "Request processed successfully",
  "trace_id": "000000000000000018c51935df0b93b9",
  "span_id": "18c51935df0b93b9",
  "trace_flags": 0,
  "severity_text": "info",
  "severity_number": 9,
  "attributes": {
    "method": "GET",
    "path": "/api/users"
  },
  "resources": {
    "host": "myhost",
    "namespace": "prod"
  }
}
Note: Any fields not in this schema are automatically added to log attributes.
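If you generate this payload yourself, the timestamp must be an integer count of nanoseconds since the Unix epoch. A small standard-library Python sketch (the function name is illustrative):

```python
from datetime import datetime, timezone

def to_epoch_nanos(iso_ts: str) -> int:
    """Convert an ISO-8601 UTC timestamp to int64 nanoseconds since the Unix epoch."""
    dt = datetime.fromisoformat(iso_ts.replace("Z", "+00:00"))
    # Whole seconds plus the sub-second part, both scaled to nanoseconds.
    return int(dt.timestamp()) * 1_000_000_000 + dt.microsecond * 1_000

# The timestamp from the example record above, as a UTC wall-clock time:
print(to_epoch_nanos("2023-12-31T22:33:04.975797Z"))  # 1704061984975797000
```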
Self-Hosted
The Vector remap transform in the config above already maps fields to OTel format. Resource attributes (like host.name, k8s.pod.name) follow OTel semantic conventions.
Validate
After configuring Vector, navigate to Logs → Logs Explorer in SigNoz. You should see incoming log entries within a few minutes. Filter by source_type to confirm the logs are from Vector.

For more details on querying logs, see the Logs Explorer documentation.
Setup OpenTelemetry Collector (Optional)
What is the OpenTelemetry Collector?
Think of the OTel Collector as a middleman between your app and SigNoz. Instead of your application sending data directly to SigNoz, it sends everything to the Collector first, which then forwards it along.
Why use it?
- Cleaning up data — Filter out noisy traces you don't care about, or remove sensitive info before it leaves your servers.
- Keeping your app lightweight — Let the Collector handle batching, retries, and compression instead of your application code.
- Adding context automatically — The Collector can tag your data with useful info like which Kubernetes pod or cloud region it came from.
- Future flexibility — Want to send data to multiple backends later? The Collector makes that easy without changing your app.
See Switch from direct export to Collector for step-by-step instructions to convert your setup.
For more details, see Why use the OpenTelemetry Collector? and the Collector configuration guide.
Troubleshooting
Logs are not appearing in SigNoz
- Symptom: No log entries in Logs Explorer after configuring Vector.
- Likely cause: Vector is not running, or endpoint/key is misconfigured.
- Fix:
  - Check Vector status: sudo systemctl status vector (VM) or docker ps (Docker)
  - Check Vector logs: sudo journalctl -u vector -f (VM) or docker logs vector (Docker)
  - Verify endpoint and credentials (see below)
- Verify: After fixing, confirm new logs appear in Logs → Logs Explorer within a few minutes.
Endpoint and credentials check:
- Cloud: https://ingest.<region>.signoz.cloud/logs/vector with a valid <your-ingestion-key>
- Self-Hosted: http://<signoz-host>:4318/v1/logs with the collector reachable on port 4318
- Ensure no firewall is blocking the connection.
Self-Hosted: OTLP serialization error
If Vector logs show:
Failed serializing frame. error=Log event does not contain OTLP top-level fields (resourceLogs or resourceMetrics)
The sink is receiving events that were not shaped by the remap transform: either the transform is missing, or the sink's inputs points at the raw source instead of the transform. Verify that the sink's inputs lists the transform (to_otlp) and that the transform's inputs matches the source component ID in your config.
Vector configuration validation
Validate your Vector configuration before restarting:
vector validate --config-yaml /etc/vector/vector.yaml
Next Steps
- Log Query/Filtering guides
- Alerts setup for logs
- Log Pipelines for processing and transforming logs
Get Help
If you need help with the steps in this topic, please reach out to us on SigNoz Community Slack.
If you are a SigNoz Cloud user, please use in product chat support located at the bottom right corner of your SigNoz instance or contact us at cloud-support@signoz.io.