OpenTelemetry Operator - Configure
This guide explains how to configure the OpenTelemetry Operator to deploy collectors as a DaemonSet or a Deployment, so you can send telemetry data to your SigNoz instance. The configurations below are adapted from the OpenTelemetry Operator's documentation for use with SigNoz. If you haven't installed the Operator yet, see the Install guide.
Prerequisites
- A running Kubernetes cluster and kubectl access
- OpenTelemetry Operator installed with CRDs applied (see the Install guide)
- A reachable SigNoz OTLP endpoint (gRPC 4317 and/or HTTP 4318)
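Before proceeding, you can confirm the Operator and its CRDs are in place. A quick check, assuming the Operator was installed into the default opentelemetry-operator-system namespace:
# Confirm the Operator is running (namespace assumes a default install)
kubectl get pods -n opentelemetry-operator-system
# Confirm the OpenTelemetryCollector CRD is registered
kubectl get crd opentelemetrycollectors.opentelemetry.io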
k8s-infra is a pre-configured Helm chart from SigNoz, tailored for Kubernetes infrastructure observability. It provides a straightforward, out-of-the-box solution: install the chart in your cluster, and it automatically gathers metrics, logs, traces, and events from your entire Kubernetes environment, with no manual configuration needed.
For more information, refer to the K8s-infra Collection Agent.
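If you prefer that route, an install sketch looks like the following; the release name and namespace are placeholders, and any chart values (cluster name, OTLP endpoint, ingestion key) should be taken from the k8s-infra chart's documentation:
# Add the SigNoz Helm repository and install the k8s-infra chart
helm repo add signoz https://charts.signoz.io
helm repo update
helm install k8s-infra signoz/k8s-infra -n platform --create-namespace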
Configuring for DaemonSet
A DaemonSet runs one OpenTelemetry Collector per node to perform node-local collection and enrichment.
What it collects
- Logs: Kubernetes container logs via filelog/k8s from /var/log/pods/*/*/*.log
- Host metrics: CPU, disk, filesystem, load, memory, and network via hostmetrics
- Kubelet stats: Container/pod/node/volume metrics via kubeletstats
- OTLP receiver: gRPC (4317) and HTTP (4318) for traces/metrics/logs
- Metadata enrichment: k8sattributes, resourcedetection
Use a DaemonSet when you need node-local collection (logs, host metrics) on every node. See the RBAC block below for the required permissions and the Collector CR for the DaemonSet resource you can apply.
RBAC
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: signoz-collector-daemonset-collector-role
namespace: opentelemetry-operator-system
rules:
- apiGroups: [""]
resources: [pods, namespaces, nodes, persistentvolumeclaims]
verbs: [get, list, watch]
- apiGroups: ["apps"]
resources: [replicasets]
verbs: [get, list, watch]
- apiGroups: ["extensions"]
resources: [replicasets]
verbs: [get, list, watch]
- apiGroups: [""]
resources: [nodes, endpoints]
verbs: [list, watch]
- apiGroups: ["batch"]
resources: [jobs]
verbs: [list, watch]
- apiGroups: [""]
resources: [nodes/proxy]
verbs: [get]
- apiGroups: [""]
resources: [nodes/stats, configmaps, events]
verbs: [create, get]
- apiGroups: [""]
resourceNames: [otel-container-insight-clusterleader]
resources: [configmaps]
verbs: [get, update]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: signoz-collector-daemonset-collector-binding
namespace: opentelemetry-operator-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: signoz-collector-daemonset-collector-role
subjects:
- kind: ServiceAccount
name: signoz-collector-daemonset-collector
namespace: opentelemetry-operator-system
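The ClusterRoleBinding above targets a ServiceAccount named signoz-collector-daemonset-collector, which matches the Operator's default naming of <CR name>-collector in the CR's namespace. If you want to manage the ServiceAccount yourself instead of relying on that default, a sketch of creating it and referencing it from the Collector CR (the serviceAccount field is part of the OpenTelemetryCollector spec; adjust the namespace to wherever you deploy the CR):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: signoz-collector-daemonset-collector
  namespace: opentelemetry-operator-system
---
# In the OpenTelemetryCollector CR, reference it explicitly:
# spec:
#   serviceAccount: signoz-collector-daemonset-collector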
Collector CR
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
name: signoz-collector-daemonset
spec:
mode: daemonset
image: docker.io/otel/opentelemetry-collector-contrib:0.109.0
env:
- name: K8S_CLUSTER_NAME
value: "<YOUR_CLUSTER_NAME>"
- name: OTEL_EXPORTER_OTLP_INSECURE
value: "true"
- name: OTEL_EXPORTER_OTLP_INSECURE_SKIP_VERIFY
value: "false"
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: K8S_POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: K8S_HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: K8S_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: K8S_POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: K8S_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SIGNOZ_COMPONENT
value: otel-daemonset
- name: OTEL_RESOURCE_ATTRIBUTES
value: signoz.component=$(SIGNOZ_COMPONENT),k8s.cluster.name=$(K8S_CLUSTER_NAME)
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: hostfs
hostPath:
path: /
volumeMounts:
- name: varlog
mountPath: /var/log
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: hostfs
mountPath: /hostfs
readOnly: true
mountPropagation: HostToContainer
config:
exporters:
otlp:
endpoint: https://ingest.{region}.signoz.cloud:443
headers:
signoz-ingestion-key: "<SIGNOZ_INGESTION_KEY>"
tls:
insecure: ${env:OTEL_EXPORTER_OTLP_INSECURE}
insecure_skip_verify: ${env:OTEL_EXPORTER_OTLP_INSECURE_SKIP_VERIFY}
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: localhost:1777
zpages:
endpoint: localhost:55679
processors:
batch:
send_batch_size: 10000
timeout: 200ms
k8sattributes:
extract:
annotations: []
labels: []
metadata:
- k8s.namespace.name
- k8s.deployment.name
- k8s.statefulset.name
- k8s.daemonset.name
- k8s.cronjob.name
- k8s.job.name
- k8s.node.name
- k8s.node.uid
- k8s.pod.name
- k8s.pod.uid
- k8s.pod.start_time
filter:
node_from_env_var: K8S_NODE_NAME
passthrough: false
pod_association:
- sources:
- from: resource_attribute
name: k8s.pod.ip
- sources:
- from: resource_attribute
name: k8s.pod.uid
- sources:
- from: connection
resourcedetection:
detectors:
- k8snode
- env
- system
k8snode:
auth_type: serviceAccount
node_from_env_var: K8S_NODE_NAME
override: false
system:
resource_attributes:
host.id:
enabled: false
host.name:
enabled: false
os.type:
enabled: true
timeout: 2s
receivers:
filelog/k8s:
exclude:
- /var/log/pods/default_my-release*-signoz-*/*/*.log
- /var/log/pods/default_my-release*-k8s-infra-*/*/*.log
- /var/log/pods/kube-system_*/*/*.log
- /var/log/pods/*_hotrod*_*/*/*.log
- /var/log/pods/*_locust*_*/*/*.log
include:
- /var/log/pods/*/*/*.log
include_file_name: false
include_file_path: true
operators:
- id: container-parser
type: container
start_at: end
hostmetrics:
collection_interval: 30s
root_path: /hostfs
scrapers:
cpu: {}
disk:
exclude:
devices:
- ^ram\d+$
- ^zram\d+$
- ^loop\d+$
- ^fd\d+$
- ^hd[a-z]\d+$
- ^sd[a-z]\d+$
- ^vd[a-z]\d+$
- ^xvd[a-z]\d+$
- ^nvme\d+n\d+p\d+$
match_type: regexp
filesystem:
exclude_fs_types:
fs_types:
- autofs
- binfmt_misc
- bpf
- cgroup2?
- configfs
- debugfs
- devpts
- devtmpfs
- fusectl
- hugetlbfs
- iso9660
- mqueue
- nsfs
- overlay
- proc
- procfs
- pstore
- rpc_pipefs
- securityfs
- selinuxfs
- squashfs
- sysfs
- tracefs
match_type: strict
exclude_mount_points:
match_type: regexp
mount_points:
- /dev/*
- /proc/*
- /sys/*
- /run/credentials/*
- /run/k3s/containerd/*
- /var/lib/docker/*
- /var/lib/containers/storage/*
- /var/lib/kubelet/*
- /snap/*
load: {}
memory: {}
network:
exclude:
interfaces:
- ^veth.*$
- ^docker.*$
- ^br-.*$
- ^flannel.*$
- ^cali.*$
- ^cbr.*$
- ^cni.*$
- ^dummy.*$
- ^tailscale.*$
- ^lo$
match_type: regexp
kubeletstats:
auth_type: serviceAccount
collection_interval: 30s
endpoint: ${env:K8S_HOST_IP}:10250
extra_metadata_labels:
- container.id
- k8s.volume.type
insecure_skip_verify: true
metric_groups:
- container
- pod
- node
- volume
metrics:
container.cpu.usage:
enabled: true
container.uptime:
enabled: true
k8s.container.cpu_limit_utilization:
enabled: true
k8s.container.cpu_request_utilization:
enabled: true
k8s.container.memory_limit_utilization:
enabled: true
k8s.container.memory_request_utilization:
enabled: true
k8s.node.cpu.usage:
enabled: true
k8s.node.uptime:
enabled: true
k8s.pod.cpu.usage:
enabled: true
k8s.pod.cpu_limit_utilization:
enabled: true
k8s.pod.cpu_request_utilization:
enabled: true
k8s.pod.memory_limit_utilization:
enabled: true
k8s.pod.memory_request_utilization:
enabled: true
k8s.pod.uptime:
enabled: true
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
max_recv_msg_size_mib: 4
http:
endpoint: 0.0.0.0:4318
service:
extensions:
- health_check
- zpages
- pprof
pipelines:
logs:
exporters:
- otlp
processors:
- resourcedetection
- k8sattributes
- batch
receivers:
- otlp
- filelog/k8s
metrics:
exporters:
- otlp
processors:
- resourcedetection
- k8sattributes
- batch
receivers:
- otlp
- hostmetrics
- kubeletstats
traces:
exporters:
- otlp
processors:
- resourcedetection
- k8sattributes
- batch
receivers:
- otlp
telemetry:
logs:
encoding: json
metrics:
address: 0.0.0.0:8888
- Set your ingestion endpoint according to your SigNoz Cloud region. Refer to the SigNoz Cloud ingestion endpoint guide to find the correct endpoint for your deployment.
- Replace <SIGNOZ_INGESTION_KEY> with the ingestion key provided by SigNoz.
- Replace <CLUSTER_NAME> with the name of the Kubernetes cluster or a unique identifier for the cluster.
- Replace <DEPLOYMENT_ENVIRONMENT> with the deployment environment of your application, for example "staging" or "production".
Apply the DaemonSet
# Apply DaemonSet RBAC
kubectl apply -f rbac-daemonset.yaml
# Apply the DaemonSet collector CR
kubectl apply -f collector-daemonset.yaml
# Notes:
# - Replace OTLP endpoints/tokens in the manifests or mount via Secrets.
# - Use secure endpoints for production where possible.
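One way to avoid hard-coding the ingestion key in the manifest is to store it in a Secret and pass it to the collector through an environment variable, which the config can then expand. A minimal sketch, using a Secret and variable name of your choosing (the ${env:...} expansion is the same mechanism the TLS settings above already use):
# Create a Secret holding the ingestion key
kubectl create secret generic signoz-ingestion-key \
  -n opentelemetry-operator-system \
  --from-literal=ingestion-key="<SIGNOZ_INGESTION_KEY>"
Then add an entry under spec.env in the Collector CR and reference it from the exporter headers:
- name: SIGNOZ_INGESTION_KEY
  valueFrom:
    secretKeyRef:
      name: signoz-ingestion-key
      key: ingestion-key
With that in place, the exporter header becomes signoz-ingestion-key: ${env:SIGNOZ_INGESTION_KEY}.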
Verify the DaemonSet
# List DaemonSet collector pods
kubectl get pods -A -l app.kubernetes.io/component=opentelemetry-collector
# Tail DaemonSet logs (the Operator appends "-collector" to the CR name; replace <NAMESPACE> if different)
kubectl logs -n <NAMESPACE> ds/signoz-collector-daemonset-collector --since=5m
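You can also probe the health_check extension configured above (port 13133) to confirm a collector pod reports healthy. A small sketch that picks one collector pod (the label selector assumes the Operator's default labels):
# Pick one collector pod and query its health endpoint
POD=$(kubectl get pods -n <NAMESPACE> -l app.kubernetes.io/component=opentelemetry-collector -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward -n <NAMESPACE> "$POD" 13133:13133 &
curl http://localhost:13133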
Configuring for Deployment
An independent Deployment runs a centralized pool of OpenTelemetry Collectors to aggregate cluster-level data.
What it collects
- Cluster metrics: Node/pod conditions, resource allocatable, runtime and kubelet metadata via k8s_cluster
- Kubernetes events: via the k8s_events receiver
- Metadata enrichment: k8sattributes, resourcedetection
Use a Deployment when you want a shared collector service with a smaller number of replicas. See the RBAC block below for the required permissions and the Collector CR for the Deployment resource you can apply.
RBAC
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: signoz-collector-deployment-collector-role
namespace: opentelemetry-operator-system
rules:
- apiGroups: [""]
resources: [events, namespaces, namespaces/status, nodes, nodes/spec, pods, pods/status, replicationcontrollers, replicationcontrollers/status, resourcequotas, services]
verbs: [get, list, watch]
- apiGroups: ["apps"]
resources: [daemonsets, deployments, replicasets, statefulsets]
verbs: [get, list, watch]
- apiGroups: ["extensions"]
resources: [daemonsets, deployments, replicasets]
verbs: [get, list, watch]
- apiGroups: ["batch"]
resources: [jobs, cronjobs]
verbs: [get, list, watch]
- apiGroups: ["autoscaling"]
resources: [horizontalpodautoscalers]
verbs: [get, list, watch]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: signoz-collector-deployment-collector-binding
namespace: opentelemetry-operator-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: signoz-collector-deployment-collector-role
subjects:
- kind: ServiceAccount
name: signoz-collector-deployment-collector
namespace: opentelemetry-operator-system
Collector CR
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
name: signoz-collector-deployment
spec:
mode: deployment
image: docker.io/otel/opentelemetry-collector-contrib:0.109.0
env:
- name: OTEL_EXPORTER_OTLP_INSECURE
value: "true"
- name: OTEL_EXPORTER_OTLP_INSECURE_SKIP_VERIFY
value: "false"
- name: OTEL_SECRETS_PATH
value: /secrets
- name: K8S_CLUSTER_NAME
value: "<CLUSTER_NAME>"
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: K8S_POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: K8S_HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: K8S_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: K8S_POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: K8S_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SIGNOZ_COMPONENT
value: otel-deployment
- name: OTEL_RESOURCE_ATTRIBUTES
value: signoz.component=$(SIGNOZ_COMPONENT),k8s.cluster.name=$(K8S_CLUSTER_NAME)
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: hostfs
hostPath:
path: /
volumeMounts:
- name: varlog
mountPath: /var/log
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: hostfs
mountPath: /hostfs
readOnly: true
mountPropagation: HostToContainer
config:
exporters:
otlp:
endpoint: https://ingest.{region}.signoz.cloud:443
headers:
signoz-ingestion-key: "<SIGNOZ_INGESTION_KEY>"
tls:
insecure: ${env:OTEL_EXPORTER_OTLP_INSECURE}
insecure_skip_verify: ${env:OTEL_EXPORTER_OTLP_INSECURE_SKIP_VERIFY}
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: localhost:1777
zpages:
endpoint: localhost:55679
processors:
batch:
send_batch_size: 10000
timeout: 1s
k8sattributes:
extract:
metadata:
- k8s.namespace.name
- k8s.deployment.name
- k8s.statefulset.name
- k8s.daemonset.name
- k8s.cronjob.name
- k8s.job.name
- k8s.node.name
- k8s.node.uid
- k8s.pod.name
- k8s.pod.uid
- k8s.pod.start_time
passthrough: false
pod_association:
- sources:
- from: resource_attribute
name: k8s.pod.ip
- sources:
- from: resource_attribute
name: k8s.pod.uid
- sources:
- from: connection
resourcedetection:
detectors:
- env
override: false
timeout: 2s
receivers:
k8s_cluster:
allocatable_types_to_report:
- cpu
- memory
collection_interval: 30s
metrics:
k8s.node.condition:
enabled: true
k8s.pod.status_reason:
enabled: true
node_conditions_to_report:
- Ready
- MemoryPressure
- DiskPressure
- PIDPressure
- NetworkUnavailable
resource_attributes:
container.runtime:
enabled: true
container.runtime.version:
enabled: true
k8s.container.status.last_terminated_reason:
enabled: true
k8s.kubelet.version:
enabled: true
k8s.pod.qos_class:
enabled: true
k8s_events:
auth_type: serviceAccount
service:
extensions:
- health_check
- zpages
- pprof
pipelines:
logs:
exporters:
- otlp
processors:
- k8sattributes
- resourcedetection
- batch
receivers:
- k8s_events
metrics/internal:
exporters:
- otlp
processors:
- k8sattributes
- resourcedetection
- batch
receivers:
- k8s_cluster
telemetry:
logs:
encoding: json
metrics:
address: 0.0.0.0:8888
- Set your ingestion endpoint according to your SigNoz Cloud region. Refer to the SigNoz Cloud ingestion endpoint guide to find the correct endpoint for your deployment.
- Replace <SIGNOZ_INGESTION_KEY> with the ingestion key provided by SigNoz.
- Replace <CLUSTER_NAME> with the name of the Kubernetes cluster or a unique identifier for the cluster.
- Replace <DEPLOYMENT_ENVIRONMENT> with the deployment environment of your application, for example "staging" or "production".
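One design note before applying: the k8s_cluster and k8s_events receivers collect cluster-wide data, so running multiple replicas of this collector would emit duplicate metrics and events. Keeping a single replica is the safe default; the replicas field shown here is part of the OpenTelemetryCollector spec:
spec:
  mode: deployment
  replicas: 1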
Apply the Deployment
# Apply Deployment RBAC
kubectl apply -f rbac-deployment.yaml
# Apply the Deployment collector CR
kubectl apply -f collector-deployment.yaml
# Notes:
# - Replace OTLP endpoints/tokens in the manifests or mount via Secrets.
# - Use secure endpoints for production where possible.
Verify the Deployment
# List Deployment collector pods
kubectl get pods -A -l app.kubernetes.io/component=opentelemetry-collector
# Tail Deployment logs (the Operator appends "-collector" to the CR name; replace <NAMESPACE> if different)
kubectl logs -n <NAMESPACE> deploy/signoz-collector-deployment-collector --since=5m
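For a closer look at pipeline activity, the zpages extension configured above listens on localhost:55679 inside the collector container, so it is reachable through a port-forward to the pod. A sketch (label selector assumes the Operator's default labels):
# Forward the zpages port from one collector pod and inspect pipeline activity
POD=$(kubectl get pods -n <NAMESPACE> -l app.kubernetes.io/component=opentelemetry-collector -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward -n <NAMESPACE> "$POD" 55679:55679 &
curl http://localhost:55679/debug/pipelinez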
Auto-instrumentation with OpenTelemetry Operator
The Operator can also inject language agents automatically into your workloads using the Instrumentation CRD and simple pod annotations. This enables traces/metrics/logs without manual code changes.
- See the upstream guide: Automatic instrumentation via the Operator
- For language-specific steps, use our instrumentation docs. For example, Java: Kubernetes (OTel Operator) section
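As a brief illustration, a minimal Instrumentation resource could look like the following. The exporter endpoint below assumes the Operator's default <CR name>-collector Service naming and the namespace used in this guide; adjust both to your setup:
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: signoz-instrumentation
spec:
  exporter:
    # OTLP/HTTP endpoint of the collector Service (assumed name and namespace)
    endpoint: http://signoz-collector-daemonset-collector.opentelemetry-operator-system.svc.cluster.local:4318
  propagators:
    - tracecontext
    - baggage
Then annotate the workload's pod template, for example instrumentation.opentelemetry.io/inject-java: "true" for a Java application, and the Operator injects the agent on the next pod restart.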