K8s Serverless (EKS Fargate) - Install
Follow this guide to set up an OpenTelemetry Collector sidecar alongside your applications on EKS Fargate. The sidecar collects OTLP telemetry from your app and sends it to SigNoz. It runs inside your pod, so no node-level access is required.
Note: On EKS Fargate, direct access to kubelet ports (`:10250`/`:10255`) isn't available. This guide uses the Kubernetes API-server proxy to scrape kubelet stats/summary via the `kubeletstats` receiver.
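Before wiring up the sidecar, you can sanity-check that the proxy path works in your cluster (replace `<NODE_NAME>` with one of your Fargate node names; `kubectl get --raw` issues the same API-server request the receiver will make):

kubectl get --raw "/api/v1/nodes/<NODE_NAME>/proxy/stats/summary" | head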
Prerequisites
- An EKS cluster with Fargate profiles set up
- A SigNoz backend (either SigNoz Cloud or self-hosted)
- `kubectl` access to update your app Deployment/Pod specs
- A Kubernetes ServiceAccount mapped to your workloads with sufficient RBAC permissions
1. Before you start: Install k8s-infra
- Install the `k8s-infra` Helm chart to enable cluster-level collection functionality. (See: K8s-Infra - Install)
- For Fargate nodes, disable the `otelAgent` (DaemonSet) in your Helm values; keep `otelDeployment` enabled if you want cluster metrics/events. (See the values sketch after this list.)
- Then add the sidecar collector to your application pods (steps below). The sidecar acts as the per-pod agent (a replacement for the DaemonSet `otelAgent`).
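A minimal Helm values override for Fargate might look like the following sketch; the `otelAgent.enabled` and `otelDeployment.enabled` keys are assumptions about the k8s-infra chart layout, so confirm them against your chart version's values:

# values-fargate.yaml
otelAgent:
  enabled: false      # DaemonSet pods don't schedule on Fargate; the sidecar replaces them
otelDeployment:
  enabled: true       # keeps cluster-level metrics/events collection

helm upgrade --install k8s-infra signoz/k8s-infra -f values-fargate.yaml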
2. Create a ConfigMap for the sidecar collector
Create a single ConfigMap containing both the collector configuration and an in-cluster kubeconfig (used by `kubeletstats` in API-proxy mode). Both data keys must live in the same ConfigMap, because the sidecar mounts it at `/conf` and expects `/conf/kubeconfig` and `/conf/otel-agent-config.yaml` side by side.
in-cluster kubeConfig ConfigMap
The `kubeconfig` key provides a minimal in-cluster kubeconfig so the sidecar collector can reach the API server using the pod's ServiceAccount:
apiVersion: v1
kind: ConfigMap
metadata:
name: otel-agent-sidecar-config
namespace: <YOUR_NAMESPACE>
data:
# minimal kubeconfig so the collector can talk to the API server via the pod SA
kubeconfig: |
apiVersion: v1
kind: Config
clusters:
- name: in-cluster
cluster:
server: https://kubernetes.default.svc:443
certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
contexts:
- name: in-cluster
context: { cluster: in-cluster, user: in-cluster }
current-context: in-cluster
users:
- name: in-cluster
user:
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
OtelCollector Config ConfigMap
Add the collector configuration under a second data key, `otel-agent-config.yaml`, in the same ConfigMap (shown as a separate fragment for readability; merge both keys into one manifest before applying):
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-agent-sidecar-config
  namespace: <YOUR_NAMESPACE>
data:
  otel-agent-config.yaml: |
exporters:
debug:
sampling_initial: 2
sampling_thereafter: 500
verbosity: basic
otlp:
endpoint: ${env:OTEL_EXPORTER_OTLP_ENDPOINT}
headers:
signoz-ingestion-key: ${env:SIGNOZ_API_KEY}
tls:
insecure: ${env:OTEL_EXPORTER_OTLP_INSECURE}
insecure_skip_verify: ${env:OTEL_EXPORTER_OTLP_INSECURE_SKIP_VERIFY}
extensions:
health_check:
endpoint: 0.0.0.0:13133
pprof:
endpoint: localhost:1777
zpages:
endpoint: localhost:55679
processors:
batch:
send_batch_size: 10000
timeout: 200ms
k8sattributes:
extract:
annotations: []
labels: []
metadata:
- k8s.namespace.name
- k8s.deployment.name
- k8s.statefulset.name
- k8s.daemonset.name
- k8s.cronjob.name
- k8s.job.name
- k8s.node.name
- k8s.node.uid
- k8s.pod.name
- k8s.pod.uid
- k8s.pod.start_time
filter:
node_from_env_var: K8S_NODE_NAME
passthrough: false
pod_association:
- sources:
- from: resource_attribute
name: k8s.pod.ip
- sources:
- from: resource_attribute
name: k8s.pod.uid
- sources:
- from: connection
resourcedetection:
detectors:
- eks
- env
- system
override: false
system:
resource_attributes:
host.id:
enabled: false
host.name:
enabled: false
os.type:
enabled: true
timeout: 2s
receivers:
kubeletstats:
auth_type: kubeConfig
collection_interval: 30s
endpoint: ${env:K8S_NODE_NAME}
insecure_skip_verify: true
metric_groups:
- container
- pod
- node
- volume
metrics:
container.cpu.usage:
enabled: true
container.uptime:
enabled: true
k8s.container.cpu_limit_utilization:
enabled: true
k8s.container.cpu_request_utilization:
enabled: true
k8s.container.memory_limit_utilization:
enabled: true
k8s.container.memory_request_utilization:
enabled: true
k8s.node.cpu.usage:
enabled: true
k8s.node.uptime:
enabled: true
k8s.pod.cpu.usage:
enabled: true
k8s.pod.cpu_limit_utilization:
enabled: true
k8s.pod.cpu_request_utilization:
enabled: true
k8s.pod.memory_limit_utilization:
enabled: true
k8s.pod.memory_request_utilization:
enabled: true
k8s.pod.uptime:
enabled: true
otlp:
protocols:
grpc: { endpoint: 0.0.0.0:4317, max_recv_msg_size_mib: 4 }
http: { endpoint: 0.0.0.0:4318 }
service:
extensions: [health_check, zpages, pprof]
pipelines:
logs:
receivers: [otlp]
processors: [resourcedetection, k8sattributes, batch]
exporters: [debug, otlp]
metrics:
receivers: [otlp, kubeletstats]
processors: [resourcedetection, k8sattributes, batch]
exporters: [otlp, debug]
traces:
receivers: [otlp]
processors: [resourcedetection, k8sattributes, batch]
exporters: [otlp, debug]
telemetry:
logs: { encoding: json }
metrics: { address: 0.0.0.0:8888 }
This configuration reads exporter endpoints and credentials from environment variables you set in your Deployment (step 4): `OTEL_EXPORTER_OTLP_ENDPOINT`, `OTEL_EXPORTER_OTLP_INSECURE`, `OTEL_EXPORTER_OTLP_INSECURE_SKIP_VERIFY`, `SIGNOZ_API_KEY`, and `K8S_NODE_NAME`.
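If you prefer not to inline the ingestion key, `SIGNOZ_API_KEY` can be sourced from a Kubernetes Secret instead of a literal value (the Secret name and key below are illustrative):

- name: SIGNOZ_API_KEY
  valueFrom:
    secretKeyRef:
      name: signoz-ingestion-key
      key: ingestion-key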
3. Cluster roles and privileges
Create a ServiceAccount and minimal RBAC so the sidecar can call the API-server proxy path:
apiVersion: v1
kind: ServiceAccount
metadata:
name: otel-agent-sidecar
namespace: <YOUR_NAMESPACE>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: otel-kubeletstats-proxy
rules:
# Needed for: GET /api/v1/nodes/<node>/proxy/stats/summary
- apiGroups: [""]
resources: ["nodes/stats", "nodes/proxy"]
verbs: ["get"]
# Informers for enrichment (k8sattributes/resourcedetection)
- apiGroups: [""]
resources: ["pods", "namespaces", "nodes"]
verbs: ["get","list","watch"]
# Optional: owner lookups (ReplicaSets)
- apiGroups: ["apps"]
resources: ["replicasets"]
verbs: ["get","list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: otel-kubeletstats-proxy
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: otel-kubeletstats-proxy
subjects:
- kind: ServiceAccount
name: otel-agent-sidecar
namespace: <YOUR_NAMESPACE>
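Before moving on, you can verify that the binding grants what the receiver needs; both commands should print `yes`:

kubectl auth can-i get nodes/proxy --as=system:serviceaccount:<YOUR_NAMESPACE>:otel-agent-sidecar
kubectl auth can-i get nodes/stats --as=system:serviceaccount:<YOUR_NAMESPACE>:otel-agent-sidecar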
4. Add the sidecar container to your app
Add the following sidecar container to your Deployments/Apps. Replace placeholders like `<OTLP_EXPORTER_OTLP_ENDPOINT>` and `<YOUR_K8S_CLUSTER_NAME>`.
# Example Deployment fragment
spec:
template:
spec:
## Service Account for otel-agent-sidecar container ##
serviceAccountName: otel-agent-sidecar
## Mount the otel-agent config configMap
volumes:
- name: otel-agent-config-vol
configMap:
name: otel-agent-sidecar-config
containers:
- name: your-app
image: <YOUR_IMAGE>
        env:
        # The collector runs as a sidecar in the same pod, so the app reaches it on localhost
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://localhost:4317
      ###### Otel Agent Sidecar container ####
      - name: opentelemetry-collector-contrib
        image: otel/opentelemetry-collector-contrib:0.109.0
        command:
        - /otelcol-contrib
        - --config=/conf/otel-agent-config.yaml
env:
# This endpoint should be reachable from your pod
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: <OTLP_EXPORTER_OTLP_ENDPOINT>
        - name: OTEL_EXPORTER_OTLP_INSECURE
          value: "true"
        - name: OTEL_EXPORTER_OTLP_INSECURE_SKIP_VERIFY
          value: "false"
        # Ingestion key referenced by the otlp exporter headers
        - name: SIGNOZ_API_KEY
          value: <SIGNOZ_INGESTION_KEY>
        # Points kubeletstats (auth_type: kubeConfig) at the mounted kubeconfig
        - name: KUBECONFIG
          value: /conf/kubeconfig
        - name: OTEL_SECRETS_PATH
          value: /secrets
- name: K8S_CLUSTER_NAME
value: <YOUR_K8S_CLUSTER_NAME>
- name: K8S_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: K8S_POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: K8S_HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
- name: K8S_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: K8S_POD_UID
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.uid
- name: K8S_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: SIGNOZ_COMPONENT
value: otel-agent
- name: OTEL_RESOURCE_ATTRIBUTES
value: signoz.component=$(SIGNOZ_COMPONENT),k8s.cluster.name=$(K8S_CLUSTER_NAME),k8s.node.name=$(K8S_NODE_NAME),host.name=$(K8S_NODE_NAME)
livenessProbe:
failureThreshold: 6
httpGet:
path: /
port: 13133
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
ports:
- containerPort: 13133
name: health-check
protocol: TCP
- containerPort: 8888
name: metrics
protocol: TCP
- containerPort: 4317
name: otlp
protocol: TCP
- containerPort: 4318
name: otlp-http
protocol: TCP
readinessProbe:
failureThreshold: 6
httpGet:
path: /
port: 13133
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- mountPath: /conf
name: otel-agent-config-vol
5. Apply manifests
kubectl apply -f otel-agent-sidecar-config.yaml
kubectl apply -f rbac.yaml
kubectl apply -f app-deployment.yaml
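Once the rollout completes, confirm the sidecar is healthy (the deployment and container names below assume the examples above):

kubectl -n <YOUR_NAMESPACE> get pods   # expect 2/2 Ready on sidecar-enabled pods
kubectl -n <YOUR_NAMESPACE> logs deploy/<YOUR_APP> -c opentelemetry-collector-contrib --tail=50

The `debug` exporter samples telemetry into the sidecar logs, and the `health_check` extension on port 13133 backs the liveness/readiness probes.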
Notes & troubleshooting
If you see `404 Not Found` for `/api/v1/nodes/<node>/proxy/stats/summary`, ensure:
- The receiver endpoint is `endpoint: ${env:K8S_NODE_NAME}` (not empty / not a host:port).
- The env var is set via `spec.nodeName` and your sidecar has `KUBECONFIG=/conf/kubeconfig`.
- RBAC allows `get` on `nodes/stats` and `nodes/proxy`.
Some EKS Fargate environments may not expose `/stats/summary` via proxy. If that's your case, use the cAdvisor via API-proxy method (`/api/v1/nodes/$NODE/proxy/metrics/cadvisor`) with a Prometheus receiver as a fallback.
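A sketch of that fallback configuration, assuming the same ServiceAccount token and `K8S_NODE_NAME` variable used above (verify the scrape options against your collector version):

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: fargate-cadvisor
          scheme: https
          metrics_path: /api/v1/nodes/${env:K8S_NODE_NAME}/proxy/metrics/cadvisor
          # Authenticate to the API server with the pod's ServiceAccount
          authorization:
            credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          tls_config:
            ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          static_configs:
            - targets: ['kubernetes.default.svc:443']

Swap `prometheus` in for `kubeletstats` in the metrics pipeline's receivers list.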