Prerequisites
- Kubernetes version >= 1.22
- Currently supports x86-64 (amd64) and arm64 architectures
- Helm version >= 3.8
- You must have `kubectl` access to your cluster

The following table describes the hardware requirements for installing SigNoz on Kubernetes:
| Component | Minimal Requirements | Recommended |
| --------- | -------------------- | ----------- |
| Memory    | 8 GB                 | 16 GB       |
| CPU       | 4 cores              | 8 cores     |
| Storage   | 30 GB                | 80 GB       |
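To check a node against the memory figures above, you can convert the Kubernetes quantity strings that `kubectl` prints (e.g. `16374584Ki`) into whole GiB. A minimal POSIX-shell sketch — the helper name `to_gib` and the round-down behavior are illustrative choices, not part of SigNoz:

```shell
# Convert a Kubernetes memory quantity (e.g. "16374584Ki", "8Gi",
# or plain bytes) to whole GiB, rounding down.
to_gib() {
  q="$1"
  case "$q" in
    *Ki) echo $(( ${q%Ki} / 1048576 )) ;;
    *Mi) echo $(( ${q%Mi} / 1024 )) ;;
    *Gi) echo "${q%Gi}" ;;
    *)   echo $(( q / 1073741824 )) ;;   # assume plain bytes
  esac
}

# Example (requires a running cluster):
#   to_gib "$(kubectl get nodes -o jsonpath='{.items[0].status.allocatable.memory}')"
```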
Set up a Local Kubernetes Cluster
Choose one of the following options to set up your local Kubernetes cluster:
- Minikube
  - Follow the official Minikube installation guide
  - Recommended configuration for SigNoz:

    ```shell
    minikube start --memory=8g --cpus=4
    ```
- Kind
  - Follow the official Kind installation guide
  - Recommended configuration for SigNoz, saved as `values.yaml`:

    ```yaml
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
        extraPortMappings:
          - containerPort: 8080
            hostPort: 8080
    ```

  - Run the following command to spin up the kind cluster:

    ```shell
    kind create cluster --config values.yaml --name <CLUSTER_NAME>
    ```
- K3s
  - Follow the official K3s installation guide
  - You can also use k3d to spin up a k3s cluster. k3d is a lightweight wrapper that runs k3s in Docker, making it easier to create and manage k3s clusters. It's particularly useful for development and testing, as it provides a quick way to spin up disposable k3s clusters. The installation process remains the same as with a regular k3s cluster.
  - To spin up a cluster, run the following command:

    ```shell
    k3d cluster create
    ```
Install SigNoz
Helm Installation
The SigNoz Helm chart will install the following components into your Kubernetes cluster:
- SigNoz
- SigNoz Collector
- ClickHouse
- ZooKeeper
Find a storage class to use in your cluster:

```shell
kubectl get storageclass
```

Create a `values.yaml` file that will contain the configuration for the chart. Here is a minimal example to get started:

```yaml
global:
  storageClass: <storage-class>
clickhouse:
  installCustomStorageClass: true
```

You can find an exhaustive list of the parameters here.
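For example, on a k3s/k3d cluster, which ships the `local-path` storage class by default (minikube typically provides `standard`), the file might look like the following — confirm the class name with `kubectl get storageclass` first:

```yaml
global:
  storageClass: local-path   # value from `kubectl get storageclass`
clickhouse:
  installCustomStorageClass: true
```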
Install SigNoz:
```shell
helm repo add signoz https://charts.signoz.io
helm repo update
helm install signoz signoz/signoz \
  --namespace <namespace> --create-namespace \
  --wait \
  --timeout 1h \
  -f values.yaml
```
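While `--wait` blocks until the release is ready, it can help to watch pod status from another terminal. A small sketch — the `not_ready` helper is an illustrative name, not part of the chart:

```shell
# Count pods whose STATUS column is not Running/Completed, from the
# plain-text output of `kubectl get pods --no-headers`.
not_ready() {
  awk '$3 != "Running" && $3 != "Completed" { n++ } END { print n+0 }'
}

# Example (requires a running cluster):
#   kubectl get pods -n <namespace> --no-headers | not_ready
# A result of 0 means every pod has settled.
```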
Test the installation
In another terminal, port-forward SigNoz on its HTTP port. (By default, SigNoz exposes its HTTP server on port 8080.)

```shell
kubectl port-forward -n <namespace> svc/signoz 8080:8080
```

Run the following command to check the health of SigNoz:

```shell
curl -X GET http://localhost:8080/api/v1/health
```

If the installation is successful, you should see the following output:

```json
{"status":"ok"}
```
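If the endpoint is not up yet (for example, while pods are still starting), the curl may fail transiently. A retry sketch assuming only `curl` and a POSIX shell — the `healthy` helper is an illustrative name:

```shell
# Succeed only when the response body contains "status":"ok".
healthy() {
  case "$1" in
    *'"status":"ok"'*) return 0 ;;
    *) return 1 ;;
  esac
}

# Poll the health endpoint up to 10 times, 3 seconds apart
# (requires the port-forward from the previous step):
#   for i in 1 2 3 4 5 6 7 8 9 10; do
#     healthy "$(curl -s http://localhost:8080/api/v1/health)" && { echo up; break; }
#     sleep 3
#   done
```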
By default, the retention period is set to 7 days for logs and traces, and 30 days for metrics. To change this, navigate to the General tab on the Settings page of the SigNoz UI.
For more details, refer to the retention period guide.
(Optional) Install OpenTelemetry Demo
The OpenTelemetry Demo is a microservice-based distributed system intended to illustrate the implementation of OpenTelemetry in a near real-world environment. See more details at OpenTelemetry Demo.
Get the address of the SigNoz collector:
```shell
kubectl get -n <namespace> svc/signoz-otel-collector
```

This value will be used in the next step to configure the OpenTelemetry Demo to send data to the SigNoz collector.
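In-cluster, the collector is reachable via the standard Kubernetes service DNS name (`<service>.<namespace>.svc.cluster.local`). A sketch that builds this name — the `svc_dns` helper is an illustrative name, not a SigNoz tool:

```shell
# Build the in-cluster DNS address of a service from its name and namespace.
svc_dns() {
  echo "$1.$2.svc.cluster.local"
}

# Example:
#   svc_dns signoz-otel-collector <namespace>
```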
Create a `values.yaml` file that will contain the configuration for the chart and point it at your SigNoz installation:

```yaml
default:
  env:
    - name: OTEL_SERVICE_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: "metadata.labels['app.kubernetes.io/component']"
    - name: OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE
      value: cumulative
    - name: OTEL_RESOURCE_ATTRIBUTES
      value: 'service.name=$(OTEL_SERVICE_NAME),service.namespace=opentelemetry-demo,service.version={{ .Chart.appVersion }}'
    - name: OTEL_COLLECTOR_NAME
      value: signoz-otel-collector.<namespace>.svc.cluster.local
```

Note: `OTEL_COLLECTOR_NAME` is the address obtained in the previous step.

Install OpenTelemetry Demo:
```shell
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm install otel-demo open-telemetry/opentelemetry-demo -f values.yaml
```

More details on the installation can be found here.