This guide explains how to deploy SigNoz on Kubernetes using Foundry and Kustomize. Foundry generates and deploys a complete set of Kustomize manifests from your configuration.
Prerequisites
- A running Kubernetes cluster (v1.25+)
- kubectl installed and configured to talk to your cluster
- foundryctl: The Foundry CLI (see Step 1)
Step 1: Install foundryctl
Download and extract the foundryctl binary from GitHub Releases.
Linux:
curl -L "https://github.com/SigNoz/foundry/releases/latest/download/foundry_linux_$(uname -m | sed 's/x86_64/amd64/g' | sed 's/aarch64/arm64/g').tar.gz" -o foundry.tar.gz
tar -xzf foundry.tar.gz
macOS:
curl -L "https://github.com/SigNoz/foundry/releases/latest/download/foundry_darwin_$(uname -m | sed 's/x86_64/amd64/g').tar.gz" -o foundry.tar.gz
tar -xzf foundry.tar.gz
Windows (PowerShell):
$ARCH = if ($env:PROCESSOR_ARCHITECTURE -eq "ARM64") { "arm64" } else { "amd64" }
Invoke-WebRequest -Uri "https://github.com/SigNoz/foundry/releases/latest/download/foundry_windows_${ARCH}.tar.gz" -OutFile foundry.tar.gz -UseBasicParsing
tar -xzf foundry.tar.gz
After extracting, run foundryctl from the unpacked directory:
./foundry_*/bin/foundryctl <COMMAND> <OPTIONS>
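The `$(uname -m | sed …)` substitution in the download URLs above maps the kernel's machine name to the amd64/arm64 naming used by the release assets. You can check what it resolves to on your machine before downloading:

```shell
# Resolve the release-asset architecture suffix for this machine.
# x86_64 -> amd64, aarch64 -> arm64; other names pass through unchanged.
ARCH=$(uname -m | sed 's/x86_64/amd64/g' | sed 's/aarch64/arm64/g')
echo "foundry_linux_${ARCH}.tar.gz"
```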
Step 2: Create casting.yaml
Create a casting file that targets Kubernetes with flavor: kustomize and mode: kubernetes.
apiVersion: v1alpha1
metadata:
  name: signoz
spec:
  deployment:
    # kustomize generates plain Kubernetes manifests with Kustomize overlays
    flavor: kustomize
    # kubernetes specifies Kubernetes as the target
    mode: kubernetes
Step 3: Generate the Manifests (forge)
Run forge to generate the Kustomize manifests:
foundryctl forge -f casting.yaml
Foundry generates the manifests into pours/deployment/ with the following structure:
pours/deployment/
├── kustomization.yaml               # Root kustomization
├── namespace.yaml                   # SigNoz namespace
├── clickhouse-operator/             # Altinity ClickHouse Operator
│   ├── deployment.yaml
│   ├── clusterrole.yaml
│   ├── clusterrolebinding.yaml
│   ├── configmap.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── kustomization.yaml
├── telemetrystore/clickhouse/       # ClickHouse (time-series storage)
│   ├── clickhouseinstallation.yaml
│   ├── configmap.yaml
│   └── kustomization.yaml
├── telemetrykeeper/                 # ClickHouse Keeper (coordination)
│   └── clickhousekeeper/
│       ├── clickhousekeeperinstallation.yaml
│       └── kustomization.yaml
├── metastore/postgres/              # PostgreSQL (metadata storage)
│   ├── statefulset.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── kustomization.yaml
├── signoz/                          # SigNoz query service
│   ├── statefulset.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── kustomization.yaml
├── ingester/                        # OTel Collector (telemetry ingestion)
│   ├── deployment.yaml
│   ├── configmap.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── kustomization.yaml
└── telemetrystore-migrator/         # Schema migration job
    ├── job.yaml
    └── kustomization.yaml
Step 4: Deploy
The recommended way to deploy is foundryctl cast, which runs forge and then deploys in one step; it also installs the Altinity ClickHouse Operator CRDs that the generated manifests require:
foundryctl cast -f casting.yaml
If you prefer to apply the generated manifests yourself, run kubectl apply -k pours/deployment/ instead, and install the ClickHouse Operator CRDs manually first (see Troubleshooting: "CRD not found errors").
Step 5: Verify
- Check that all pods are running in the signoz namespace:
kubectl get pods -n signoz
Expected output (names and counts may vary):
NAME                                 READY   STATUS      RESTARTS   AGE
chi-signoz-clickhouse-0-0-0          1/1     Running     0          2m
clickhouse-operator-...              1/1     Running     0          2m
signoz-keeper-0                      1/1     Running     0          2m
signoz-postgres-0                    1/1     Running     0          2m
signoz-signoz-0                      1/1     Running     0          2m
signoz-ingester-...                  1/1     Running     0          2m
signoz-telemetrystore-migrator-...   0/1     Completed   0          2m
- Port-forward the SigNoz UI to your local machine:
kubectl port-forward -n signoz svc/signoz-signoz 8080:8080
Then open http://localhost:8080 in your browser.
- Check the health endpoint:
curl -X GET http://localhost:8080/api/v1/health
Customizing with Foundry Patches
Instead of manually editing generated manifests, use spec.patches in your casting to apply RFC 6902 JSON Patches during forge. Patches target generated files by path and are applied before writing to pours/, so your customizations are reproducible and survive re-forging.
Each patch entry has:
- target: the file path within pours/ to patch (exact or glob)
- operations: a list of JSON Patch operations (op, path, value)
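To make the operation semantics concrete, here is a simplified, hypothetical excerpt of a generated file and the effect of two operations on it. Note that replace changes a value at a path that must already exist, add creates a new key (or list entry), and a numeric segment such as /0 indexes into a list:

```yaml
# Before (hypothetical excerpt of a generated service.yaml, illustration only):
spec:
  type: ClusterIP
  ports:
    - port: 8080
---
# After the operations
#   {op: replace, path: /spec/type, value: LoadBalancer}   # path must exist
#   {op: add, path: /spec/ports/0/name, value: http}       # creates the key
# the fragment becomes:
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      name: http
```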
Example: Casting with patches
apiVersion: v1alpha1
metadata:
  name: signoz
spec:
  deployment:
    flavor: kustomize
    mode: kubernetes
  patches:
    # Set storage class and size on ClickHouse data volumes
    # Adjust for your cloud: gp3 (AWS), pd-ssd (GCP), managed-premium (Azure)
    - target: "deployment/telemetrystore/clickhouse/clickhouseinstallation.yaml"
      operations:
        - op: add
          path: /spec/templates/volumeClaimTemplates/0/spec/storageClassName
          value: gp3
        - op: replace
          path: /spec/templates/volumeClaimTemplates/0/spec/resources/requests/storage
          value: 100Gi
        - op: add
          path: /spec/templates/volumeClaimTemplates/1/spec/storageClassName
          value: gp3
    # Set resource limits on ClickHouse
    - target: "deployment/telemetrystore/clickhouse/clickhouseinstallation.yaml"
      operations:
        - op: replace
          path: /spec/templates/podTemplates/0/spec/containers/0/resources
          value:
            requests:
              cpu: "2"
              memory: "4Gi"
            limits:
              cpu: "4"
              memory: "8Gi"
    # Set resource limits on SigNoz
    - target: "deployment/signoz/statefulset.yaml"
      operations:
        - op: replace
          path: /spec/template/spec/containers/0/resources
          value:
            requests:
              memory: "1Gi"
              cpu: "500m"
            limits:
              memory: "2Gi"
              cpu: "1"
    # Schedule on dedicated observability nodes
    - target: "deployment/signoz/statefulset.yaml"
      operations:
        - op: add
          path: /spec/template/spec/tolerations
          value:
            - key: "dedicated"
              operator: "Equal"
              value: "signoz"
              effect: "NoSchedule"
        - op: add
          path: /spec/template/spec/nodeSelector
          value:
            node-role.kubernetes.io/observability: ""
    # Expose SigNoz via AWS NLB
    - target: "deployment/signoz/service.yaml"
      operations:
        - op: replace
          path: /spec/type
          value: LoadBalancer
        - op: add
          path: /metadata/annotations
          value:
            service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # Set resource limits on ingester
    - target: "deployment/ingester/deployment.yaml"
      operations:
        - op: replace
          path: /spec/template/spec/containers/0/resources
          value:
            requests:
              cpu: "1"
              memory: "2Gi"
            limits:
              cpu: "2"
              memory: "4Gi"
    # Set storage class on PostgreSQL metastore
    - target: "deployment/metastore/postgres/statefulset.yaml"
      operations:
        - op: add
          path: /spec/volumeClaimTemplates/0/spec/storageClassName
          value: gp3
        - op: replace
          path: /spec/volumeClaimTemplates/0/spec/resources/requests/storage
          value: 20Gi
Native Kustomize patches
Since Foundry generates standard Kustomize bases, you can also use native Kustomize patches on the generated kustomization.yaml. This lets you use strategic merge patches or overlays for environment-specific customization without re-forging.
Use a Foundry patch to inject a patches block into the root kustomization.yaml:
apiVersion: v1alpha1
metadata:
  name: signoz
spec:
  deployment:
    flavor: kustomize
    mode: kubernetes
  patches:
    - target: "deployment/kustomization.yaml"
      operations:
        - op: add
          path: /patches
          value:
            - target:
                kind: StatefulSet
                name: signoz-signoz
              patch: |-
                apiVersion: apps/v1
                kind: StatefulSet
                metadata:
                  name: signoz-signoz
                spec:
                  template:
                    spec:
                      nodeSelector:
                        node-role.kubernetes.io/observability: ""
Or create an overlay directory that references the generated base and applies your own Kustomize patches on top:
my-deployment/
├── base/                      # Copy of pours/deployment/
│   └── ...
└── overlays/
    └── prod/
        ├── kustomization.yaml
        └── increase-resources.yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: increase-resources.yaml
    target:
      kind: StatefulSet
      name: signoz-clickhouse
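The referenced increase-resources.yaml is an ordinary strategic merge patch. A minimal sketch follows; the container name and resource values are illustrative assumptions, so match them against the container spec in your generated base:

```yaml
# overlays/prod/increase-resources.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: signoz-clickhouse
spec:
  template:
    spec:
      containers:
        - name: clickhouse        # assumed container name; check the base manifest
          resources:
            requests:
              cpu: "4"
              memory: "8Gi"
```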
kubectl apply -k overlays/prod/
Troubleshooting
CRD not found errors
If you see errors that ClickHouseInstallation or ClickHouseKeeperInstallation resources are not recognized, the Altinity ClickHouse Operator CRDs are missing. Use foundryctl cast, which installs them automatically, or install them manually before running kubectl apply -k.
Pods stuck in Pending
Check that your cluster has enough resources. The default resource requests are:
- ClickHouse: 500m CPU, 512Mi memory
- ClickHouse Operator: 1000m CPU, 2Gi memory
- PostgreSQL: 100m CPU, 128Mi memory
- SigNoz: 100m CPU, 200Mi memory
- Ingester: 125m CPU, 512Mi memory
- Keeper: 1 CPU, 1Gi memory
Also verify that your cluster has a default StorageClass for the PersistentVolumeClaims.
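On a resource-constrained cluster (for example a local kind or minikube setup), you can also shrink the requests instead of adding nodes by reusing the spec.patches mechanism shown earlier. The target path matches the generated layout; the values here are illustrative:

```yaml
# casting.yaml (fragment): lower the SigNoz requests for small clusters
patches:
  - target: "deployment/signoz/statefulset.yaml"
    operations:
      - op: replace
        path: /spec/template/spec/containers/0/resources/requests
        value:
          cpu: "50m"
          memory: "128Mi"
```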
ClickHouse not ready
Check ClickHouse Operator logs:
kubectl logs -n signoz deployment/clickhouse-operator
Migration job failing
Check the migrator job logs:
kubectl logs -n signoz job/signoz-telemetrystore-migrator