Deploying to GCP
The easiest way to run SigNoz is to use SigNoz Cloud - no installation, maintenance, or scaling needed.
New users get 30 days of unlimited access to all features. Click here to sign up.
First, we need to set up a Kubernetes cluster (see the official GCP documentation for more info).
Follow the steps on this page to install SigNoz on Kubernetes with Helm.
The SigNoz Helm chart will install the following components into your Kubernetes cluster:
- Query Service (backend service)
- Web UI (frontend)
- OpenTelemetry Collectors
- Alertmanager
- ClickHouse chart (datastore)
- K8s-Infra chart (k8s infra metrics/logs collectors)
Prerequisites
- You must have a GKE cluster. Both Standard and Autopilot are supported.
- Kubernetes version >= 1.22
- x86-64/amd64 workloads, as the arm64 architecture is not currently supported
- Helm version >= 3.8
- You must have kubectl access to your cluster

The following table describes the hardware requirements for installing SigNoz on Kubernetes:
Component   Minimal Requirements   Recommended
Memory      8 GB                   16 GB
CPU         4 cores                8 cores
Storage     30 GB                  80 GB

Suggestion: If you want to use your own custom storage class for PVCs, set the global.storageClass configuration to the desired storage class.
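As a sketch, an override-values.yaml that points all PVCs at GKE's built-in premium-rwo SSD-backed class could look like the following. The class name is only an example; pick any class that kubectl get storageclass lists in your cluster.

```yaml
# Example only: premium-rwo is a GKE-provided SSD-backed class;
# replace it with whatever storage class your cluster offers.
global:
  storageClass: premium-rwo
```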
Chart configuration
Here's the minimal required override-values.yaml that we'll be using later. You can find an overview of the parameters that can be configured during installation under chart configuration.
GKE Standard
In GKE Standard, you can either install with the default configuration or use the following override-values.yaml:
global:
storageClass: gce-resizable
cloud: gcp
clickhouse:
installCustomStorageClass: true
GKE Autopilot
In GKE Autopilot, you must set cloud to gcp/autogke and update kubeletMetrics to use the read-only Kubelet endpoint, as shown in the override-values.yaml below:
global:
storageClass: gce-resizable
cloud: gcp/autogke
clickhouse:
installCustomStorageClass: true
k8s-infra:
presets:
kubeletMetrics:
authType: none
endpoint: ${K8S_NODE_NAME}:10255
GKE Autopilot automatically overrides resource requests/limits. In our case, this applies to all signoz chart components as well as components from the clickhouse and k8s-infra charts, if enabled. Therefore, make sure you have enough resource quota in the region where the cluster is deployed. Read more about it here.
To list the storage classes in your Kubernetes cluster, run:

kubectl get storageclass
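For reference, the gce-resizable class that installCustomStorageClass: true asks the chart to create is, in essence, a GCE persistent-disk storage class with volume expansion enabled. A roughly equivalent manifest (field values here are illustrative, not copied from the chart) is:

```yaml
# Illustrative sketch of a resizable GCE storage class; the chart's
# actual definition may differ in provisioner or disk type.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-resizable
provisioner: pd.csi.storage.gke.io  # GKE's CSI persistent-disk driver
parameters:
  type: pd-standard
allowVolumeExpansion: true          # permits resizing ClickHouse PVCs later
reclaimPolicy: Retain
```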
Install SigNoz on Kubernetes with Helm
Add the SigNoz Helm repository to your client with the name signoz by running the following command:

helm repo add signoz https://charts.signoz.io
Verify that the repository is accessible to the Helm CLI by entering the following command:
helm repo list
Use the kubectl create ns command to create a new namespace. SigNoz recommends you use platform for your new namespace:

kubectl create ns platform
Run the following command to install the chart with the release name my-release in the platform namespace:
helm --namespace platform install my-release signoz/signoz -f override-values.yaml
Output:
NAME: my-release
LAST DEPLOYED: Mon May 23 20:34:55 2022
NAMESPACE: platform
STATUS: deployed
REVISION: 1
NOTES:
1. You have just deployed SigNoz cluster:
- frontend version: '0.8.0'
- query-service version: '0.8.0'
- alertmanager version: '0.23.0-0.1'
- otel-collector version: '0.43.0-0.1'
- otel-collector-metrics version: '0.43.0-0.1'
Note that the above command installs the latest stable version of SigNoz.

(Optional) To install a different version, you can use the --set flag to specify the version you wish to install. The following example command installs SigNoz version 0.8.0:
helm --namespace platform install my-release signoz/signoz \
--set frontend.image.tag="0.8.0" \
--set queryService.image.tag="0.8.0"
- If you use the --set flag, ensure that you specify the same versions for the frontend and queryService images. Specifying different versions could lead the SigNoz cluster to behave abnormally.
- Do not use the latest or develop tags in a production environment. Specifying these tags could install different versions of SigNoz on your cluster and could lead to data loss.
- You can access SigNoz by setting up port forwarding and browsing to the specified port. The following kubectl port-forward example command forwards all connections made to localhost:3301 to <signoz-frontend-service>:3301:
export SERVICE_NAME=$(kubectl get svc --namespace platform -l "app.kubernetes.io/component=frontend" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace platform port-forward svc/$SERVICE_NAME 3301:3301
Verify the Installation
Using the kubectl -n platform get pods command, monitor the SigNoz deployment. Wait for all the pods to be in the Running state:
kubectl -n platform get pods
Output:
NAME READY STATUS RESTARTS AGE
chi-signoz-cluster-0-0-0 1/1 Running 0 8m21s
clickhouse-operator-8cff468-n5s99 2/2 Running 0 8m55s
my-release-signoz-alertmanager-0 1/1 Running 0 8m54s
my-release-signoz-frontend-78774f44d7-wl87p 1/1 Running 0 8m55s
my-release-signoz-otel-collector-66c8c7dc9d-d8v5c 1/1 Running 0 8m55s
my-release-signoz-otel-collector-metrics-68bcfd5556-9tkgh 1/1 Running 0 8m55s
my-release-signoz-query-service-0 1/1 Running 0 8m54s
my-release-zookeeper-0 1/1 Running 0 8m54s
By default, the retention period is set to 7 days for logs and traces, and 30 days for metrics. To change this, navigate to the General tab on the Settings page of the SigNoz UI.
For more details, refer to https://signoz.io/docs/userguide/retention-period.
(Optional) Install a Sample Application and Generate Tracing Data
Follow the steps in this section to install a sample application named HotROD, and generate tracing data.

Use the HotROD install script below to create a sample-application namespace and deploy the HotROD application in it:

curl -sL https://github.com/SigNoz/signoz/raw/develop/sample-apps/hotrod/hotrod-install.sh \
  | HELM_RELEASE=my-release SIGNOZ_NAMESPACE=platform bash
Using the kubectl -n sample-application get pods command, monitor the sample application pods. Wait for all the pods to be in the Running state:

kubectl -n sample-application get pods
Output:
NAME                            READY   STATUS    RESTARTS   AGE
hotrod-55bd58cc8d-mzxq8         1/1     Running   0          2m
locust-master-b65744bbf-l7v7n   1/1     Running   0          2m
locust-slave-688c86bcb7-ngx7w   1/1     Running   0          2m
Use the following command to generate load:
kubectl --namespace sample-application run strzal --image=djbingham/curl \
  --restart='OnFailure' -i --tty --rm --command -- curl -X POST -F \
  'user_count=6' -F 'spawn_rate=2' http://locust-master:8089/swarm
Browse to http://localhost:3301 to see the metrics and traces for your sample application.
Use the following command to stop load generation:
kubectl -n sample-application run strzal --image=djbingham/curl \
  --restart='OnFailure' -i --tty --rm --command -- curl \
  http://locust-master:8089/stop
Go to the Kubernetes Operate section for detailed instructions.
Next Steps
- Instrument Your Application
- Use OpenTelemetry Operator for automatic instrumentation (if your applications are in k8s)
- Tutorials