Deploying to AWS

tip

The easiest way to run SigNoz is to use SigNoz Cloud - no installation, maintenance, or scaling needed.

New users get 30 days of unlimited access to all features. Click here to sign up.

First, we need to set up a Kubernetes cluster (see the official AWS documentation for more info). Follow the "Managed nodes - Linux" guide.
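If you do not yet have a cluster, one common way to provision one is with eksctl. The following config is an illustrative sketch only — the cluster name, region, and node sizing are placeholders you should adjust:

```yaml
# Illustrative eksctl ClusterConfig -- name, region, and sizing are examples.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: signoz-demo        # hypothetical cluster name
  region: us-east-1

managedNodeGroups:
  - name: ng-1
    instanceType: m5.xlarge   # 4 vCPU / 16 GB RAM, in line with the recommended sizing
    desiredCapacity: 2
```

You would then create the cluster with `eksctl create cluster -f cluster.yaml`.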

Follow the steps on this page to install SigNoz on Kubernetes with Helm.

The SigNoz Helm chart will install the following components into your Kubernetes cluster:

  • Query Service (backend service)
  • Web UI (frontend)
  • OpenTelemetry Collectors
  • Alertmanager
  • ClickHouse chart (datastore)
  • K8s-Infra chart (k8s infra metrics/logs collectors)

Prerequisites

  • An EKS cluster with managed nodes (Linux). Fargate is not officially supported

  • Kubernetes version >= 1.22

  • x86-64/amd64 workloads; the arm64 architecture is not currently supported

  • Helm version >= 3.8

  • kubectl access to your cluster

  • The following table describes the hardware requirements needed to install SigNoz on Kubernetes:

    | Component | Minimal Requirements | Recommended |
    |-----------|----------------------|-------------|
    | Memory    | 8 GB                 | 16 GB       |
    | CPU       | 4 cores              | 8 cores     |
    | Storage   | 30 GB                | 80 GB       |
  • Suggestion: if you want to use your own custom storage class for PVCs, set the global.storageClass configuration to the desired storage class.

  • For Kubernetes version 1.23 and above, you must install the Amazon EBS CSI driver and grant the relevant volume permissions to the IAM role assigned to the Amazon EKS cluster. To learn more, refer to the Amazon EBS CSI migration documentation.
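With clickhouse.installCustomStorageClass enabled, the chart creates the expandable storage class for you. If you prefer to manage it yourself instead, a manifest along the following lines should work — the class name gp2-resizable is the one referenced in the override values on this page, and the provisioner assumes the Amazon EBS CSI driver is installed:

```yaml
# Sketch of a resizable EBS-backed StorageClass (create only if you are not
# letting the chart install it).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-resizable
provisioner: ebs.csi.aws.com      # requires the Amazon EBS CSI driver
parameters:
  type: gp2
allowVolumeExpansion: true        # lets PVCs be resized later
volumeBindingMode: WaitForFirstConsumer
```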

Chart configuration

Here's the minimal required override-values.yaml that we'll be using later. You can find an overview of the parameters that can be configured during installation under chart configuration.

global:
  storageClass: gp2-resizable
  cloud: aws

clickhouse:
  installCustomStorageClass: true
info

To list the storage classes in your Kubernetes cluster: kubectl get storageclass.

Install SigNoz on Kubernetes with Helm

  1. Add the SigNoz Helm repository to your client with the name signoz by running the following command:
helm repo add signoz https://charts.signoz.io
  2. Verify that the repository is accessible to the Helm CLI by entering the following command:
helm repo list
  3. Use the kubectl create ns command to create a new namespace. SigNoz recommends you use platform for your new namespace:
kubectl create ns platform
  4. Run the following command to install the chart with the release name my-release in the platform namespace:
helm --namespace platform install my-release signoz/signoz -f override-values.yaml

Output:

NAME: my-release
LAST DEPLOYED: Mon May 23 20:34:55 2022
NAMESPACE: platform
STATUS: deployed
REVISION: 1
NOTES:
1. You have just deployed SigNoz cluster:

- frontend version: '0.8.0'
- query-service version: '0.8.0'
- alertmanager version: '0.23.0-0.1'
- otel-collector version: '0.43.0-0.1'
- otel-collector-metrics version: '0.43.0-0.1'

Note that the above command installs the latest stable version of SigNoz.

(Optional) To install a different version, you can use the --set flag to specify the version you wish to install. The following example command installs SigNoz version 0.8.0:

helm --namespace platform install my-release signoz/signoz \
  --set frontend.image.tag="0.8.0" \
  --set queryService.image.tag="0.8.0"
info
  • If you use the --set flag, ensure that you specify the same versions for the frontend and queryService images. Specifying different versions could lead the SigNoz cluster to behave abnormally.
  • Do not use the latest or develop tags in a production environment. Specifying these tags could install different versions of SigNoz on your cluster and could lead to data loss.
  5. You can access SigNoz by setting up port forwarding and browsing to the specified port. The following kubectl port-forward example command forwards all connections made to localhost:3301 to <signoz-frontend-service>:3301:
export SERVICE_NAME=$(kubectl get svc --namespace platform -l "app.kubernetes.io/component=frontend" -o jsonpath="{.items[0].metadata.name}")

kubectl --namespace platform port-forward svc/$SERVICE_NAME 3301:3301
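Port forwarding is convenient for a quick look, but for longer-lived access you could instead expose the frontend through an AWS load balancer. Assuming the chart exposes a frontend.service.type value (verify the exact key against the chart configuration reference before relying on this), an override along these lines would provision one:

```yaml
# Hypothetical addition to override-values.yaml -- confirm the key against
# the chart's values reference.
frontend:
  service:
    type: LoadBalancer
```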

Verify the Installation

Using the kubectl -n platform get pods command, monitor the SigNoz deployment process. Wait for all the pods to reach the Running state:

kubectl -n platform get pods

Output:

NAME                                                        READY   STATUS    RESTARTS   AGE
chi-signoz-cluster-0-0-0                                    1/1     Running   0          8m21s
clickhouse-operator-8cff468-n5s99                           2/2     Running   0          8m55s
my-release-signoz-alertmanager-0                            1/1     Running   0          8m54s
my-release-signoz-frontend-78774f44d7-wl87p                 1/1     Running   0          8m55s
my-release-signoz-otel-collector-66c8c7dc9d-d8v5c           1/1     Running   0          8m55s
my-release-signoz-otel-collector-metrics-68bcfd5556-9tkgh   1/1     Running   0          8m55s
my-release-signoz-query-service-0                           1/1     Running   0          8m54s
my-release-zookeeper-0                                      1/1     Running   0          8m54s
info

By default, the retention period is set to 7 days for logs and traces, and 30 days for metrics. To change this, navigate to the General tab on the Settings page of the SigNoz UI.

For more details, refer to https://signoz.io/docs/userguide/retention-period.

(Optional) Install a Sample Application and Generate Tracing Data

Follow the steps in this section to install a sample application named HotROD and generate tracing data.

  1. Use the HotROD install script below to create a sample-application namespace and deploy the HotROD application in it:
curl -sL https://github.com/SigNoz/signoz/raw/develop/sample-apps/hotrod/hotrod-install.sh \
| HELM_RELEASE=my-release SIGNOZ_NAMESPACE=platform bash
  2. Using the kubectl -n sample-application get pods command, monitor the sample application pods. Wait for all the pods to reach the Running state:
kubectl -n sample-application get pods

Output:

NAME                            READY   STATUS    RESTARTS   AGE
hotrod-55bd58cc8d-mzxq8         1/1     Running   0          2m
locust-master-b65744bbf-l7v7n   1/1     Running   0          2m
locust-slave-688c86bcb7-ngx7w   1/1     Running   0          2m
  3. Use the following command to generate load:
kubectl --namespace sample-application run strzal --image=djbingham/curl \
  --restart='OnFailure' -i --tty --rm --command -- curl -X POST -F \
  'user_count=6' -F 'spawn_rate=2' http://locust-master:8089/swarm
  4. Browse to http://localhost:3301 to see the metrics and traces for your sample application.

  5. Use the following command to stop load generation:

kubectl -n sample-application run strzal --image=djbingham/curl \
  --restart='OnFailure' -i --tty --rm --command -- curl \
  http://locust-master:8089/stop

Go to the Kubernetes Operate section for detailed instructions.

Next Steps