This page is relevant for both SigNoz Cloud and self-hosted SigNoz editions.

Docker Swarm Collection Agent - Configure

The install guide provides a working config that collects host metrics, container metrics, container logs, and forwards application traces out of the box. This page explains what each component in that config does and how to customize it for your Swarm environment.

Prerequisites

  • OpenTelemetry Collector installed (see Installation guide)
  • SigNoz Cloud account or self-hosted SigNoz instance
  • Access to create and edit the collector configuration file

Send Data from Applications to the Collector

Configure the OTLP receiver so instrumented applications on the Swarm cluster can send traces, metrics, and logs to the collector.

config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

service:
  pipelines:
    traces:
      receivers: [otlp]
    metrics:
      receivers: [otlp]
    logs:
      receivers: [otlp]

Where and how to send data from your application

Set your app's OTLP endpoint to the collector (gRPC on 4317, HTTP on 4318). Because the collector runs in global mode with mode=host port publishing, each Swarm node exposes the collector on localhost:

# Same node (container on overlay network or host network)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
# Different node
export OTEL_EXPORTER_OTLP_ENDPOINT="http://<NODE_IP>:4317"
# Set the service name
export OTEL_RESOURCE_ATTRIBUTES="service.name=my-app"

See Instrumentation for language-specific setup.
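In a Swarm stack file, the same endpoint and service name can be set per service. A minimal sketch (the my-app service name and image are placeholders):

```yaml
# docker-stack.yml (fragment) - sets OTLP env vars for an app service
version: "3.8"
services:
  my-app:
    image: my-app:latest
    environment:
      # Collector is published with mode=host on every node
      OTEL_EXPORTER_OTLP_ENDPOINT: "http://localhost:4317"
      OTEL_RESOURCE_ATTRIBUTES: "service.name=my-app"
    deploy:
      replicas: 2
```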

Collect Host Metrics

Use the hostmetrics receiver to scrape CPU, memory, disk, filesystem, load, and network metrics from each Swarm node.

config.yaml
receivers:
  hostmetrics:
    collection_interval: 60s
    root_path: /hostfs
    scrapers:
      cpu: {}
      disk: {}
      load: {}
      filesystem: {}
      memory: {}
      network: {}
      paging: {}
      process:
        mute_process_name_error: true
        mute_process_exe_error: true
        mute_process_io_error: true
        mute_process_user_error: true
      processes: {}

service:
  pipelines:
    metrics:
      receivers: [otlp, hostmetrics]

You can enable only the scrapers you need or tune collection_interval. The host filesystem must be mounted at /hostfs as shown in the install guide.
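For example, a trimmed-down variant that keeps only the CPU, memory, and filesystem scrapers at a shorter interval (values are illustrative):

```yaml
receivers:
  hostmetrics:
    collection_interval: 30s   # scrape more frequently than the 60s used above
    root_path: /hostfs
    scrapers:
      cpu: {}
      memory: {}
      filesystem: {}
```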

Exclude filesystem mounts

The filesystem scraper in the hostmetrics receiver may report virtual or Docker overlay mounts that add noise. Use exclude_mount_points and exclude_fs_types to filter them out.

config.yaml
receivers:
  hostmetrics:
    collection_interval: 60s
    root_path: /hostfs
    scrapers:
      filesystem:
        exclude_mount_points:
          match_type: regexp
          mount_points:
            - /dev/.*
            - /proc/.*
            - /sys/.*
            - /run/.*
            - /var/lib/docker/.*
        exclude_fs_types:
          match_type: strict
          fs_types:
            - autofs
            - binfmt_misc
            - bpf
            - cgroup2
            - configfs
            - debugfs
            - devpts
            - devtmpfs
            - fusectl
            - hugetlbfs
            - mqueue
            - nsfs
            - overlay
            - proc
            - procfs
            - pstore
            - rpc_pipefs
            - securityfs
            - selinuxfs
            - squashfs
            - sysfs
            - tracefs

Collect Container Metrics

Use the docker_stats receiver to collect per-container resource metrics from the Docker Engine API. For full setup details and a pre-configured dashboard, see Docker Container Metrics.

config.yaml
receivers:
  docker_stats:
    endpoint: unix:///var/run/docker.sock
    metrics:
      # The following metrics are enabled by default in the docker_stats receiver
      container.cpu.utilization:
        enabled: true
      container.memory.percent:
        enabled: true
      container.memory.usage.limit:
        enabled: true
      container.memory.usage.total:
        enabled: true
      container.network.io.usage.rx_bytes:
        enabled: true
      container.network.io.usage.tx_bytes:
        enabled: true
      container.network.io.usage.rx_dropped:
        enabled: true
      container.network.io.usage.tx_dropped:
        enabled: true
      container.blockio.io_service_bytes_recursive:
        enabled: true

service:
  pipelines:
    metrics:
      receivers: [otlp, hostmetrics, docker_stats]

The Docker socket must be mounted as shown in the install guide.
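The receiver also accepts collection_interval and timeout settings; a shorter interval gives finer-grained container metrics at the cost of more Docker API calls. A sketch (values are illustrative; defaults vary by collector version):

```yaml
receivers:
  docker_stats:
    endpoint: unix:///var/run/docker.sock
    collection_interval: 30s   # how often container stats are scraped
    timeout: 20s               # per-scrape Docker API timeout
```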

Map container labels to metric attributes (Swarm)

Use container_labels_to_metric_labels on the docker_stats receiver to promote Docker Swarm labels into metric resource attributes. This enables filtering and grouping by stack, service, or task in SigNoz dashboards.

config.yaml
receivers:
  docker_stats:
    endpoint: unix:///var/run/docker.sock
    container_labels_to_metric_labels:
      com.docker.stack.namespace: docker.stack.name
      com.docker.swarm.node.id: docker.node.id
      com.docker.swarm.service.name: docker.service.name
      com.docker.swarm.task.name: docker.task.name

The keys are Docker label names (check with docker inspect <container> --format '{{json .Config.Labels}}'), and the values are the metric attribute names that appear in SigNoz.

Exclude containers from metrics

Use excluded_images on the docker_stats receiver to filter out the collector itself or infrastructure containers you don't want metrics for.

config.yaml
receivers:
  docker_stats:
    endpoint: unix:///var/run/docker.sock
    excluded_images:
      - opentelemetry-collector-contrib
      - /.*pause.*/

Values can be exact image names or regex patterns (wrapped in /).

Collect Container Logs

Use the filelog receiver to tail Docker container JSON log files. For full setup details, see Collecting Docker Container Logs.

config.yaml
receivers:
  filelog:
    include: [/var/lib/docker/containers/*/*-json.log]
    start_at: end
    include_file_name: false
    include_file_path: true
    operators:
      - id: container-parser
        type: container
        format: docker
        add_metadata_from_filepath: false

service:
  pipelines:
    logs:
      receivers: [otlp, filelog]

The container log directory must be mounted as shown in the install guide.
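Two knobs worth knowing on the filelog receiver: start_at: beginning replays existing log files on first start (useful for backfill, at the cost of re-reading everything when the receiver has no persisted state), and exclude skips files you don't want to tail. A sketch (note that Docker log files are named by container ID, so exclusions are per-ID paths):

```yaml
receivers:
  filelog:
    include: [/var/lib/docker/containers/*/*-json.log]
    # Log files are keyed by container ID, so exclusions are per-ID paths
    exclude: [/var/lib/docker/containers/<CONTAINER_ID>/*.log]
    start_at: beginning        # replay existing log files on first start
```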

Tune Batch and Memory Limiter

Adjust the batch processor for throughput and the memory_limiter processor for stability based on your workload. See Collector Configuration Best Practices for detailed guidance.

config.yaml
processors:
  batch:
    send_batch_size: 1000       # number of items per batch
    send_batch_max_size: 2048   # hard cap per batch
    timeout: 10s                # flush interval
  memory_limiter:
    check_interval: 5s
    limit_mib: 4000             # ~80% of container memory limit
    spike_limit_mib: 800

Set limit_mib to approximately 80% of the memory limit you assign to the collector container. If your Swarm nodes are resource-constrained, reduce both limit_mib and spike_limit_mib accordingly.
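On a node where the collector container is capped at, say, 512 MiB, the same ~80% rule gives roughly:

```yaml
processors:
  memory_limiter:
    check_interval: 2s        # check more often when the limit is small
    limit_mib: 400            # ~80% of a 512 MiB container limit
    spike_limit_mib: 100      # headroom for bursts, ~25% of limit_mib
```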

Resource Detection

The resource detection processor adds host and Docker metadata to your telemetry so you can filter and group by environment, hostname, or container in SigNoz.

config.yaml
processors:
  resourcedetection:
    detectors: [env, system, docker]
    timeout: 2s
    system:
      hostname_sources: [os]

service:
  pipelines:
    traces:
      processors: [resourcedetection]
    metrics:
      processors: [resourcedetection]
    logs:
      processors: [resourcedetection]

Custom resource attributes

You can attach custom resource attributes (for example, deployment.environment, team, or any custom key) to all telemetry sent by the collector by setting the OTEL_RESOURCE_ATTRIBUTES environment variable when creating the service:

docker service create ... --env OTEL_RESOURCE_ATTRIBUTES="deployment.environment=production,team=backend" ...
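If you prefer to keep these attributes in the collector config rather than the service environment, the resource processor can set them for every pipeline. A sketch (the attribute values are placeholders; remember to add resource to each pipeline's processors list):

```yaml
processors:
  resource:
    attributes:
      - key: deployment.environment
        value: production
        action: upsert
      - key: team
        value: backend
        action: upsert
```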

Scrape Prometheus Metrics from Containers

Use the Prometheus receiver to scrape Prometheus-compatible endpoints exposed by application containers.

config.yaml
receivers:
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        - job_name: my-app
          static_configs:
            - targets:
                - my-app-container:9090

service:
  pipelines:
    metrics:
      receivers: [prometheus]

Replace my-app-container:9090 with the hostname (or IP) and port of the Prometheus endpoint. If the collector and target share an overlay network, use the service or task name as hostname.
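Instead of static targets, the receiver's embedded Prometheus configuration also supports Docker Swarm service discovery, which enumerates targets via the Docker socket (already mounted for docker_stats). A sketch; the job name and relabeling are illustrative:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: swarm-tasks
          dockerswarm_sd_configs:
            - host: unix:///var/run/docker.sock
              role: tasks        # discover one target per running task
          relabel_configs:
            # Use the Swarm service name as the job label
            - source_labels: [__meta_dockerswarm_service_name]
              target_label: job
```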

Swarm Networking

The install guide uses --publish ... mode=host so each node exposes 4317 and 4318. Application containers can reach the collector via localhost when on the same node, or via the node IP when on a different node.

For overlay networks, attach the collector to your app's network so services can reach it by service name. See the optional overlay network step in the install guide.
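If both the collector and your applications are deployed as stacks, they can share an external overlay network so apps reach the collector by its service name. A sketch (the network and service names are placeholders):

```yaml
# docker-stack.yml (fragment) - app attached to the collector's overlay network
services:
  my-app:
    image: my-app:latest
    environment:
      OTEL_EXPORTER_OTLP_ENDPOINT: "http://signoz-collection-agent:4317"
    networks:
      - telemetry
networks:
  telemetry:
    external: true   # created beforehand, e.g. with: docker network create -d overlay --attachable telemetry
```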

Validate

  • Config and logs: Check the collector logs to confirm the config is valid and the collector is running without errors.

    docker service logs signoz-collection-agent
    
  • In SigNoz: Confirm data in Infrastructure Monitoring for host and container metrics, Logs for container logs, and Traces for application traces.

Apply Changes

After editing the config:

  1. Create a new Swarm config and update the service (see Updating the config in the install guide).
  2. View collector logs to confirm there are no configuration errors.


Last updated: February 27, 2026
