This page is relevant for both SigNoz Cloud and self-hosted SigNoz editions.

Infinite Retention of OpenTelemetry Data in AWS S3

Overview

Backing up telemetry data for long durations is common practice for compliance and audit purposes. You can use the AWS S3 Exporter to retain OpenTelemetry data for as long as you need.

How It Works

The OpenTelemetry Collector supports multiple exporters per pipeline. Adding awss3 alongside the existing otlp exporter means data fans out to both SigNoz and S3 independently. If one exporter fails, the other is unaffected. For a full overview of Collector configuration, see the OpenTelemetry Collector Configuration page.
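
The fan-out described above can be sketched in a minimal config; the receiver setup and SigNoz endpoint here are placeholders for whatever your existing configuration uses:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  # Existing exporter sending data to SigNoz
  otlp:
    endpoint: '<signoz-endpoint>:4317'
  # Additional exporter writing the same data to S3
  awss3/logs:
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'

service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlp, awss3/logs]  # each exporter receives the data independently
```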

Limitations

  • Data in AWS S3 is not accessible in the SigNoz UI
  • You need an external tool such as Amazon Athena to query the data

Prerequisites

  • An AWS S3 bucket to receive the data
  • AWS credentials available to the Collector with s3:PutObject permission on the bucket (or an IAM role it can assume)
  • The otelcol-contrib distribution of the OpenTelemetry Collector, which includes the awss3 exporter

Adding AWS S3 Exporter

Add the following snippet to your existing otel-collector-config.yaml. This example exports logs to S3:

otel-collector-config.yaml
exporters:
  awss3/logs:
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'
      s3_prefix: 'logs'
      compression: gzip

Replace the placeholders:

  • <region>: your AWS region (e.g., us-east-1)
  • <bucket-name>: your S3 bucket name

Then add awss3/logs to your existing logs pipeline's exporters list:

otel-collector-config.yaml
service:
  pipelines:
    logs:
      exporters: [otlp, awss3/logs]

Exporting All Signals (Logs, Metrics, and Traces)

To back up all three signals, use named instances with different prefixes. Append these exporters and update each pipeline in your existing config:

otel-collector-config.yaml
exporters:
  awss3/logs:
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'
      s3_prefix: 'logs'
      compression: gzip

  awss3/metrics:
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'
      s3_prefix: 'metrics'
      compression: gzip

  awss3/traces:
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'
      s3_prefix: 'traces'
      compression: gzip

Then add each exporter to the corresponding pipeline in your existing service.pipelines section:

otel-collector-config.yaml
service:
  pipelines:
    logs:
      exporters: [otlp, awss3/logs]
    metrics:
      exporters: [otlp, awss3/metrics]
    traces:
      exporters: [otlp, awss3/traces]

Named instances (awss3/logs, awss3/metrics, awss3/traces) allow a different s3_prefix per signal, so each signal is written under its own path in the bucket.

Additional Configuration Options

The exporter supports several optional fields for production use:

Top-level options on the exporter (siblings of s3uploader):

  • marshaler: output format. Default: otlp_json (queryable with Amazon Athena). otlp_proto is more compact but harder to query.

Options nested under s3uploader:

  • compression: none (default), gzip, or zstd. Use gzip or zstd in production to reduce storage costs.
  • s3_partition_format: strftime-based path for time partitioning. Default: year=%Y/month=%m/day=%d/hour=%H/minute=%M
  • storage_class: S3 storage class. Default: STANDARD. Valid values: STANDARD, STANDARD_IA, ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE. Use STANDARD_IA or INTELLIGENT_TIERING for archival data to reduce costs (see AWS S3 storage classes).
  • role_arn: IAM role ARN to assume via STS, instead of static credentials.
  • endpoint: custom S3-compatible endpoint URL for MinIO, LocalStack, or other stores.
  • s3_force_path_style: set to true when using non-AWS S3-compatible stores.
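
The s3_partition_format value maps directly to strftime tokens, so you can preview the key path the exporter will produce; a quick Python sketch (the timestamp is illustrative):

```python
from datetime import datetime, timezone

# Default partition format used by the exporter (strftime tokens)
fmt = "year=%Y/month=%m/day=%d/hour=%H/minute=%M"

ts = datetime(2024, 1, 15, 10, 30, tzinfo=timezone.utc)
print(ts.strftime(fmt))  # year=2024/month=01/day=15/hour=10/minute=30
```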

Example with all optional fields:

otel-collector-config.yaml
exporters:
  awss3/logs:
    marshaler: otlp_json
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'
      s3_prefix: 'logs'
      s3_partition_format: 'year=%Y/month=%m/day=%d/hour=%H/minute=%M'
      compression: gzip
      storage_class: STANDARD_IA
      role_arn: '<iam-role-arn>'
      endpoint: '<s3-compatible-endpoint>'
      s3_force_path_style: true

Replace the placeholders:

  • <iam-role-arn>: ARN of the IAM role to assume (e.g., arn:aws:iam::123456789012:role/otel-s3-writer)
  • <s3-compatible-endpoint>: endpoint URL for non-AWS S3-compatible stores (e.g., http://localhost:9000 for MinIO)

For the full list of configuration options, see the AWS S3 Exporter README.

Validate in S3

Data exported to S3 does not appear in the SigNoz UI, so verify delivery directly in S3. After deploying your updated Collector config, check that objects are appearing in your bucket:

aws s3 ls s3://<bucket-name>/logs/ --recursive | head

You should see keys following the partition pattern:

logs/year=2024/month=01/day=15/hour=10/minute=30/<unique-id>.json.gz

The .json.gz extension comes from the default marshaler: otlp_json combined with compression: gzip. Changing either option changes the extension.
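
To inspect a downloaded object locally, you can decompress and parse it with standard tooling. A minimal Python sketch, assuming the default otlp_json marshaler with gzip compression; the count_log_records helper is hypothetical, and the resourceLogs/scopeLogs/logRecords keys follow the OTLP JSON encoding:

```python
import gzip
import json

def count_log_records(path):
    """Count OTLP log records in a gzip-compressed otlp_json object from S3."""
    with gzip.open(path, "rt") as f:
        payload = json.load(f)
    # OTLP JSON nests records as resourceLogs -> scopeLogs -> logRecords
    return sum(
        len(scope.get("logRecords", []))
        for resource in payload.get("resourceLogs", [])
        for scope in resource.get("scopeLogs", [])
    )
```

Download an object first (for example with aws s3 cp), then pass the local path to the helper.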

If no objects appear, check the Collector logs for S3 permission errors or configuration issues.

Troubleshooting

No objects appearing in S3

  • Verify the exporter is listed in the pipeline's exporters array, not just defined in the exporters: section.
  • Check that the Collector is receiving data on the pipeline (e.g., logs are flowing through the logs pipeline).

AccessDenied or NoCredentialProviders in Collector logs

  • Verify your AWS credentials are available to the Collector process (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY environment variables, or an attached IAM role).
  • Confirm the IAM policy includes s3:PutObject permission on the target bucket.
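
A minimal IAM policy sketch granting that permission; the bucket name is a placeholder, and your environment may require additional statements:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    }
  ]
}
```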

NoSuchBucket error in Collector logs

  • Verify the s3_bucket name and region match your actual S3 bucket.

Collector fails to start with config error

  • Confirm you are using the otelcol-contrib distribution, which includes the awss3 exporter. The core otelcol distribution does not include it.
  • Check YAML indentation and that exporter names use the type/name format with a forward slash (e.g., awss3/logs, not awss3logs).

Next Steps

  • Query your archived data using Amazon Athena with the otlp_json format
  • Configure S3 lifecycle policies to transition older data to cheaper storage classes (e.g., Glacier)
  • Monitor exporter health by checking the Collector's built-in metrics or logs for export failures
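
For the lifecycle-policy step, a sketch of a rule that transitions objects under the logs/ prefix to Glacier after 90 days; the rule ID and day count are illustrative:

```json
{
  "Rules": [
    {
      "ID": "archive-otel-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```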

Get Help

If you need help with the steps in this topic, please reach out to us on SigNoz Community Slack.

If you are a SigNoz Cloud user, please use in product chat support located at the bottom right corner of your SigNoz instance or contact us at cloud-support@signoz.io.

Last updated: April 21, 2026
