## Overview

Organizations commonly retain telemetry data for extended periods to meet compliance and audit requirements. You can use the AWS S3 Exporter to retain OpenTelemetry data for as long as you need.
## How It Works
The OpenTelemetry Collector supports multiple exporters per pipeline. Adding `awss3` alongside the existing `otlp` exporter means data fans out to both SigNoz and S3 independently. If one exporter fails, the other is unaffected. For a full overview of Collector configuration, see the OpenTelemetry Collector Configuration page.
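The fan-out pattern can be sketched as a minimal pipeline with two exporters. The receiver block and the `<signoz-endpoint>` placeholder below are illustrative assumptions, not part of the S3 exporter itself:

```yaml
# Minimal sketch: one logs pipeline fanning out to SigNoz (otlp) and S3 (awss3).
# Receiver setup and endpoint values are illustrative placeholders.
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  otlp:
    endpoint: '<signoz-endpoint>:4317'
  awss3:
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [otlp, awss3]
```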
## Limitations

- Data exported to AWS S3 is not accessible in the SigNoz UI
- You need a third-party tool such as Amazon Athena to query the data
## Prerequisites

- A running instance of the OpenTelemetry Collector with the `otelcol-contrib` distribution (if not running already, see the SigNoz installation guide or the OTel Collector installation docs for standalone setups)
- Access to an AWS S3 bucket, either using AWS credentials as environment variables or IAM roles for ECS tasks or EC2 instances (for more details, see the AWS credential configuration docs)
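For the environment-variable option, a minimal sketch of supplying credentials to a locally run Collector might look like this; the binary name and config filename are assumptions based on this guide:

```shell
# Hypothetical setup: export static AWS credentials, then start the Collector.
export AWS_ACCESS_KEY_ID='<access-key-id>'
export AWS_SECRET_ACCESS_KEY='<secret-access-key>'
export AWS_REGION='<region>'
./otelcol-contrib --config otel-collector-config.yaml
```

IAM roles attached to the ECS task or EC2 instance are generally preferable in production, since they avoid distributing static credentials.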
## Adding AWS S3 Exporter

Add the following snippet to your existing `otel-collector-config.yaml`. This example exports logs to S3:
```yaml
exporters:
  awss3/logs:
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'
      s3_prefix: 'logs'
      compression: gzip
```
Replace the placeholders:

- `<region>`: your AWS region (e.g., `us-east-1`)
- `<bucket-name>`: your S3 bucket name
Then add `awss3/logs` to your existing logs pipeline's `exporters` list:
```yaml
service:
  pipelines:
    logs:
      exporters: [otlp, awss3/logs]
```
## Exporting All Signals (Logs, Metrics, and Traces)
To back up all three signals, use named instances with different prefixes. Append these exporters and update each pipeline in your existing config:
```yaml
exporters:
  awss3/logs:
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'
      s3_prefix: 'logs'
      compression: gzip
  awss3/metrics:
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'
      s3_prefix: 'metrics'
      compression: gzip
  awss3/traces:
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'
      s3_prefix: 'traces'
      compression: gzip
```
Then add each exporter to the corresponding pipeline in your existing `service.pipelines` section:
```yaml
service:
  pipelines:
    logs:
      exporters: [otlp, awss3/logs]
    metrics:
      exporters: [otlp, awss3/metrics]
    traces:
      exporters: [otlp, awss3/traces]
```
- Named instances (`awss3/logs`, `awss3/metrics`, `awss3/traces`) allow a different `s3_prefix` per signal
## Additional Configuration Options
The exporter supports several optional fields for production use:
Top-level options on the exporter (siblings of `s3uploader`):

- `marshaler`: output format. Default: `otlp_json` (queryable with Amazon Athena). `otlp_proto` is more compact but harder to query.
Options nested under `s3uploader`:

- `compression`: `none` (default), `gzip`, or `zstd`. Use `gzip` or `zstd` in production to reduce storage costs.
- `s3_partition_format`: strftime-based path for time partitioning. Default: `year=%Y/month=%m/day=%d/hour=%H/minute=%M`
- `storage_class`: S3 storage class. Default: `STANDARD`. Valid values: `STANDARD`, `STANDARD_IA`, `ONEZONE_IA`, `INTELLIGENT_TIERING`, `GLACIER`, `DEEP_ARCHIVE`. Use `STANDARD_IA` or `INTELLIGENT_TIERING` for archival data to reduce costs (see AWS S3 storage classes).
- `role_arn`: IAM role ARN to assume via STS, instead of static credentials.
- `endpoint`: custom S3-compatible endpoint URL for MinIO, LocalStack, or other stores.
- `s3_force_path_style`: set to `true` when using non-AWS S3-compatible stores.
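Because `s3_partition_format` uses strftime tokens, you can preview the key prefix it will produce with a quick `date` command; the leading `logs/` here stands in for an assumed `s3_prefix` of `logs`:

```shell
# Preview the object-key prefix the default partition format yields for the
# current UTC time, combined with an assumed s3_prefix of 'logs'
date -u +'logs/year=%Y/month=%m/day=%d/hour=%H/minute=%M'
```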
Example with all optional fields:
```yaml
exporters:
  awss3/logs:
    marshaler: otlp_json
    s3uploader:
      region: '<region>'
      s3_bucket: '<bucket-name>'
      s3_prefix: 'logs'
      s3_partition_format: 'year=%Y/month=%m/day=%d/hour=%H/minute=%M'
      compression: gzip
      storage_class: STANDARD_IA
      role_arn: '<iam-role-arn>'
      endpoint: '<s3-compatible-endpoint>'
      s3_force_path_style: true
```
Replace the placeholders:

- `<iam-role-arn>`: ARN of the IAM role to assume (e.g., `arn:aws:iam::123456789012:role/otel-s3-writer`)
- `<s3-compatible-endpoint>`: endpoint URL for non-AWS S3-compatible stores (e.g., `http://localhost:9000` for MinIO)
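As an illustration of the S3-compatible options, a sketch pointing the exporter at a local MinIO instance might look like this; the bucket name is an assumption, and `localhost:9000` is MinIO's conventional default port:

```yaml
exporters:
  awss3/logs:
    s3uploader:
      region: 'us-east-1'               # a region value is typically still required by the AWS SDK
      s3_bucket: 'otel-archive'         # assumed local bucket name
      s3_prefix: 'logs'
      endpoint: 'http://localhost:9000' # default MinIO port
      s3_force_path_style: true         # required for most non-AWS stores
```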
For the full list of configuration options, see the AWS S3 Exporter README.
## Validate in S3
This data will not appear in the SigNoz UI — verify delivery directly in S3. After deploying your updated Collector config, check that objects are appearing in your bucket:
```shell
aws s3 ls s3://<bucket-name>/logs/ --recursive | head
```
You should see keys following the partition pattern:
```
logs/year=2024/month=01/day=15/hour=10/minute=30/<unique-id>.json.gz
```
The `.json.gz` extension comes from the default `marshaler: otlp_json` combined with `compression: gzip`. Changing either option changes the extension.
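To spot-check the contents of an archived object, you can stream the newest key and decompress it locally. This sketch assumes the default `otlp_json` plus `gzip` combination and the `logs` prefix used in this guide:

```shell
# Find the newest object under the logs/ prefix and print the start of its
# decompressed JSON payload (assumes gzip compression and otlp_json marshaling)
KEY=$(aws s3 ls s3://<bucket-name>/logs/ --recursive | sort | tail -n 1 | awk '{print $4}')
aws s3 cp "s3://<bucket-name>/$KEY" - | gunzip | head -c 500
```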
If no objects appear, check the Collector logs for S3 permission errors or configuration issues.
## Troubleshooting

### No objects appearing in S3

- Verify the exporter is listed in the pipeline's `exporters` array, not just defined in the top-level `exporters:` section.
- Check that the Collector is receiving data on the pipeline (e.g., logs are flowing through the `logs` pipeline).
### AccessDenied or NoCredentialProviders in Collector logs

- Verify your AWS credentials are available to the Collector process (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` environment variables, or an attached IAM role).
- Confirm the IAM policy includes the `s3:PutObject` permission on the target bucket.
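A minimal IAM policy granting the write permission this exporter needs might look like the following; the bucket name is a placeholder, and your organization's policy conventions may require additional statements:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    }
  ]
}
```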
### NoSuchBucket error in Collector logs

- Verify the `s3_bucket` name and `region` match your actual S3 bucket.
### Collector fails to start with config error

- Confirm you are using the `otelcol-contrib` distribution, which includes the `awss3` exporter. The core `otelcol` distribution does not include it.
- Check YAML indentation and that exporter names use the `type/name` format with a forward slash (e.g., `awss3/logs`, not `awss3logs`).
## Next Steps

- Query your archived data using Amazon Athena with the `otlp_json` format
- Configure S3 lifecycle policies to transition older data to cheaper storage classes (e.g., Glacier)
- Monitor exporter health by checking the Collector's built-in metrics or logs for export failures
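As a sketch of the lifecycle-policy idea, the following rule (usable with `aws s3api put-bucket-lifecycle-configuration`) would transition objects to Glacier after 90 days; the prefix and day count are illustrative choices, not recommendations:

```json
{
  "Rules": [
    {
      "ID": "archive-old-telemetry",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```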
## Get Help
If you need help with the steps in this topic, please reach out to us on SigNoz Community Slack.
If you are a SigNoz Cloud user, please use the in-product chat support located in the bottom-right corner of your SigNoz instance, or contact us at cloud-support@signoz.io.