[exporter/kafka] add compression level to kafka configuration #39647


Merged

Conversation

vidam-io

Description

Add support for configuring Kafka producer compression level.

Both Kafka and the Sarama library support setting compression level, but the OpenTelemetry Collector currently does not expose this option.
This change adds a compression_level configuration field to the Kafka exporter, allowing users to control compression level explicitly.
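
For context, a minimal Sarama sketch (not the PR's actual code) showing the producer knob the new field maps onto; the chosen codec and level here are illustrative:

```go
package main

import (
	"log"

	"github.com/IBM/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	// The codec selected by the existing `compression` setting.
	cfg.Producer.Compression = sarama.CompressionGZIP
	// The knob the new `compression_level` setting exposes. Sarama's
	// sentinel sarama.CompressionLevelDefault (-1000) means "use the
	// codec's default level"; gzip accepts levels 1-9.
	cfg.Producer.CompressionLevel = 6
	if err := cfg.Validate(); err != nil {
		log.Fatal(err)
	}
}
```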

Link to tracking issue

No tracking issue provided.

Testing

  • Added unit tests

Documentation

  • Updated configuration documentation alongside the existing compression field to describe the usage of compression_level.

@vidam-io vidam-io requested review from MovieStoreGuy and a team as code owners April 25, 2025 06:13

linux-foundation-easycla bot commented Apr 25, 2025

CLA Signed

The committers listed above are authorized under a signed CLA.

@github-actions github-actions bot requested review from axw and pavolloffay April 25, 2025 06:13
@vidam-io vidam-io closed this Apr 25, 2025
@vidam-io vidam-io deleted the add-compression-level-in-kafka-producer branch April 25, 2025 06:19
@vidam-io vidam-io restored the add-compression-level-in-kafka-producer branch April 28, 2025 06:20
@vidam-io vidam-io reopened this Apr 28, 2025

@axw axw left a comment

Thanks for the PR! Looks good overall, but I would like more consistency with confighttp, and to not expose Sarama implementation details.

@@ -88,6 +88,7 @@ The following settings can be optionally configured:
- `max_message_bytes` (default = 1000000) the maximum permitted size of a message in bytes
- `required_acks` (default = 1) controls when a message is regarded as transmitted. https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html#acks
- `compression` (default = 'none') the compression used when producing messages to kafka. The options are: `none`, `gzip`, `snappy`, `lz4`, and `zstd` https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html#compression-type
- `compression_level` (default = -1000) the compression level to use when producing messages to Kafka; it is a measure of compression quality. Applies only to `gzip` and `zstd`; the default value means each codec uses its own default level.
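
As a rough sketch of how such settings might be declared on the Go side (struct, field, and tag names here are illustrative, not the exporter's actual config):

```go
package kafkaexporter

// ProducerConfig is an illustrative stand-in for the exporter's real
// producer settings struct, which may name things differently.
type ProducerConfig struct {
	MaxMessageBytes int    `mapstructure:"max_message_bytes"`
	RequiredAcks    int    `mapstructure:"required_acks"`
	Compression     string `mapstructure:"compression"`
	// Only honored for gzip and zstd; the default sentinel leaves
	// each codec at its own default level.
	CompressionLevel int `mapstructure:"compression_level"`
}
```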

@vidam-io vidam-io Apr 28, 2025

@axw
Thank you for the great feedback.
I have updated the PR to incorporate your suggestions:

  • Aligned the configuration behavior with confighttp for consistency. (Note: I couldn’t directly use configcompression.Type for unmarshaling.)
  • Instead of exposing -1000, I now expose -1 via configcompression and internally map it to -1000 at the final stage (see the sketch after this list).
  • Documented that the underlying library only supports the “fast” level for lz4 compression and that the other types follow confighttp’s behavior.
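
A sketch of that mapping, assuming a helper along these lines (the function name is illustrative, not the merged code):

```go
package kafkaexporter

import "github.com/IBM/sarama"

// saramaCompressionLevel translates the user-facing default (-1,
// matching confighttp) into Sarama's internal sentinel (-1000).
func saramaCompressionLevel(level int) int {
	if level == -1 {
		return sarama.CompressionLevelDefault // -1000
	}
	return level
}
```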


Perfect, thank you :)


@axw axw left a comment

Thanks for the changes, LGTM!

@vidam-io vidam-io closed this Apr 29, 2025
@vidam-io vidam-io force-pushed the add-compression-level-in-kafka-producer branch from 5cc2bc8 to 054557a April 29, 2025 01:29
@vidam-io vidam-io reopened this Apr 29, 2025

axw commented Apr 29, 2025

@vidam-io in case you're reopening to try and get CI to run: I think a maintainer will need to approve, since this is your first contribution.

@MovieStoreGuy

Should the receivers also be updated within the same PR?

@vidam-io

@MovieStoreGuy No, only producers (exporters) configure compression settings.

@atoulme atoulme added the `ready to merge` label (Code review completed; ready to merge by maintainers) Apr 29, 2025

vidam-io commented May 2, 2025

@atoulme I updated it!

vidam-io added 7 commits May 2, 2025 13:44

vidam-io commented May 5, 2025

@atoulme I think I’ve fixed the CI checks.

vidam-io added 2 commits May 7, 2025 10:00
@dmitryax dmitryax merged commit 5b6b522 into open-telemetry:main May 7, 2025
173 checks passed
@github-actions github-actions bot added this to the next release milestone May 7, 2025
dragonlord93 pushed a commit to dragonlord93/opentelemetry-collector-contrib that referenced this pull request May 23, 2025
dd-jasminesun pushed a commit to DataDog/opentelemetry-collector-contrib that referenced this pull request Jun 23, 2025