[exporter/kafka] add compression level to kafka configuration #39647
Conversation
Thanks for the PR! Looks good overall, but I would like more consistency with confighttp, and to not expose Sarama implementation details
exporter/kafkaexporter/README.md
Outdated
@@ -88,6 +88,7 @@ The following settings can be optionally configured:
- `max_message_bytes` (default = 1000000) the maximum permitted size of a message in bytes
- `required_acks` (default = 1) controls when a message is regarded as transmitted. https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html#acks
- `compression` (default = 'none') the compression used when producing messages to kafka. The options are: `none`, `gzip`, `snappy`, `lz4`, and `zstd` https://docs.confluent.io/platform/current/installation/configuration/producer-configs.html#compression-type
- `compression_level` (default = '-1000') controls the compression quality when producing messages to kafka. Only used with `gzip` and `zstd`; the default value means no specific level is set, and the setting is ignored for other compression types.
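For illustration, the two options from the README snippet above could be combined in a collector configuration like this. This is a sketch only: the surrounding keys (`brokers`, `producer`) and the values are assumptions for the example, not taken from this diff.

```yaml
exporters:
  kafka:
    brokers: ["localhost:9092"]
    producer:
      compression: gzip
      # Only meaningful for gzip and zstd; other codecs have no level knob.
      compression_level: 5
```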
- Please make this consistent with the `compression_params` config in confighttp? See https://github.com/open-telemetry/opentelemetry-collector/tree/main/config/confighttp#client-configuration
- `-1000` is an implementation detail of Sarama, please don't expose that
- Rather than ignoring the level for algorithms other than gzip and zstd, fail fast so the user isn't wondering why their configuration is being ignored. You can use the configcompression types to validate the config, which take care of this already: https://github.com/open-telemetry/opentelemetry-collector/blob/f26c6bb527e3335fd3ae9174a8dc40a5fb5a0270/config/configcompression/compressiontype.go#L71
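The "fail fast" suggestion can be sketched as follows. This is a hypothetical stand-in for illustration, not the real configcompression API: a `compression_level` set for a codec that cannot honor it is rejected at config-validation time instead of being silently ignored.

```go
package main

import "fmt"

// validateCompression rejects invalid codec/level combinations up front,
// so the user is told immediately rather than having the level ignored.
// Hypothetical helper; the real validation lives in configcompression.
func validateCompression(codec string, level int) error {
	leveled := map[string]bool{"gzip": true, "zstd": true}
	switch codec {
	case "none", "gzip", "snappy", "lz4", "zstd":
	default:
		return fmt.Errorf("unsupported compression type %q", codec)
	}
	if level != 0 && !leveled[codec] {
		return fmt.Errorf("compression level %d is not supported by %q", level, codec)
	}
	return nil
}

func main() {
	fmt.Println(validateCompression("gzip", 5))   // accepted
	fmt.Println(validateCompression("snappy", 5)) // rejected, fail fast
}
```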
@axw
Thank you for the great feedback.
I have updated the PR to incorporate your suggestions:
- Aligned the configuration behavior with confighttp for consistency. (Note: I couldn’t directly use configcompression.Type for unmarshaling.)
- Instead of exposing -1000, I now expose -1 via configcompression, and internally map it to -1000 at the final stage.
- Documented that the underlying library only supports the “fast” level for lz4 compression and that other types follow confighttp's behavior.
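The second point above, keeping Sarama's sentinel out of the public config, can be sketched like this. The helper name is hypothetical; the constant mirrors Sarama's internal "use the codec default" value that the PR stopped exposing.

```go
package main

import "fmt"

// saramaDefaultLevel mirrors Sarama's internal sentinel for "use the
// codec's default compression level". The user-facing config keeps -1
// (consistent with configcompression) and translates only at the last
// step, so this constant never leaks into the public configuration.
const saramaDefaultLevel = -1000

// toSaramaLevel is a hypothetical helper illustrating the mapping
// described above; the actual exporter code may differ.
func toSaramaLevel(userLevel int) int {
	if userLevel == -1 {
		return saramaDefaultLevel
	}
	return userLevel
}

func main() {
	fmt.Println(toSaramaLevel(-1)) // -1000
	fmt.Println(toSaramaLevel(5))  // 5
}
```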
Perfect, thank you :)
Thanks for the changes, LGTM!
Force-pushed from 5cc2bc8 to 054557a
@vidam-io in case you're reopening to try and get CI to run: I think a maintainer will need to approve, since this is your first contribution.
Should the receivers also be updated within the same PR?
@MovieStoreGuy No, only producers (exporters) configure compression settings.
# Conflicts:
#	exporter/kafkaexporter/go.mod
#	exporter/kafkaexporter/go.sum
#	extension/observer/kafkatopicsobserver/go.mod
#	extension/observer/kafkatopicsobserver/go.sum
#	internal/kafka/go.mod
#	internal/kafka/go.sum
#	receiver/kafkametricsreceiver/go.mod
#	receiver/kafkametricsreceiver/go.sum
#	receiver/kafkareceiver/go.mod
#	receiver/kafkareceiver/go.sum
@atoulme I updated it!
@atoulme I think I solved the CI checks.
Signed-off-by: vidam.io <[email protected]>
[exporter/kafka] add compression level to kafka configuration (open-telemetry#39647)

#### Description
Add support for configuring Kafka producer compression level. Both Kafka and the Sarama library support setting compression level, but the OpenTelemetry Collector currently does not expose this option. This change adds a `compression_level` configuration field to the Kafka exporter, allowing users to control compression level explicitly.

#### Link to tracking issue
_No tracking issue provided._

#### Testing
- Added unit tests

#### Documentation
- Updated configuration documentation alongside the existing `compression` field to describe the usage of `compression_level`.

---------

Signed-off-by: vidam.io <[email protected]>
Co-authored-by: Antoine Toulme <[email protected]>