# [pkg/stanza/fileconsumer] Emit logs in batches #35455
Comments
Pinging code owners. See Adding Labels via Comments if you do not have permissions to add labels yourself.
The pull request #35870 is a first step: it refactors the code of the Reader to make the introduction of batching easier.
#### Description

Fixes #35869 by refactoring the `Reader::ReadToEnd` method: it separates reading the file's header from reading the file's contents. This results in very similar code in the `readHeader` and `readContents` methods, which was previously deduplicated at the cost of slightly higher complexity. The bug could be fixed without separating header reading from contents reading, but I hope this separation will make it easier to implement content batching in the Reader (#35455). Content batching was my original motivation for these code changes; I only discovered the problem with record counting when reading the code.

#### Link to tracking issue

Fixes #35869

#### Testing

In the first commit I added tests that document the erroneous behavior. In the second commit I fixed the bug and corrected the tests.

#### Documentation

Added changelog entry.
The PR that actually introduces batching:
#### Description

Modifies the File consumer to emit logs in batches, as opposed to sending each log individually through the Stanza pipeline and on to the Log Emitter. Here are the changes introduced:

- 6b4c9fe Changed the `Reader::ReadToEnd` method in File consumer to collect the tokens scanned from the file into batches. At this point, the Reader still emits each token individually, as the `emit.Callback` function only accepts a single token.
- c206995 Changed the `emit.Callback` function signature to accept a slice of tokens as opposed to a single token, and changed the Reader to emit a batch of tokens in one request. At this point, the batches are still split into individual tokens inside the `emit` function, because the Stanza operators can only process one entry at a time.
- aedda3a Added a `ProcessBatch` method to Stanza operators and used it in the `emit` function. At this point, the batch of tokens is translated to a batch of entries and passed to the Log Emitter as a whole batch. The batch is still split in the Log Emitter, which calls `consumeFunc` for each entry in a loop.
- 13d6054 Changed the LogEmitter to add the whole batch to its buffer, as opposed to adding entries one by one.

**Slice of entries `[]entry.Entry` vs. slice of pointers `[]*entry.Entry`**

I considered whether the `ProcessBatch` method in the `Operator` interface should accept a slice of structs `[]entry.Entry` or a slice of pointers `[]*entry.Entry`. I ran some tests (similar to #35454) and they showed a 7-10% performance loss when using a slice of structs vs. a slice of pointers. That's why I decided to use the slice of pointers `[]*entry.Entry`.

#### Link to tracking issue

- Fixes #35455

#### Testing

No changes in tests. The goal is for the functionality not to change and for performance not to decrease. I have added a new benchmark in a separate PR #38054 that should be helpful in assessing the performance impact of this change.

#### Documentation

These are internal changes; no user documentation needs changing.
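The commit sequence above can be sketched in miniature. The snippet below is a hypothetical simplification, not the real pkg/stanza API: `Entry`, `Callback`, `Operator`, and `logEmitter` are stand-ins that only demonstrate the shape of the change, i.e. a callback that receives a slice of tokens and a `ProcessBatch` method (taking `[]*Entry`, per the pointer-slice decision above) that forwards the whole batch in one call.

```go
package main

import "fmt"

// Entry is a stand-in for stanza's entry.Entry.
type Entry struct{ Body string }

// Callback mirrors the batched emit.Callback shape: a slice of
// tokens per call, instead of one token per call.
type Callback func(tokens [][]byte)

// Operator is a stand-in for a batch-aware stanza operator.
type Operator interface {
	ProcessBatch(entries []*Entry)
}

type logEmitter struct{ buffer []*Entry }

var _ Operator = (*logEmitter)(nil)

// ProcessBatch appends the whole batch to the buffer at once,
// rather than one entry per pipeline hop.
func (e *logEmitter) ProcessBatch(entries []*Entry) {
	e.buffer = append(e.buffer, entries...)
}

func main() {
	emitter := &logEmitter{}
	var emit Callback = func(tokens [][]byte) {
		// Translate the token batch into an entry batch,
		// then hand it to the operator as a whole.
		entries := make([]*Entry, 0, len(tokens))
		for _, t := range tokens {
			entries = append(entries, &Entry{Body: string(t)})
		}
		emitter.ProcessBatch(entries)
	}
	emit([][]byte{[]byte("a"), []byte("b"), []byte("c")})
	fmt.Println("buffered:", len(emitter.buffer))
}
```

Using `[]*Entry` here matches the PR's benchmarking result: passing pointers avoids copying each entry struct at every batch boundary.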
PR #38171:

#### Description

Related to #38054. The File Log receiver benchmark has a scope that is larger than both the [File consumer benchmark](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/a826350bab9388e7ab8179f1e02c68177d83f0b4/pkg/stanza/fileconsumer/benchmark_test.go) and the [File input benchmark](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/a826350bab9388e7ab8179f1e02c68177d83f0b4/pkg/stanza/operator/input/file/benchmark_test.go). Compared to the File input benchmark, the scope of the File Log receiver benchmark includes:

- translating Stanza entries to pdata logs ([converter.ConvertEntries](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/a826350bab9388e7ab8179f1e02c68177d83f0b4/pkg/stanza/adapter/converter.go#L20)),
- batching logs in [LogEmitter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/a826350bab9388e7ab8179f1e02c68177d83f0b4/pkg/stanza/operator/helper/emitter.go#L103).

This new benchmark should help us measure the performance impact of [removing batching from LogEmitter](#35456) after it is [added in File consumer](#35455).

#### Link to tracking issue

- Needed for #35456
#### Component(s)
pkg/stanza/fileconsumer
#### Is your feature request related to a problem? Please describe.
This issue is created as a result of discussion in #31074.
The Stanza adapter's LogEmitter has a 100-log buffer that is a source of data loss during non-graceful collector shutdown. One solution is to remove this buffer, but this would cause a severe performance impact (see #35454). This performance impact could be alleviated in case of the File Log receiver by implementing batching earlier in the Stanza pipeline - in the File consumer.
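To make the data-loss concern concrete, here is a minimal sketch of a LogEmitter-style buffer that flushes when a threshold is reached. It is an illustration only: the threshold of 100 comes from the description above, the type and field names are invented, and the real LogEmitter also flushes on a timer. The point is that any logs still sitting in the buffer when the collector dies non-gracefully are lost.

```go
package main

import "fmt"

// emitter is a hypothetical simplification of the LogEmitter buffer.
type emitter struct {
	batch   []string   // logs waiting to be flushed
	max     int        // flush threshold (100 in the description above)
	flushed [][]string // batches already handed downstream
}

// add buffers one log and flushes when the threshold is reached.
func (e *emitter) add(log string) {
	e.batch = append(e.batch, log)
	if len(e.batch) >= e.max {
		e.flush()
	}
}

// flush hands the pending batch downstream and resets the buffer.
func (e *emitter) flush() {
	if len(e.batch) == 0 {
		return
	}
	e.flushed = append(e.flushed, e.batch)
	e.batch = nil
}

func main() {
	e := &emitter{max: 100}
	for i := 0; i < 250; i++ {
		e.add(fmt.Sprintf("log-%d", i))
	}
	// 50 logs are still pending; a non-graceful shutdown loses them.
	fmt.Println("flushed batches:", len(e.flushed), "pending:", len(e.batch))
}
```

Batching earlier, in the File consumer, would let this buffer be removed without paying the per-entry pipeline cost measured in #35454, since logs would already arrive downstream as batches.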
#### Describe the solution you'd like
From #31074 (comment):
and further in #31074 (comment):
#### Describe alternatives you've considered
#### Additional context
See #31074 (comment) and the following comments.