[pkg/stanza/fileconsumer] Emit logs in batches #35455


Closed
andrzej-stencel opened this issue Sep 27, 2024 · 6 comments · Fixed by #36663
Comments

@andrzej-stencel
Member

andrzej-stencel commented Sep 27, 2024

Component(s)

pkg/stanza/fileconsumer

Is your feature request related to a problem? Please describe.

This issue is created as a result of discussion in #31074.

The Stanza adapter's LogEmitter has a 100-log buffer that is a source of data loss during non-graceful collector shutdown. One solution is to remove this buffer, but this would cause a severe performance impact (see #35454). This performance impact could be alleviated in case of the File Log receiver by implementing batching earlier in the Stanza pipeline - in the File consumer.

Describe the solution you'd like

From #31074 (comment):

Consider if it's feasible to modify the File consumer to send logs in batches. It seems natural (to me) for the File consumer to emit all logs read from one file in one pass as a single batch. The file consumer being able to emit batches of entries as opposed to single entries could alleviate the performance impact of removing batching from LogEmitter (possibly even improving the situation, as the "natural" batches could be larger than the current artificial 100 log limit). This would require the modification of all Stanza operators to be able to operate on batches of Stanza entries as opposed to single entry at a time. It seems a simple change to add a ProcessBatch method beside the existing Process method, with a for loop calling Process inside the ProcessBatch function.
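A minimal sketch of what that `ProcessBatch` companion might look like, assuming a simplified `Operator` interface; the names and the default loop-over-`Process` implementation are illustrative, not the actual pkg/stanza API:

```go
package operator

import (
	"context"

	"github.com/open-telemetry/opentelemetry-collector-contrib/pkg/stanza/entry"
)

// Operator is a simplified stand-in for the real Stanza operator interface,
// which has additional methods.
type Operator interface {
	// Process handles a single entry (the existing behavior).
	Process(ctx context.Context, e *entry.Entry) error
}

// ProcessBatch is the proposed companion to Process: a trivial default
// implementation just loops over the batch and delegates to Process,
// so existing operators keep working unchanged.
func ProcessBatch(ctx context.Context, op Operator, entries []*entry.Entry) error {
	for _, e := range entries {
		if err := op.Process(ctx, e); err != nil {
			return err
		}
	}
	return nil
}
```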

and further in #31074 (comment):

In my opinion this could be a really nice improvement regardless of any other changes. The data coming out of the receiver will be more organized and intuitive. In terms of implementation, I think we can add a []string buffer to the Reader struct, and then emit the entire buffer, either when it reaches a max size (maybe configurable at a later time) or whenever we hit EOF. We can maintain a tentative offset which is saved only when this buffer is actually emitted.
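A hedged sketch of that buffering idea follows. The comment above suggests a `[]string` buffer; the sketch uses `[][]byte` since fileconsumer tokens are byte slices, and the field and method names (`tokenBuf`, `maxBatchSize`, `flush`) are made up for illustration rather than taken from the actual `Reader`:

```go
package fileconsumer

// Reader sketch: tokens are collected into a buffer and emitted as one batch,
// either when the buffer reaches a maximum size or when EOF is reached.
type Reader struct {
	tokenBuf      [][]byte                    // tokens read but not yet emitted
	maxBatchSize  int                         // flush when the buffer reaches this size
	pendingOffset int64                       // tentative offset, persisted only on flush
	emit          func(tokens [][]byte) error // downstream callback
}

func (r *Reader) appendToken(token []byte, newOffset int64) error {
	r.tokenBuf = append(r.tokenBuf, token)
	r.pendingOffset = newOffset
	if len(r.tokenBuf) >= r.maxBatchSize {
		return r.flush()
	}
	return nil
}

// flush is also called when EOF is reached. The offset is only recorded after
// a successful emit, so an unflushed batch would be re-read after a crash
// rather than lost.
func (r *Reader) flush() error {
	if len(r.tokenBuf) == 0 {
		return nil
	}
	if err := r.emit(r.tokenBuf); err != nil {
		return err
	}
	r.tokenBuf = r.tokenBuf[:0]
	// Persisting r.pendingOffset to the checkpoint store is elided in this sketch.
	return nil
}
```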

Describe alternatives you've considered

  • Remove buffering in LogEmitter and live with the performance impact it brings.
  • Instead of removing the buffering, have it persisted so that the data is not lost during non-graceful shutdown.
  • Change the logic of marking logs as sent, so that logs are only marked as sent when they are actually successfully sent out to next consumer in the collector pipeline.

Additional context

See #31074 (comment) and the following comments.

andrzej-stencel added the `enhancement` (New feature or request) and `needs triage` (New item requiring triage) labels Sep 27, 2024

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@andrzej-stencel
Member Author

Removing needs triage label as this was discussed with the code owner.

@andrzej-stencel andrzej-stencel self-assigned this Oct 9, 2024
andrzej-stencel added a commit to andrzej-stencel/opentelemetry-collector-contrib that referenced this issue Oct 18, 2024
This refactors the `Reader::ReadToEnd` method
by separating reading the file's header from reading the file's contents.
This results in very similar code in `readHeader` and `readContents` methods,
which was previously deduplicated at the cost of slightly higher complexity.
The bug could be fixed without separating header reading from contents reading,
but I hope this separation will make it easier to implement content batching
in the Reader (open-telemetry#35455).
Content batching was my original motivation for these code changes.
I only discovered the problem with record counting when reading the code.
@andrzej-stencel
Member Author

The pull request #35870 is a first step: it refactors the code of the Reader to make the introduction of batching easier.

djaglowski pushed a commit that referenced this issue Oct 28, 2024
#### Description

Fixes #35869 by refactoring the `Reader::ReadToEnd` method.

This refactors the `Reader::ReadToEnd` method by separating reading the
file's header from reading the file's contents.
This results in very similar code in `readHeader` and `readContents`
methods, which was previously deduplicated at the cost of slightly
higher complexity.
The bug could be fixed without separating header reading from contents
reading, but I hope this separation will make it easier to implement
content batching in the Reader
(#35455).
Content batching was my original motivation for these code changes.
I only discovered the problem with record counting when reading the
code.

#### Link to tracking issue

Fixes
#35869

#### Testing

In the first commit I have added tests that document the erroneous
behavior. In the second commit I have fixed the bug and corrected the
tests.

#### Documentation

Added changelog entry.
jpbarto pushed a commit to jpbarto/opentelemetry-collector-contrib that referenced this issue Oct 29, 2024
@andrzej-stencel
Member Author

@andrzej-stencel
Member Author

andrzej-stencel commented Nov 11, 2024

sbylica-splunk pushed a commit to sbylica-splunk/opentelemetry-collector-contrib that referenced this issue Dec 17, 2024

github-actions bot commented Feb 3, 2025

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Feb 3, 2025
andrzej-stencel added a commit that referenced this issue Mar 3, 2025
#### Description

Modifies the File consumer to emit logs in batches as opposed to sending
each log individually through the Stanza pipeline and on to the Log
Emitter.

Here are the changes introduced:

- 6b4c9fe: Changed the `Reader::ReadToEnd` method in File consumer to collect the tokens scanned from the file into batches. At this point, the Reader still emits each token individually, as the `emit.Callback` function only accepts a single token.
- c206995: Changed the `emit.Callback` function signature to accept a slice of tokens as opposed to a single token, and changed the Reader to emit a batch of tokens in one request (see the sketch after this list). At this point, the batches are still split into individual tokens inside the `emit` function, because the Stanza operators can only process one entry at a time.
- aedda3a: Added a `ProcessBatch` method to Stanza operators and used it in the `emit` function. At this point, the batch of tokens is translated to a batch of entries and passed to the Log Emitter as a whole batch. The batch is still split in the Log Emitter, which calls `consumeFunc` for each entry in a loop.
- 13d6054: Changed the LogEmitter to add the whole batch to its buffer, as opposed to adding entries one by one.
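As a rough illustration of the `emit.Callback` signature change from the second item above (the type and parameter names are approximate, not the exact code in `pkg/stanza/fileconsumer`):

```go
package emit

import "context"

// Before: the Reader invoked the callback once per scanned token.
type Callback func(ctx context.Context, token []byte, attrs map[string]any) error

// After: the Reader invokes the callback once per batch of tokens, so the
// adapter-side callback can convert the whole batch to entries and hand it
// to the operators' ProcessBatch in a single call.
type BatchCallback func(ctx context.Context, tokens [][]byte, attrs map[string]any) error
```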

**Slice of entries `[]entry.Entry` vs. slice of pointers `[]*entry.Entry`**

I considered whether the `ProcessBatch` method in the `Operator`
interface should accept a slice of structs `[]entry.Entry` or a slice of
pointers `[]*entry.Entry`. I ran some tests (similar to
#35454)
and they showed a 7-10% performance loss when using a slice of structs
vs. a slice of pointers. That's why I decided to use the slice of
pointers `[]*entry.Entry`.
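For reference, a self-contained micro-benchmark along these lines (not the actual test from #35454; the `Entry` type here is a small stand-in for `entry.Entry`) comparing batch collection as a slice of structs, which copies each entry, against a slice of pointers:

```go
package batch

import "testing"

// Entry is a stand-in with a few fields, so copying the struct by value is
// visibly more expensive than copying a pointer.
type Entry struct {
	Body       string
	Attributes map[string]any
	Severity   int
}

func BenchmarkBatchOfStructs(b *testing.B) {
	src := make([]Entry, 100)
	for i := range src {
		src[i] = Entry{Body: "example log line", Severity: 9}
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		batch := make([]Entry, 0, len(src))
		for j := range src {
			batch = append(batch, src[j]) // copies the whole struct
		}
		_ = batch
	}
}

func BenchmarkBatchOfPointers(b *testing.B) {
	src := make([]Entry, 100)
	for i := range src {
		src[i] = Entry{Body: "example log line", Severity: 9}
	}
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		batch := make([]*Entry, 0, len(src))
		for j := range src {
			batch = append(batch, &src[j]) // copies only a pointer
		}
		_ = batch
	}
}
```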

#### Link to tracking issue

- Fixes
#35455

#### Testing

No changes in tests. The goal is for the functionality to not change and
for performance to not decrease.

I have added a new benchmark in a separate PR
#38054
that should be helpful in assessing the performance impact of this
change.

#### Documentation

These are internal changes, no user documentation needs changing.
djaglowski pushed a commit that referenced this issue Mar 3, 2025 (#38171)

#### Description

Related to
#38054.

The File Log receiver benchmark has a scope that is larger than both the
[File consumer
benchmark](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/a826350bab9388e7ab8179f1e02c68177d83f0b4/pkg/stanza/fileconsumer/benchmark_test.go)
and the [File input
benchmark](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/a826350bab9388e7ab8179f1e02c68177d83f0b4/pkg/stanza/operator/input/file/benchmark_test.go).
Compared to the File input benchmark, the scope of File Log receiver
benchmark includes:
- translation of Stanza entries to pdata logs
([converter.ConvertEntries](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/a826350bab9388e7ab8179f1e02c68177d83f0b4/pkg/stanza/adapter/converter.go#L20)).
- batching of logs in
[LogEmitter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/a826350bab9388e7ab8179f1e02c68177d83f0b4/pkg/stanza/operator/helper/emitter.go#L103).

This new benchmark should help us measure the performance impact of
[removing batching from
LogEmitter](#35456)
after it is [added in File
consumer](#35455).

#### Link to tracking issue

- Needed for
#35456