Description
A note for the community (please keep)
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Use Cases
Motivated by https://discord.com/channels/742820443487993987/746070591097798688/943246759294038017
The user in question has an `aws_s3` sink with a large batch timeout (as would be common):
```toml
[sinks.aws_s3_upload]
type = "aws_s3"
inputs = ["app_logs"]
bucket = "app-logs"
region = "us-west-2"

[sinks.aws_s3_upload.batch]
max_bytes = 500000000 # ~500 MB
timeout_secs = 1800   # 30 minutes
```
In this case, if a SIGTERM is sent to Vector, Vector won't try to flush the partially filled batch; it just waits out the 60-second shutdown deadline and then terminates, dropping whatever was buffered in the sink.
Attempted Solutions
No response
Proposal
During shutdown, Vector should attempt to:
- Stop sources (this already happens)
- Flush all data from sources and transforms through to sinks (I'm not sure if this happens?)
- Flush batches in sinks (does not currently happen; a rough sketch of the idea follows below)
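
To illustrate that last point, here is a minimal, hypothetical sketch of what "flush batches on shutdown" could look like in a batching sink. This is not Vector's actual sink API; the `run_batching_sink` and `flush` functions and the `mpsc` channel standing in for the sink's input are assumptions for illustration only. The key behavior is in the `None` arm: when the input stream closes (i.e. sources and transforms have already shut down), the partial batch is flushed instead of being dropped.

```rust
use std::time::Duration;
use tokio::{sync::mpsc, time};

// Hypothetical batching sink loop (not Vector's real sink trait). It flushes
// when the batch is full, when the batch timeout fires, or when its input
// channel closes during shutdown -- the behavior this proposal asks for.
async fn run_batching_sink(mut input: mpsc::Receiver<String>) {
    const MAX_BATCH: usize = 1_000; // stand-in for batch.max_bytes
    const TIMEOUT: Duration = Duration::from_secs(1_800); // batch.timeout_secs

    let mut batch: Vec<String> = Vec::new();
    let deadline = time::sleep(TIMEOUT);
    tokio::pin!(deadline);

    loop {
        tokio::select! {
            maybe_event = input.recv() => match maybe_event {
                Some(event) => {
                    batch.push(event);
                    if batch.len() >= MAX_BATCH {
                        flush(&mut batch).await;
                        deadline.as_mut().reset(time::Instant::now() + TIMEOUT);
                    }
                }
                // Input closed: sources/transforms have shut down. Flush the
                // partial batch instead of dropping it.
                None => {
                    flush(&mut batch).await;
                    break;
                }
            },
            () = &mut deadline => {
                flush(&mut batch).await;
                deadline.as_mut().reset(time::Instant::now() + TIMEOUT);
            }
        }
    }
}

async fn flush(batch: &mut Vec<String>) {
    if batch.is_empty() {
        return;
    }
    // Stand-in for the real upload (e.g. an S3 PutObject request).
    println!("flushing {} buffered events", batch.len());
    batch.clear();
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(16);
    let sink = tokio::spawn(run_batching_sink(rx));
    tx.send("example event".to_string()).await.unwrap();
    drop(tx); // simulate shutdown: closing the channel triggers the final flush
    sink.await.unwrap();
}
```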
References
Version
No response