Improve network error handling for batch jobs (HTTP retries) #216
Comments
If that can help, I just patched the `next_page` method:

```ruby
def next_page(pit_id:, search_after: nil, slice_id: nil)
  options = search_options(pit_id: pit_id, search_after: search_after, slice_id: slice_id)
  logger.trace("search options", options)

  max_retries = 5   # maximum number of retry attempts
  initial_delay = 1 # initial delay in seconds
  retries = 0
  begin
    @client.search(options)
  rescue => err
    if retries < max_retries
      delay = initial_delay * (2 ** retries) # exponential backoff: 1s, 2s, 4s, 8s, 16s
      retries += 1
      logger.warn("Retrying search request, attempt #{retries} of #{max_retries}, waiting #{delay}s. Details: #{err.message}")
      sleep(delay)
      retry
    else
      logger.error("Max retries reached. Failing with error: #{err.message}")
      raise
    end
  end
end
```

That works as expected, but I didn't take the time to implement generic logic for both the "search_after" and "scroll" methods. Also, I have not added any tests for now.
Thanks for raising this. I think this would be a useful addition. Would you like to open a pull request? Additionally, we are testing out sincedb-like functionality in the plugin; would you be interested in participating in the tech preview?
Yes, I will open a pull request. I was just wondering if implementation details (i.e. the params exposed to the end user) should be discussed here or in the PR? About sincedb, sorry, but I don't think I'll have time to spend on it in the coming weeks.
I've looked at this a bit and I agree that, especially for search_after, we should support retries at the individual search level. It may be necessary to keep the retries at the PIT level as well though, as errors can cause the PIT to not be available any more, so the plugin must be able to get out of the search-level loop and go back to the PIT level. Your code suggestion gets us nearly there, as it tackles the retry in the right place (`next_page`); see the sketch of the two levels below.
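For illustration only, here is a minimal sketch of that two-level control flow. All names in it (`open_point_in_time`, `search_page`, `process`, `PitExpiredError`) are hypothetical stand-ins, not the plugin's actual API:

```ruby
# Hypothetical outline of the two retry levels discussed above.
pit_attempts = 0
begin
  pit_id = open_point_in_time          # PIT level: create the point-in-time
  loop do
    # Search level: per-request retries with backoff happen inside this call.
    page = search_page(pit_id)
    break if page.empty?
    process(page)
  end
rescue PitExpiredError
  # The PIT is gone, so search-level retries cannot help any more;
  # fall back to the PIT level and open a fresh point-in-time.
  pit_attempts += 1
  retry if pit_attempts < 3
  raise
end
```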
Context
In our project, we have to reprocess a large amount of Elastic data in a single batch. The `elasticsearch` input plugin might run for multiple hours to handle billions of events. But we are facing a problem: when a network issue occurs, the entire job is restarted (not only the last HTTP request). I guess this issue rarely occurs when Logstash is deployed in the Elasticsearch subnet, but we have a hybrid cloud configuration, and that makes the plugin impossible to use in production as of now.
The plugin configuration:
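(The original configuration was not preserved in this copy of the thread; the snippet below is an illustrative sketch only, with placeholder host, index, and query values.)

```
input {
  elasticsearch {
    hosts   => ["https://elasticsearch.example.internal:9200"]  # placeholder host
    index   => "events-*"                                       # placeholder index
    query   => '{ "query": { "match_all": {} } }'
    size    => 1000
    scroll  => "5m"
    docinfo => true
  }
}
```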
Log sample of the restarting job when a network error occurred:
Feature proposal
First, for future documentation readers, it would be nice to improve the retries section of the documentation to explain that it applies at the "job" level and not at the "HTTP request" level.
Then, adding a retry mechanism at the HTTP request level with an exponential backoff (or similar) would be a good option.
I had a quick look at the code base; I think we could add a wrapper around the `next_page()` function to handle network errors and implement the retries properly, as sketched below.
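As a starting point, the wrapper could look something like this. This is a minimal sketch under assumptions: `with_network_retries` is a hypothetical helper, not existing plugin code, and the parameter names and defaults are up for discussion:

```ruby
# Hypothetical generic wrapper: retries any page-fetching call with
# exponential backoff, so the search_after and scroll code paths can share it.
def with_network_retries(max_retries: 5, initial_delay: 1)
  retries = 0
  begin
    yield
  rescue => err
    raise if retries >= max_retries
    delay = initial_delay * (2 ** retries) # 1s, 2s, 4s, ...
    retries += 1
    logger.warn("Retrying request, attempt #{retries} of #{max_retries}, waiting #{delay}s. Details: #{err.message}")
    sleep(delay)
    retry
  end
end

# Usage inside next_page:
#   with_network_retries { @client.search(options) }
```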
Contribution
If it can help, we can contribute and develop this feature.