Using the google-batch provider, I notice that some batch errors are not propagating to dsub, and it continues waiting to run jobs when it should be aborting.
This appears to be working as intended. The idea is that the quota issue is resolvable (either by resources becoming available or by the user allocating more quota), and then the job continues. For example, imagine submitting 100 jobs when we only have quota for 50. Once the first 50 finish, we'd want the next 50 to run.
Perhaps better documentation on this should be added.
This risks starvation. What is a graceful way to trigger fast failure / a timeout, please? For example, we submit jobs on large GPU machines, which can go without availability for days.
Ideally, you could make use of dsub's --timeout flag. It's implemented for the google-cls-v2 provider, but unfortunately not yet for the google-batch provider. The good news is that the Batch API has support for a timeout, so it should be a simple passthrough for dsub.
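For reference, the Batch API exposes this as `max_run_duration` on a task's `TaskSpec`, so a dsub `--timeout` value could in principle be forwarded to that field. A minimal sketch with the `google-cloud-batch` Python client (the helper name and surrounding job setup are illustrative assumptions, not dsub's actual submission code):

```python
# Illustrative only: how a --timeout value could map onto the Batch API's
# per-task timeout. Requires the google-cloud-batch package.
from google.cloud import batch_v1
from google.protobuf import duration_pb2


def build_task_spec_with_timeout(image_uri, commands, timeout_seconds):
    """Build a TaskSpec whose tasks are terminated and marked failed after timeout_seconds."""
    runnable = batch_v1.Runnable(
        container=batch_v1.Runnable.Container(
            image_uri=image_uri,
            commands=commands,
        )
    )
    return batch_v1.TaskSpec(
        runnables=[runnable],
        # max_run_duration is the Batch API's per-task timeout; this is the
        # field a dsub --timeout could be passed through to.
        max_run_duration=duration_pb2.Duration(seconds=timeout_seconds),
        max_retry_count=0,  # fail outright rather than retrying after a timeout
    )
```

With something like that in place, the poller would see the task fail instead of waiting indefinitely.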
The process that launched it has retries=0, yet it still shows no failure and is patiently waiting.
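Until `--timeout` is wired up for the google-batch provider, one possible client-side workaround is to poll the Batch job yourself and delete it if it never leaves the queued/scheduled states. A rough sketch, assuming the `google-cloud-batch` client and a fully qualified job name; the helper name, polling interval, and deadline are made-up values for illustration:

```python
# Hypothetical workaround, not dsub code: watch a Batch job and cancel it
# if it has not started running before a deadline. The job name looks like
# "projects/PROJECT/locations/REGION/jobs/JOB_ID".
import time

from google.cloud import batch_v1


def cancel_if_not_started(job_name, deadline_seconds, poll_interval=60):
    """Delete a queued/scheduled Batch job that fails to start within the deadline."""
    client = batch_v1.BatchServiceClient()
    waiting_states = {
        batch_v1.JobStatus.State.QUEUED,
        batch_v1.JobStatus.State.SCHEDULED,
    }
    start = time.monotonic()
    while time.monotonic() - start < deadline_seconds:
        job = client.get_job(name=job_name)
        if job.status.state not in waiting_states:
            return  # the job started (or already finished); nothing to do
        time.sleep(poll_interval)
    # Still waiting for capacity after the deadline: give up and delete the job.
    client.delete_job(name=job_name)
```

Deleting the job is how the Batch API cancels it; whether dsub then reports that as a failure or a cancellation is worth verifying.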