CPU of workers not taken into account #3688
Comments
@philwo, you are the worker expert, could you take a look at this?
Yes, the "cpu" tag is not supported / used for anything in Bazel at the moment. I agree that it would be a nice thing to add. |
What strategies are you using, then, to prevent overtaxing the CPU?
Buy even bigger workstations 😆 No, this is actually a problem for us as well. Internally there was a request to add a flag like --worker_max_instances=Javac=2. In other words, people wanted to tweak the limit of workers per mnemonic, not only in general. What you propose here sounds useful too, but it is actually not worker specific: there is just no way to specify the resource usage of Skylark actions. They always use the default set of resources.
Adding @laurentlb for the Skylark side.
We currently use … Even worse for us is that we have offline image compression, where single actions can take up to 20 minutes. If it were up to me, I would deprecate …
Being able to specify a resource set from Skylark would be really good, @laurentlb. I also currently have tests that need to reserve a Vulkan context, so I would like to be able to do something like:
…
and:
…
or maybe better:
…
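Purely as an illustration of the kind of Starlark surface being asked for, a hypothetical sketch could look like the following. The `resources` and `exclusive_resources` parameters, the `vulkan_test` rule, and the tool label are all invented for this example; they are not an existing Bazel API.

```python
# Hypothetical sketch only: "resources" and "exclusive_resources" are invented
# parameter names illustrating the requested feature, not real ctx.actions.run
# arguments. The rule, attributes, and labels are likewise made up.
def _vulkan_test_impl(ctx):
    log = ctx.actions.declare_file(ctx.label.name + ".log")
    ctx.actions.run(
        outputs = [log],
        inputs = ctx.files.srcs,
        executable = ctx.executable._runner,
        arguments = [f.path for f in ctx.files.srcs] + [log.path],
        # Declare what the action actually uses instead of the default estimate.
        resources = {"cpu": 3, "memory_mb": 2048},  # hypothetical
        # Serialize every action that needs the single Vulkan context.
        exclusive_resources = ["vulkan_context"],  # hypothetical
    )
    return [DefaultInfo(files = depset([log]))]

vulkan_test = rule(
    implementation = _vulkan_test_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "_runner": attr.label(
            default = "//tools:vulkan_test_runner",  # hypothetical label
            executable = True,
            cfg = "exec",
        ),
    },
)
```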
Dumb question: is there any update on supporting …? Also, I assume this issue is related: #6477
@EricCousineau-TRI Last night I did some tests of our build with this patch that adds …; the build times weren't better, so I don't know if I'll continue with it.
I think we can say that this is a dup of #6477, which tracks the more general request of exposing resource sets to Starlark.
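For what it's worth, newer Bazel releases did eventually expose this through a `resource_set` callback on `ctx.actions.run`. The sketch below is only an approximation: the `image_compress` rule and its labels are made up, and the exact callback signature and availability should be checked against the docs for the Bazel version in use.

```python
# Rough sketch of the resource_set callback accepted by ctx.actions.run in newer
# Bazel releases (exact availability and signature depend on the Bazel version).
def _compression_resources(os_name, inputs_size):
    # Estimate used by the local scheduler: 3 CPUs and ~2 GB of RAM per action.
    return {"cpu": 3, "memory": 2048}

def _image_compress_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".compressed")
    ctx.actions.run(
        outputs = [out],
        inputs = ctx.files.srcs,
        executable = ctx.executable._compressor,
        arguments = [f.path for f in ctx.files.srcs] + [out.path],
        resource_set = _compression_resources,
    )
    return [DefaultInfo(files = depset([out]))]

image_compress = rule(
    implementation = _image_compress_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "_compressor": attr.label(
            default = "//tools:image_compressor",  # hypothetical label
            executable = True,
            cfg = "exec",
        ),
    },
)
```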
Description of the problem / feature request / question:
I have 2 types of workers, which I specify as:
…
and
…
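As a purely hypothetical illustration of the kind of setup described here (the rule name, attributes, labels, and values are made up, and the worker flagfile plumbing is omitted), a worker-backed action carrying a "cpu" hint could look like this:

```python
# Hypothetical reconstruction only: the rule name, attributes, labels, and values
# are made up, and the flagfile plumbing that persistent workers need is omitted.
# Only the general shape (a worker action carrying a "cpu" hint) follows the report;
# at the time of this issue the "cpu" value was ignored by the scheduler.
def _foo_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".out")
    ctx.actions.run(
        outputs = [out],
        inputs = ctx.files.srcs,
        executable = ctx.executable._tool,
        arguments = [f.path for f in ctx.files.srcs] + [out.path],
        execution_requirements = {
            "supports-workers": "1",  # run this tool as a persistent worker
            "cpu": "3",               # hint: each Foo worker runs 3 threads
        },
    )
    return [DefaultInfo(files = depset([out]))]

foo_compile = rule(
    implementation = _foo_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "_tool": attr.label(
            default = "//tools:foo_worker",  # hypothetical label
            executable = True,
            cfg = "exec",
        ),
    },
)

# The second worker type ("Bar") would be declared the same way with "cpu": "1".
```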
My machine has 4 cores, so I would expect Bazel to keep the load average very near 4 (possibly a bit lower).
When I trigger more than 4 targets of Foo plus more than 4 targets of Bar, what I actually see is a load average of ~12 (consistent with 4 Foo workers each running 3 threads). Only 4 workers are running at a time, though. This seems to indicate that the cpu value is not properly taken into account, or that the calculation is wrong.
If possible, provide a minimal example to reproduce the problem:
Environment info
Bazel version (`bazel info release`): 0.5.4