Remove LocalCluster from KubeCluster #130
Comments
This looks like a possible duplicate of #84?
If I understand correctly, #84 would be resolved by using a remote scheduler, but that alone isn't enough to make the two example use cases work. This proposal provides a way to begin using heterogeneous pod specs, regardless of whether the scheduler is local or remote.
For clarity, this change enables the cluster/client/scheduler relationships shown below. These don't assume you have a remote scheduler, but if you did, you could also have multiple clients.
Before: [diagram of the current cluster/client/scheduler relationship]
Now: [diagram of the proposed relationship]
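To make the "Before" relationship concrete, here is a minimal sketch of today's usage with the classic dask-kubernetes API, where each KubeCluster starts its own scheduler internally (via LocalCluster); the image name is a placeholder:

```python
from dask.distributed import Client
from dask_kubernetes import KubeCluster, make_pod_spec

# Today, KubeCluster always creates its own scheduler internally (via
# LocalCluster), so each cluster is tied to exactly one scheduler and
# exactly one pod spec.
pod_spec = make_pod_spec(image="daskdev/dask:latest")  # placeholder image
cluster = KubeCluster(pod_spec)
cluster.scale(4)

# The client connects to the scheduler that KubeCluster started itself.
client = Client(cluster)
```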
For broader visibility I recommend that you engage on dask/distributed#2235. Any redesign like this will probably involve all of the projects.
As this issue relates to the wider design of cluster managers, and we've been pointed to an appropriate place to discuss that, I'm going to close this out.
I propose allowing KubeCluster to take a scheduler argument (or a client, from which the scheduler can be obtained indirectly), instead of always creating its own via LocalCluster.
This would enable spawning a scheduler, possibly remotely (see #84 and the similar proposal in #84 (comment)), and attaching multiple KubeClusters that manage different pod specs.
Some use cases:
- Running multiple KubeClusters with heterogeneous pod specs against a single scheduler.
- Using a remote scheduler (see #84) instead of one created locally via LocalCluster.
This might be related to dask/distributed#2235, but within the scope of dask-kubernetes this seems like a relatively minor change. If no client/scheduler is passed when constructing a KubeCluster, it could still create its own from LocalCluster.
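As a rough illustration of the proposal, the sketch below shows what usage could look like if KubeCluster accepted a hypothetical `scheduler_address=` (or `client=`) keyword; that keyword does not exist in dask-kubernetes today, and the pod specs and scheduler address are placeholders:

```python
from dask.distributed import Client
from dask_kubernetes import KubeCluster, make_pod_spec

# Two heterogeneous pod specs (details are placeholders).
cpu_spec = make_pod_spec(image="daskdev/dask:latest", cpu_limit=1, memory_limit="4G")
gpu_spec = make_pod_spec(
    image="daskdev/dask:latest",
    extra_container_config={"resources": {"limits": {"nvidia.com/gpu": 1}}},
)

# Address of a scheduler assumed to already be running, possibly remotely (see #84).
scheduler_address = "tcp://dask-scheduler:8786"

# Proposed: pass the existing scheduler instead of letting each KubeCluster
# create its own via LocalCluster. `scheduler_address=` is hypothetical.
cpu_cluster = KubeCluster(cpu_spec, scheduler_address=scheduler_address)
gpu_cluster = KubeCluster(gpu_spec, scheduler_address=scheduler_address)

cpu_cluster.scale(10)
gpu_cluster.scale(2)

# One client (or several) can connect to the shared scheduler.
client = Client(scheduler_address)
```

If no scheduler or client were passed, KubeCluster could keep its current behavior and fall back to creating its own scheduler from LocalCluster.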