Adapt clusterctl move to the new multi-tenancy model #3042
/area clusterctl
@randomvariable is working on this in the providers for v1alpha3, although we should revisit in v1alpha4 and forward. It's definitely something we might want to tackle before getting to beta.
/milestone Next
Thanks for this issue, Fabrizio.
/priority important/long-term
@randomvariable: The label(s) `priority/important/long-term` cannot be applied, because the repository doesn't have them. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/priority important-longterm
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
We need to add a definition of multi-tenancy to the glossary.
And a provider contract. We should definitely put this in 0.4.0.
/assign
/milestone v0.4.0
Renamed this; hopefully it will be a bit clearer going forward :)
Given that we are moving to a single manager watching all namespaces for each provider, I started to investigate possible cleanups/action items:
... (March 11th edit)
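For reference, a minimal sketch of what "a single manager watching all namespaces" looks like with controller-runtime of that era (newer releases configure this via the cache options instead of a top-level `Namespace` field); the program below is illustrative only and is not taken from any specific provider:

```go
package main

import (
	"os"

	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// An empty Namespace makes the manager cache and watch objects in every
	// namespace, so a single provider instance can serve all tenants.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Namespace: "", // watch all namespaces
	})
	if err != nil {
		os.Exit(1)
	}

	// Controllers and webhooks would be registered with mgr here.

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		os.Exit(1)
	}
}
```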
That's the point I'm trying to understand. For instance, the last time I checked the AWS proposal, it was introducing different types of cluster principals.
Linking #4035 because it touches on changing credentials during pivoting.
For init, other than installing the CRDs, no. However, in the most common case we only expect the management cluster to be moved, and in that case it will use a singleton.
For capz, ...
/area release-blocking
@CecileRobertMichon: The label(s) `area/release-blocking` cannot be applied, because the repository doesn't have them. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/kind release-blocking
For now, on the CAPV side, we have two types of credentials:
In case it's relevant in any way: CAPO uses a secret, usually located in the same namespace as the Cluster resource, but referenced via a SecretRef, which would also allow using a secret in another namespace.
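A minimal sketch of how a controller can resolve that kind of reference; the `secretRef` type and field names below are illustrative stand-ins, not CAPO's actual API:

```go
package credentials

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// secretRef is an illustrative reference carrying both name and namespace,
// so the referenced secret does not have to live next to the Cluster object.
type secretRef struct {
	Name      string
	Namespace string
}

// resolveCredentials fetches the secret a cluster points at; if the reference
// does not set a namespace, it falls back to the cluster's own namespace.
func resolveCredentials(ctx context.Context, c client.Client, clusterNamespace string, ref secretRef) (*corev1.Secret, error) {
	ns := ref.Namespace
	if ns == "" {
		ns = clusterNamespace
	}

	secret := &corev1.Secret{}
	key := types.NamespacedName{Namespace: ns, Name: ref.Name}
	if err := c.Get(ctx, key, secret); err != nil {
		return nil, err
	}
	return secret, nil
}
```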
Trying to distill a generalised approach and the possible impacts on clusterctl, as well as on the CAPI provider operator:
For the init workflow:
For the move workflow:
Are 1 or 2 enough to address the requirements of all the providers/identity-management systems?
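The numbered options referenced above are not reproduced here. Purely as an illustration of what the move workflow has to deal with when provider credentials live in user-managed secrets, here is a hedged sketch of copying such a secret from the source to the target management cluster; the function and its scope are hypothetical, not part of clusterctl:

```go
package move

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// moveCredentialSecret copies a credential secret from the source management
// cluster to the target one, preserving name and namespace. It only shows the
// kind of object a move-like workflow would have to carry along.
func moveCredentialSecret(ctx context.Context, from, to client.Client, key types.NamespacedName) error {
	src := &corev1.Secret{}
	if err := from.Get(ctx, key, src); err != nil {
		return err
	}

	dst := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{
			Name:      src.Name,
			Namespace: src.Namespace,
			Labels:    src.Labels,
		},
		Type: src.Type,
		Data: src.Data,
	}
	return to.Create(ctx, dst)
}
```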
Thanks to the work in #4514, we can finally nail down required changes in clusterctl move.
@randomvariable @nader-ziada @yastij @gab-satchi @sedefsavas @vincepri |
/unassign @randomvariable
/assign
@fabriziopandini Will it be straightforward to backport this to the v0.3.x releases? Asking because CAPA supports multi-tenancy in its v1alpha3 releases.
I don't think we are going to backport this; the changes are somewhat invasive, and the guidelines for the providers (#4514) are not yet merged.
We should wait for #4628 as well.
User Story
As a user, I would like to use clusterctl for creating multi-tenant clusters.
Detailed Description
kubernetes-sigs/cluster-api-provider-aws#1713 introduces the possibility for a single instance of a provider to use many credentials.
We should define if/how this scenario is supported by clusterctl.
Anything else you would like to add:
clusterctl already supports two other types of multi-tenancy, see https://cluster-api.sigs.k8s.io/clusterctl/commands/init.html#multi-tenancy
The approach introduced by CAPA is potentially far simpler than the existing ones. If we can get all the providers to converge on the same approach, this could significantly simplify both manifest generation (e.g. no more need for the webhook namespace) and clusterctl itself (lots of corner cases would no longer be necessary).
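To make the CAPA-style approach more concrete, here is a hedged sketch of what a cluster-scoped identity type could look like. The type and field names are illustrative only, loosely inspired by kubernetes-sigs/cluster-api-provider-aws#1713, and are not the actual CAPA API:

```go
package identity

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// IdentityRef is an illustrative reference from an infrastructure cluster to
// the credentials it should use. Because the identity is a separate object,
// a single provider instance can serve many tenants, with each cluster
// pointing at a different identity.
type IdentityRef struct {
	// Kind of the identity object, e.g. a static (secret-backed) identity.
	Kind string `json:"kind"`
	Name string `json:"name"`
}

// ClusterStaticIdentity is an illustrative cluster-scoped identity backed by
// a secret, loosely modelled on the kinds of "cluster principals" discussed
// in this thread.
type ClusterStaticIdentity struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// SecretRef names the secret holding the actual credentials.
	SecretRef corev1.SecretReference `json:"secretRef"`

	// AllowedNamespaces restricts which namespaces may reference this
	// identity, which is the main knob clusterctl has to be aware of.
	AllowedNamespaces []string `json:"allowedNamespaces,omitempty"`
}
```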
/kind feature