Add support for host connection with client certificate and key #96
You can already do that by templating your own kubeconfig and passing it into the provider. Example: https://github.com/kbst/terraform-kubestack/blob/v0.12.2-beta.0/common/cluster_services/main.tf
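A minimal sketch of the templating approach, assuming the provider accepts a raw kubeconfig string (the attribute name `kubeconfig_raw`, the template filename, and the variable names are illustrative, not taken from the linked repo):

```hcl
# Render a kubeconfig from cluster outputs and hand it to the provider.
# All names below are illustrative.
provider "kustomization" {
  kubeconfig_raw = templatefile("${path.module}/kubeconfig.tpl", {
    host                   = var.cluster_endpoint
    cluster_ca_certificate = var.cluster_ca_certificate
    client_certificate     = var.client_certificate
    client_key             = var.client_key
  })
}
```

The `kubeconfig.tpl` file would be a standard kubeconfig YAML with `${...}` interpolation markers for the four values.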
Gotcha, I understand the reasoning. I'll do it via templating then.
`yamlencode`, as used here, is also an option.
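A sketch of the `yamlencode` approach: build the kubeconfig as an HCL object and serialize it, avoiding a separate template file (variable names are illustrative):

```hcl
# Assemble a kubeconfig in HCL and serialize it with yamlencode.
locals {
  kubeconfig = yamlencode({
    apiVersion = "v1"
    kind       = "Config"
    clusters = [{
      name = "cluster"
      cluster = {
        server                       = var.cluster_endpoint
        "certificate-authority-data" = var.cluster_ca_certificate
      }
    }]
    users = [{
      name = "user"
      user = {
        "client-certificate-data" = var.client_certificate
        "client-key-data"         = var.client_key
      }
    }]
    contexts = [{
      name = "context"
      context = {
        cluster = "cluster"
        user    = "user"
      }
    }]
    "current-context" = "context"
  })
}
```

The resulting `local.kubeconfig` string can then be passed wherever the provider expects kubeconfig content.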
@pst That is a working solution, but it's a lot of config. If we require this provider in a shared Terraform module, expecting our users to add all of that Terraform config every time they use the module isn't great. The 3 simple attributes required by the other main Terraform providers (kubernetes, helm, kubectl) are much easier.
According to the docs, a kubeconfig is currently the only supported way to connect to a cluster.
Adding client certificate/key support would be really useful for many, I think.
Helm does it this way:
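For reference, the Helm provider's connection block looks roughly like this (attribute names are from the hashicorp/helm provider; the variable names and the `base64decode` calls assume the values come base64-encoded from a cluster resource):

```hcl
# Helm provider connecting with host + client certificate/key,
# no kubeconfig file needed. Variable names are illustrative.
provider "helm" {
  kubernetes {
    host                   = var.cluster_endpoint
    cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
    client_certificate     = base64decode(var.client_certificate)
    client_key             = base64decode(var.client_key)
  }
}
```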
And the official Kubernetes provider does it similarly; you can even use basic auth if that's your thing.
Just having this flexibility would make this provider work for workflows where you create a cluster via Terraform on your cloud platform, and then use the outputs to feed the providers that manage Kubernetes itself (kubernetes/helm/kustomize...).
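As a sketch of that workflow with the official Kubernetes provider fed from a GKE cluster resource (resource and attribute names follow the google provider's `google_container_cluster`; the cluster name and location are placeholders):

```hcl
# Create the cluster on the cloud platform...
resource "google_container_cluster" "main" {
  name     = "example"
  location = "europe-west1-b"
}

# ...then feed its outputs straight into the provider that manages
# Kubernetes itself, with no intermediate kubeconfig.
provider "kubernetes" {
  host                   = google_container_cluster.main.endpoint
  cluster_ca_certificate = base64decode(google_container_cluster.main.master_auth[0].cluster_ca_certificate)
  client_certificate     = base64decode(google_container_cluster.main.master_auth[0].client_certificate)
  client_key             = base64decode(google_container_cluster.main.master_auth[0].client_key)
}
```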