Ignore namespaces when dealing with ClusterRole or ClusterRoleBindings #26

Closed
Bhashit opened this issue Jun 21, 2020 · 6 comments

Comments

@Bhashit

Bhashit commented Jun 21, 2020

While installing Istio with CNI, I had a manifest generated using istioctl. The manifest incorrectly creates a ClusterRoleBinding that specifies a namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istio-cni-repair-rolebinding
  namespace: kube-system
  labels:
    k8s-app: istio-cni-repair
subjects:
- kind: ServiceAccount
  name: istio-cni
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: istio-cni-repair-role

The binding's roleRef refers to the following ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-cni-repair-role
  labels:
    app: istio-cni
    release: istio
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch", "delete", "patch", "update" ]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["get", "list", "watch", "delete", "patch", "update", "create" ]

Because the ClusterRoleBinding specifies a namespace, it looks like this provider tries to look up the ClusterRoleBinding as a namespaced resource. It fails with the following error:

module.cluster-services.kustomization_resource.current["rbac.authorization.k8s.io_v1_ClusterRoleBinding|kube-system|istio-cni-repair-rolebinding"]: Creating...
 Error: ResourceCreate: creating 'rbac.authorization.k8s.io/v1, Resource=clusterrolebindings' failed: the server could not find the requested resource

This may be an easy fix, and I'll attempt to send a PR once I've worked around the issue in my CI pipeline.
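
As a sketch of one possible workaround until the manifests are fixed upstream (not verified against this provider, and the file name istio-cni.yaml is just a placeholder for the istioctl-generated output): a kustomization.yaml can apply a JSON 6902 patch that removes the invalid metadata.namespace from the binding before the provider ever sees it, assuming a kustomize version that supports inline patches:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- istio-cni.yaml                  # placeholder for the istioctl-generated manifest
patchesJson6902:
- target:                         # select the offending ClusterRoleBinding
    group: rbac.authorization.k8s.io
    version: v1
    kind: ClusterRoleBinding
    name: istio-cni-repair-rolebinding
  patch: |-
    - op: remove
      path: /metadata/namespace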

@pst
Member

pst commented Jun 22, 2020

Thanks for reporting. I've seen this error a couple of times. The solution is to not send a namespaced ClusterRoleBinding, which is, as you note, invalid. The error seems to come from the kustomizationResourceCreate method.

Kubectl, from what I understand, silently removes the namespace for non-namespaced resources. But I don't think that's the correct approach here. The provider should apply exactly what is specified in the kustomization, not some magically altered version of it.

This error most likely only surfaces during apply, not plan. So #24 may help.

@riccardomc

The same thing happens for PodSecurityPolicy resources in manifests generated by the prometheus-operator Helm chart.

However, I believe the problem is in the chart: since these resources are cluster-scoped, they shouldn't have a namespace field to begin with. I will file an issue against the prometheus-operator Helm chart.

@Bhashit, I think you should do the same for Istio.

@riccardomc

FYI: helm/charts#22946

@Bhashit
Author

Bhashit commented Jun 29, 2020

Kubectl, from what I understand, silently removes the namespace for non-namespaced resources. But I don't think that's the correct approach here. The provider should apply exactly what is specified in the kustomization, not some magically altered version of it.

@pst Maybe one last argument: shouldn't tools that more or less replace kubectl also work like kubectl?

@pst
Member

pst commented Jun 30, 2020

I do understand your argument, @Bhashit, but writing a Terraform provider I am between two worlds, and this kubectl behavior does not seem acceptable in a Terraform provider to me.

If the Prometheus helm chart mentioned by @riccardomc and istioctl generate invalid Kubernetes manifests, then fixing that seems the better approach to me.
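
For reference, a fixed manifest would simply be the binding from the issue description without its metadata.namespace; the namespace on the ServiceAccount subject stays, since ServiceAccounts are namespaced:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: istio-cni-repair-rolebinding
  labels:
    k8s-app: istio-cni-repair
subjects:
- kind: ServiceAccount
  name: istio-cni
  namespace: kube-system          # subjects keep their namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: istio-cni-repair-role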

@Bhashit
Author

Bhashit commented Jun 30, 2020

Got it, I understand. I'll close the issue.

Bhashit closed this as completed Jun 30, 2020