csi: revisit rbac rules, add Snapshotter sidecar and roles #104

Merged
merged 1 commit into from
Nov 19, 2018
8 changes: 7 additions & 1 deletion CHANGELOG.md
@@ -1,7 +1,13 @@
## unreleased

* Add CSI Snapshots functionality
[[GH-103]](https://github.com/digitalocean/csi-digitalocean/pull/103)
* Add csi-snapshotter sidecars and associated RBAC rules
[[GH-104]](https://github.com/digitalocean/csi-digitalocean/pull/104)
* Revisit existing RBAC rules for the attacher, provisioner and
driver-registrar. We no longer use the system cluster-role bindings as those
will be deleted in v1.13
[[GH-104]](https://github.com/digitalocean/csi-digitalocean/pull/104)
* Fix inconsistent usage of the driver name
[[GH-100]](https://github.com/digitalocean/csi-digitalocean/pull/100)
* Use publish_info in ControllerPublishVolume for storing and accessing the
2 changes: 1 addition & 1 deletion Makefile
@@ -47,7 +47,7 @@ test:
test-integration:

@echo "==> Started integration tests"
@env GOCACHE=off go test -v -tags integration ./test/...

why change this?

@fatih (Contributor, Author), Nov 19, 2018:
Because go modules won't work if you turn off GOCACHE

@env go test -v -tags integration ./test/...


.PHONY: build
163 changes: 139 additions & 24 deletions deploy/kubernetes/releases/csi-digitalocean-dev.yaml
@@ -118,6 +118,20 @@ provisioner: dobs.csi.digitalocean.com

---

# NOTE(arslan): this will probably fail, because the CRD is created via the
# csi-snapshotter sidecar, which is part of the csi-do-controller statefulset.
# We need to create this separately.

Add to this comment some documentation about how other devs need to proceed to get this working (e.g. "apply this section separately after XYZ").

Reply from the author (@fatih):

This comment will be removed in upcoming PRs; it's just here to remind myself.

kind: VolumeSnapshotClass
apiVersion: snapshot.storage.k8s.io/v1alpha1
metadata:
name: do-block-storage
namespace: kube-system
annotations:
snapshot.storage.kubernetes.io/is-default-class: "true"
snapshotter: dobs.csi.digitalocean.com

---

##############################################
########### ############
########### Controller plugin ############
@@ -165,6 +179,18 @@ spec:
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: csi-snapshotter
image: quay.io/k8scsi/csi-snapshotter:v0.4.1
args:
- "--connection-timeout=15s"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /var/lib/csi/sockets/pluginproxy/csi.sock
imagePullPolicy: Always
volumeMounts:
- name: socket-dir
mountPath: /var/lib/csi/sockets/pluginproxy/
- name: csi-do-plugin
image: digitalocean/do-csi-plugin:dev
args:
@@ -190,46 +216,140 @@ spec:
emptyDir: {}
---

kind: ServiceAccount
apiVersion: v1
metadata:
name: csi-do-controller-sa
namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-do-provisioner-role
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["get", "list"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-do-provisioner-binding
subjects:
- kind: ServiceAccount
name: csi-do-controller-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: csi-do-provisioner-role
apiGroup: rbac.authorization.k8s.io

---
# Attacher must be able to work with PVs, nodes and VolumeAttachments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-do-attacher-role
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["csi.storage.k8s.io"]
resources: ["csinodeinfos"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-do-attacher-binding
subjects:
- kind: ServiceAccount
name: csi-do-controller-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: csi-do-attacher-role
apiGroup: rbac.authorization.k8s.io

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-do-snapshotter-role
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]

why do you need these permissions to create snapshots?

Reply from the author (@fatih):

The csi-snapshotter sidecar needs them to create the VolumeSnapshot and VolumeSnapshotClass custom resource definitions.

These are pulled from the appropriate repo; each sidecar now contains the RBAC rules it needs to operate. As an example for the above: https://github.com/kubernetes-csi/external-snapshotter/blob/master/deploy/kubernetes/rbac.yaml


Hm, I see that the binary tries to create its own definitions. It seems wrong to me, but until that's patched, I guess we have no alternative:

https://github.com/kubernetes-csi/external-snapshotter/blob/a2f8b41c08d7d795ba08ca8a87b942fb9b5dac44/cmd/csi-snapshotter/main.go#L107-L111

verbs: ["create", "list", "watch", "delete"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-do-snapshotter-binding
subjects:
- kind: ServiceAccount
name: csi-do-controller-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: csi-do-snapshotter-role
apiGroup: rbac.authorization.k8s.io
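The RBAC discussion above hinges on whether a given ClusterRole grants a verb on a group/resource pair. The matching logic can be sketched as follows (an illustrative simplification, not the actual Kubernetes authorizer; the `allows` helper and inlined rule data are assumptions for this example, mirroring the `csi-do-snapshotter-role` above):

```python
# Simplified sketch of RBAC rule matching; not the real Kubernetes authorizer.
# The rules below mirror a subset of csi-do-snapshotter-role from this PR.
snapshotter_rules = [
    {"apiGroups": [""], "resources": ["persistentvolumes"],
     "verbs": ["get", "list", "watch"]},
    {"apiGroups": ["snapshot.storage.k8s.io"], "resources": ["volumesnapshotcontents"],
     "verbs": ["create", "get", "list", "watch", "update", "delete"]},
    {"apiGroups": ["apiextensions.k8s.io"], "resources": ["customresourcedefinitions"],
     "verbs": ["create", "list", "watch", "delete"]},
]

def allows(rules, group, resource, verb):
    """Return True if any rule grants `verb` on `group`/`resource`."""
    return any(
        group in r["apiGroups"]
        and resource in r["resources"]
        and verb in r["verbs"]
        for r in rules
    )

# The sidecar can create the snapshot CRDs itself:
print(allows(snapshotter_rules, "apiextensions.k8s.io",
             "customresourcedefinitions", "create"))  # True
# But nothing here grants, say, deleting secrets:
print(allows(snapshotter_rules, "", "secrets", "delete"))  # False
```

This is why scoping each sidecar's role to exactly the rules it ships with (rather than reusing the broad `system:` cluster roles) limits what a compromised sidecar could do.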




########################################
########### ############
@@ -336,11 +456,22 @@ metadata:

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-do-driver-registrar-role
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "list", "watch", "create", "update", "patch"]

---

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-do-driver-registrar-binding
namespace: kube-system
subjects:
- kind: ServiceAccount
name: csi-do-node-sa
@@ -350,19 +481,3 @@ roleRef:
name: csi-do-driver-registrar-role
apiGroup: rbac.authorization.k8s.io


---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-do-driver-registrar-role
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "update"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]

59 changes: 59 additions & 0 deletions examples/kubernetes/snapshot/README.md
@@ -0,0 +1,59 @@
# Creating a snapshot from an existing volume and restoring it

Note that we assume the csi-digitalocean driver is correctly installed and
up and running.


1. Create a `pvc`:


```
$ kubectl create -f pvc.yaml
```

2. Create a `snapshot` from the previous `pvc`:


```
$ kubectl create -f snapshot.yaml
```

At this point you should have a volume and a snapshot originating from that
volume. You can observe the state of your PVCs and snapshots with the
following command:


```
$ kubectl get pvc && kubectl get pv && kubectl get volumesnapshot
```


3. Restore from a `snapshot`:

To restore from a given snapshot, you need to create a new `pvc` that refers to
the snapshot:


```
$ kubectl create -f restore.yaml
```

This will create a new `pvc` that you can use with your applications.
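The key piece of `restore.yaml` is the `dataSource` stanza, which points the new claim at the snapshot created in step 2 (this fragment matches the `restore.yaml` shipped with this PR):

```
spec:
  dataSource:
    name: csi-do-test-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
```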

4. Cleanup your resources:

Make sure to delete your test resources:

```
$ kubectl delete -f pvc.yaml
$ kubectl delete -f restore.yaml
$ kubectl delete -f snapshot.yaml
```

---

To understand how snapshotting works, please read the official blog
announcement with examples:
https://kubernetes.io/blog/2018/10/09/introducing-volume-snapshot-alpha-for-kubernetes/


11 changes: 11 additions & 0 deletions examples/kubernetes/snapshot/pvc.yaml
@@ -0,0 +1,11 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: csi-do-test-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: do-block-storage
14 changes: 14 additions & 0 deletions examples/kubernetes/snapshot/restore.yaml
@@ -0,0 +1,14 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: csi-do-test-pvc-restore
spec:
dataSource:
name: csi-do-test-snapshot
kind: VolumeSnapshot
apiGroup: snapshot.storage.k8s.io
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
8 changes: 8 additions & 0 deletions examples/kubernetes/snapshot/snapshot.yaml
@@ -0,0 +1,8 @@
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
name: csi-do-test-snapshot
spec:
source:
name: csi-do-test-pvc
kind: PersistentVolumeClaim