
Intermittent MountVolume.MountDevice errors on Pod creation #32


Closed
zarbis opened this issue May 24, 2018 · 33 comments · Fixed by #61
Labels: bug (Something isn't working)


zarbis commented May 24, 2018

I've set up a Rancher 2.0 cluster on DO to test CSI-DO. On the first attempt I followed README.md and succeeded with the example app. However, after trying to deploy my own stuff I started consistently getting Pod creation errors. To rule out the unknowns I wiped all my stuff, went back to the example app, and confirmed that I'm consistently getting this result:

Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Warning  FailedScheduling        21s (x4 over 22s)  default-scheduler        pod has unbound PersistentVolumeClaims
  Normal   Scheduled               19s                default-scheduler        Successfully assigned my-csi-app to node-1
  Normal   SuccessfulMountVolume   19s                kubelet, node-1          MountVolume.SetUp succeeded for volume "default-token-jw8hg"
  Normal   SuccessfulAttachVolume  15s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-4b1897fb-5f66-11e8-8f45-c2743f7bbff2"
  Warning  FailedMount             5s (x4 over 10s)   kubelet, node-1          MountVolume.MountDevice failed for volume "pvc-4b1897fb-5f66-11e8-8f45-c2743f7bbff2" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-4b1897fb-5f66-11e8-8f45-c2743f7bbff2' output: "mke2fs 1.43.7 (16-Oct-2017)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-4b1897fb-5f66-11e8-8f45-c2743f7bbff2 does not exist and no size was specified.\n"

However, after waiting a couple of hours the problem was gone. I wonder if it was a Block Storage degradation I just happened to witness, or something related to CSI-DO. Unfortunately I've wiped out that cluster, and after setting up a new one the example app deploys just fine. I will provide any additional info you might need if I witness this problem again.


zarbis commented May 25, 2018

I've experienced this problem again and here are my observations:

  1. It never happens on a fresh cluster with fresh nodes, but starts happening after a while. Re-creating the cluster with fresh nodes solves the problem for a while.
  2. /dev/disk/by-id/scsi-0DO_Volume_pvc-* is present on the droplet when this error occurs. I tried to run the mkfs command manually and the result was either the same as in the error message, or something along these lines: an ext4 fs is already present and the drive is currently in use by the system. Either way, the Pod ended up in a Failed state.

So there is some sort of deterioration over time somewhere in the system.
If you can provide me with debug steps to perform when I reproduce the problem again, I will share the results I can gather.
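In the meantime, here is the rough set of checks I plan to run on the affected node the next time this happens (a sketch using standard Linux tools; <PVC_ID> stands for the actual volume name):

# does the expected device symlink exist, and is it already formatted?
ls -l /dev/disk/by-id/ | grep DO_Volume
blkid /dev/disk/by-id/scsi-0DO_Volume_pvc-<PVC_ID>

# is anything from that volume already mounted?
findmnt | grep pvc-

# kernel messages around recent SCSI attach/detach events
dmesg | grep -iE 'scsi|sd[a-z]' | tail -n 50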


zarbis commented May 26, 2018

One more observation: some sort of ID mismatch happens when this issue arises.
The problem happened to me again, and the Pod has this event:
Warning FailedMount 7s (x5 over 17s) kubelet, node-1 MountVolume.MountDevice failed for volume "pvc-910ee89c-60fc-11e8-ac9b-a22ee19381a6" ...<same mkfs error here>

First I take a look at what k8s has to say about that PVC:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                            STORAGECLASS       REASON    AGE
pvc-910ee89c-60fc-11e8-ac9b-a22ee19381a6   2Gi        RWO            Delete           Bound     jenkins/jenkins-home-jenkins-0   do-block-storage             3m

Looks good. Then I confirm that the block storage is attached to node-1:
(screenshot: DO dashboard showing the Volume attached to node-1)

Then I ssh into the node and look around:

root@node-1:~# ls /dev/disk/by-id/scsi-0DO_Volume_pvc-910ee89c-60fc-11e8-ac9b-a22ee19381a6
ls: cannot access '/dev/disk/by-id/scsi-0DO_Volume_pvc-910ee89c-60fc-11e8-ac9b-a22ee19381a6': No such file or directory

Block device is absent indeed, or is it?

root@node-1:~# ls /dev/disk/by-id/
scsi-0DO_Volume_pvc-1996973e-60c8-11e8-ac9b-a22ee19381a6

So what's happening here?
The PV ID referred to everywhere:
pvc-910ee89c-60fc-11e8-ac9b-a22ee19381a6
The block device actually present on the node:
pvc-1996973e-60c8-11e8-ac9b-a22ee19381a6

At some moment the PV ID and the block device ID get mismatched. I'm not sure what exact condition causes it, but it is a point of no return: I was not able to create any more PVs even after removing the faulty PVs/PVCs; only a full cluster teardown and reassembly helps.

Will provide more details as I encounter it more in my testing.

Update: after removing the PV, the mislabeled block device stays on the node until reboot, even though the DO dashboard shows no Volumes. Rebooting the nodes "fixes" them, so cluster reassembly is no longer needed.
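A quick way to spot the mismatch (assuming the scsi-0DO_Volume_ naming shown above) is to compare what Kubernetes believes with what the node actually has:

# PV names Kubernetes knows about
kubectl get pv -o jsonpath='{.items[*].metadata.name}'; echo

# volume names actually present on the node
ls /dev/disk/by-id/ | sed -n 's/^scsi-0DO_Volume_//p'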


zarbis commented May 26, 2018

Small update to my observations:

  1. At some point a block device with the wrong ID appears on the node in /dev/disk/by-id/.
  2. Pod creation fails with the mkfs "The file <...> does not exist" error.
  3. After this point the node becomes "corrupted":
  • if you remove the PV, the DO Volume gets detached but the faulty block device stays on the node (see the sketch after this list for a possible reboot-free cleanup)
  • if you create a new PV that lands on this node, a new DO Volume gets attached but it doesn't show up in /dev/disk/by-id/ until reboot
  • after reboot the new block device shows up, but Pod creation fails with:
MountVolume.MountDevice failed for volume "<PVC_ID>" : 
rpc error: code = Internal desc = mounting failed: exit status 255 cmd: 
'mount -t ext4 /dev/disk/by-id/scsi-0DO_Volume_<PVC_ID> /var/lib/kubelet/plugins/kubernetes.io/csi/pv/<PVC_ID>/globalmount' 
output: "mount: mounting /dev/disk/by-id/scsi-0DO_Volume_<PVC_ID> on /var/lib/kubelet/plugins/kubernetes.io/csi/pv/<PVC_ID>/globalmount failed: No error information\n"
  4. After removing the PV the node is functional again.
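For the stale device, something like this might avoid the reboot (an untested sketch using standard sysfs interfaces; sdb and host2 are hypothetical and need to be resolved per node):

# find the kernel device behind the stale by-id symlink
readlink -f /dev/disk/by-id/scsi-0DO_Volume_pvc-<stale_id>

# tell the kernel to drop that SCSI device (destructive; double-check the name)
echo 1 > /sys/block/sdb/device/delete

# rescan the SCSI host so a newly attached volume shows up
echo '- - -' > /sys/class/scsi_host/host2/scan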

Pinging @fatih, I guess?


fatih commented Jul 5, 2018

@zarbis thanks for all the reports. I know how frustrating this can be, so thanks for providing all this information. It seems like the plugin might fail to attach the volumes to the droplets. There was an issue which was fixed here: #34

This was released on May 29: https://github.com/digitalocean/csi-digitalocean/blob/master/CHANGELOG.md#v011-alpha---may-29th-2018 So I definitely recommend upgrading to v0.1.1. I think it might be due to this issue.

I'm on this and will try to fix things once I can reproduce it myself as well. As always, please keep posting whatever you find.


lohazo commented Aug 13, 2018

Still got the same error in 0.1.3:

(screenshot: zsh terminal in the digitalocean-k8s echo-server project showing the error)


fatih commented Aug 13, 2018

@lohazo this is a completely different error. Can you please open a new issue with your Kubernetes version and the steps that led to it? Thanks.


fatih commented Aug 13, 2018

@lohazo also please provide this information:

kubectl logs -l app=csi-provisioner-doplugin -c digitalocean-csi-plugin -n kube-system
kubectl logs -l app=csi-attacher-doplugin -c digitalocean-csi-plugin -n kube-system
kubectl logs -l app=csi-doplugin -c csi-doplugin -n kube-system
kubectl get pods -n kube-system


lohazo commented Aug 16, 2018

I created a new issue, #54. Can you take a look, @fatih?

@davidhiendl

I can confirm the original issue. Funny enough, I can reliably reproduce it by mounting multiple DO volumes in a single pod simultaneously. Mounting other volumes like CephFS/RBD/HostPath alongside does not seem to cause the issue. I first noticed this while migrating/restoring data from another cluster.

Event Log:

Events:
  Type     Reason                  Age              From                      Message
  ----     ------                  ----             ----                      -------
  Normal   Scheduled               14s              default-scheduler         Successfully assigned transfer-pod to k8s-xi-worker-1
  Normal   SuccessfulMountVolume   14s              kubelet, k8s-xi-worker-1  MountVolume.SetUp succeeded for volume "hostfs"
  Normal   SuccessfulMountVolume   14s              kubelet, k8s-xi-worker-1  MountVolume.SetUp succeeded for volume "default-token-jcjzs"
  Normal   SuccessfulAttachVolume  13s              attachdetach-controller   AttachVolume.Attach succeeded for volume "pvc-aa090bfa-a4d7-11e8-9fb0-829d8140666b"
  Normal   SuccessfulAttachVolume  10s              attachdetach-controller   AttachVolume.Attach succeeded for volume "pvc-b7303a8d-a4d7-11e8-9fb0-829d8140666b"
  Normal   SuccessfulMountVolume   2s               kubelet, k8s-xi-worker-1  MountVolume.SetUp succeeded for volume "pvc-b7303a8d-a4d7-11e8-9fb0-829d8140666b"
  Warning  FailedMount             1s (x2 over 3s)  kubelet, k8s-xi-worker-1  MountVolume.MountDevice failed for volume "pvc-aa090bfa-a4d7-11e8-9fb0-829d8140666b" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-aa090bfa-a4d7-11e8-9fb0-829d8140666b' output: "mke2fs 1.43.7 (16-Oct-2017)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-aa090bfa-a4d7-11e8-9fb0-829d8140666b does not exist and no size was specified.\n"


thomas commented Aug 22, 2018

Experiencing the same issue: https://github.com/confluentinc/cp-helm-charts (just as an example) does not appear to work with the CSI plugin, largely because it mounts two volumes for the zookeeper instance. The plugin fails to attach the volume to the node, and manually attaching does not correct the issue.

Update: in the above-mentioned chart I used the old driver (https://github.com/kubernetes-incubator/external-storage/tree/master/digitalocean) in conjunction with the current CSI driver (one volume on CSI, the other on the old driver) in the same pod to get around this issue. The pod started, at least, and appears to be working.


fatih commented Aug 22, 2018

Thanks for the reports. I'm not sure you're all seeing the same issue. As I understand it, the issue happens when two volumes are mounted to the same pod. I'm now going to try to reproduce this and will add more information. Meanwhile, please always share the example manifests (PVCs, deployments, etc.) so I can use them on my test cluster to reproduce. I'll add a CONTRIBUTING.md file so people are aware of this.

@fatih added the bug (Something isn't working) label Aug 22, 2018

fatih commented Aug 22, 2018

I tried the following setup, where a pod refers to two volumes that are mounted at two different mount paths:

pod.yaml:

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data1"
        name: my-do-volume-1
      - mountPath: "/data2"
        name: my-do-volume-2
      command: [ "sleep", "1000000" ]
  volumes:
    - name: my-do-volume-1
      persistentVolumeClaim:
        claimName: csi-pvc-1
    - name: my-do-volume-2
      persistentVolumeClaim:
        claimName: csi-pvc-2

pvc-multiple.yaml:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc-2
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage

These are the events I see when I do a kubectl describe pods my-csi-app:

Events:
  Type    Reason                  Age   From                     Message
  ----    ------                  ----  ----                     -------
  Normal  Scheduled               20s   default-scheduler        Successfully assigned my-csi-app to worker-6273
  Normal  SuccessfulMountVolume   19s   kubelet, worker-6273     MountVolume.SetUp succeeded for volume "default-token-pmkbp"
  Normal  SuccessfulAttachVolume  17s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-b90e9365-a5ee-11e8-b271-aa750cfd6233"
  Normal  SuccessfulAttachVolume  15s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-b924a471-a5ee-11e8-b271-aa750cfd6233"
  Normal  SuccessfulMountVolume   15s   kubelet, worker-6273     MountVolume.SetUp succeeded for volume "pvc-b90e9365-a5ee-11e8-b271-aa750cfd6233"
  Normal  SuccessfulMountVolume   11s   kubelet, worker-6273     MountVolume.SetUp succeeded for volume "pvc-b924a471-a5ee-11e8-b271-aa750cfd6233"
  Normal  Pulling                 10s   kubelet, worker-6273     pulling image "busybox"
  Normal  Pulled                  9s    kubelet, worker-6273     Successfully pulled image "busybox"
  Normal  Created                 9s    kubelet, worker-6273     Created container
  Normal  Started                 9s    kubelet, worker-6273     Started container

Everything works fine in this scenario. Going to try out other kinds of setups. Please let me know if the one you're using is similar to this. I would also appreciate it if you could test the setup above, and then maybe provide a failing case built from these manifests so I can reproduce it as well.


fatih commented Aug 22, 2018

Alright, found a way to make it fail with the following setup:

pod.yaml:

kind: Pod
apiVersion: v1
metadata:
  name: my-csi-app
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
      - mountPath: "/data/foo"
        name: foo
      - mountPath: "/data/bar"
        name: bar
      command: [ "sleep", "1000000" ]
  volumes:
    - name: foo
      persistentVolumeClaim:
        claimName: csi-foo
    - name: bar
      persistentVolumeClaim:
        claimName: csi-bar

pvc.yaml:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-foo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-bar
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage

These are the events I see:

Events:
  Type     Reason                  Age               From                     Message
  ----     ------                  ----              ----                     -------
  Warning  FailedScheduling        3m (x6 over 3m)   default-scheduler        persistentvolumeclaim "csi-foo" not found
  Normal   Scheduled               3m                default-scheduler        Successfully assigned my-csi-app to worker-6273
  Normal   SuccessfulMountVolume   3m                kubelet, worker-6273     MountVolume.SetUp succeeded for volume "default-token-pmkbp"
  Normal   SuccessfulAttachVolume  3m                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-eda4d1a9-a5f0-11e8-b271-aa750cfd6233"
  Normal   SuccessfulAttachVolume  3m                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-ed8e8d32-a5f0-11e8-b271-aa750cfd6233"
  Normal   SuccessfulMountVolume   3m                kubelet, worker-6273     MountVolume.SetUp succeeded for volume "pvc-ed8e8d32-a5f0-11e8-b271-aa750cfd6233"
  Warning  FailedMount             1m                kubelet, worker-6273     Unable to mount volumes for pod "my-csi-app_default(dcade27c-a5f0-11e8-b271-aa750cfd6233)": timeout expired waiting for volumes to attach or mount for pod "default"/"my-csi-app". list of unmounted volumes=[bar]. list of unattached volumes=[foo bar default-token-pmkbp]
  Warning  FailedMount             52s (x9 over 3m)  kubelet, worker-6273     MountVolume.MountDevice failed for volume "pvc-eda4d1a9-a5f0-11e8-b271-aa750cfd6233" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-eda4d1a9-a5f0-11e8-b271-aa750cfd6233' output: "mke2fs 1.43.7 (16-Oct-2017)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-eda4d1a9-a5f0-11e8-b271-aa750cfd6233 does not exist and no size was specified.\n"

It seems like it happens when the volumes are mounted in subdirectories under the same root, such as /data/foo and /data/bar. It doesn't fail with /foo and /bar.
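If anyone wants to check what the driver actually staged on the node while reproducing this, a quick look at the mount table should show it (a sketch; findmnt is part of util-linux, and the staging paths match the globalmount paths from the errors above):

# list CSI staging (globalmount) and per-pod bind mounts on the node
findmnt -o TARGET,SOURCE,FSTYPE | grep -E 'globalmount|kubernetes.io~csi'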


davidhiendl commented Aug 22, 2018

Thanks for following up on this. My setup had the volumes mounted under a common ancestor.
I will test later whether mounting without common ancestors (except "/", of course) fixes the issue.
Here is the pod manifest that hits the issue (PVC and mount names replaced):

apiVersion: v1
kind: Pod
metadata:
  name: transfer-pod
spec:
  nodeSelector:
    kubernetes.io/hostname: k8s-xi-worker-1
  containers:
  - name: transfer-pod
    image: debian
    command: ["sleep", "360000"]
    volumeMounts:
    - name: hostfs
      mountPath: /hostfs
    - name: pv1
      mountPath: /pv/pv1
    - name: pv2
      mountPath: /pv/pv2
    - name: pv3
      mountPath: /pv/pv3
  volumes:
  - name: hostfs
    hostPath:
      path: /
  - name: pv1
    persistentVolumeClaim:
      claimName: pvc1
  - name: pv2
    persistentVolumeClaim:
      claimName: pvc2
  - name: pv3
    persistentVolumeClaim:
      claimName: pvc3

PVCs look similar to this (I have a different name for the storage class than you, because we are using a custom Helm chart instead of deploying manually as described in this repo).

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv1
  labels:
    ...
spec:
  storageClassName: digitalocean-blockstorage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi


fatih commented Aug 22, 2018

I found the issue: a race condition that can happen quite often due to how the DigitalOcean API works. This is fixed with #61. I tested it and it works fine.

I pushed a new image to test with, tagged :dev. You can update the attacher component with the following command:

kubectl set image statefulset csi-attacher-doplugin digitalocean-csi-plugin=digitalocean/do-csi-plugin:dev -n kube-system && kubectl delete pods/csi-attacher-doplugin-0  -n kube-system

The above command replaces the current image with the :dev version; deleting the pod makes sure the StatefulSet creates a new pod with the latest image. If anyone can test with this image, I would appreciate it.
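To verify the swap took effect, something like this should do (the label is the same one used by the log commands earlier in this thread):

# confirm the statefulset now points at the :dev image
kubectl get statefulset csi-attacher-doplugin -n kube-system -o jsonpath='{.spec.template.spec.containers[*].image}'; echo

# confirm the attacher pod was recreated
kubectl get pods -n kube-system -l app=csi-attacher-doplugin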


Azuka commented Aug 22, 2018

@fatih, I just tested and still see the same error.

kubectl describe pod app-db-postgresql-7b8f48f799-5fr95
...
Events:
  Type     Reason                  Age               From                                           Message
  ----     ------                  ----              ----                                           -------
  Normal   Scheduled               35s               default-scheduler                              Successfully assigned app/app-db-postgresql-7b8f48f799-5fr95 to f0193b5d-ccf1-4d47-b6c2-1b9537386b05
  Warning  FailedAttachVolume      35s               attachdetach-controller                        Multi-Attach error for volume "pvc-b154104f-a4e7-11e8-b870-16ebc57da236" Volume is already exclusively attached to one node and can't be attached to another
  Normal   SuccessfulAttachVolume  25s               attachdetach-controller                        AttachVolume.Attach succeeded for volume "pvc-b154104f-a4e7-11e8-b870-16ebc57da236"
  Warning  FailedMount             1s (x6 over 18s)  kubelet, f0193b5d-ccf1-4d47-b6c2-1b9537386b05  MountVolume.MountDevice failed for volume "pvc-b154104f-a4e7-11e8-b870-16ebc57da236" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-b154104f-a4e7-11e8-b870-16ebc57da236' output: "mke2fs 1.43.7 (16-Oct-2017)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-b154104f-a4e7-11e8-b870-16ebc57da236 does not exist and no size was specified.\n"


fatih commented Aug 22, 2018

@Azuka can you please try this example: #32 (comment)? If not, please provide your manifests in a fully deployable form so I can test it myself. This might or might not be the same error. Thanks!


Azuka commented Aug 22, 2018

@fatih I'm using the postgres helm chart, which mounts at /var/lib/postgresql/data/pgdata. See https://github.com/helm/charts/blob/master/stable/postgresql/values.yaml#L71

I'm using it as-is, which uses the default storage provider.


fatih commented Aug 22, 2018

Thanks @Azuka. I've never used Helm; is the link you provided something I can deploy via kubectl, or is it a template we need to convert first?


Azuka commented Aug 22, 2018

@fatih I just exported the templates. Please see below:

---
# Source: postgresql/templates/secrets.yaml

apiVersion: v1
kind: Secret
metadata:
  name: postgres-postgresql
  labels:
    app: postgresql
    chart: postgresql-0.15.0
    release: postgres
    heritage: Tiller
type: Opaque
data:
  
  postgres-password: "QXplNlJkV1hBcw=="
  

---
# Source: postgresql/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-postgresql
  labels:
    app: postgresql
    chart: postgresql-0.15.0
    release: postgres
    heritage: Tiller
data:
---
# Source: postgresql/templates/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-postgresql
  labels:
    app: postgresql
    chart: postgresql-0.15.0
    release: postgres
    heritage: Tiller
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "8Gi"
---
# Source: postgresql/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-postgresql
  labels:
    app: postgresql
    chart: postgresql-0.15.0
    release: postgres
    heritage: Tiller
spec:
  type: ClusterIP
  ports:
  - name: postgresql
    port: 5432
    targetPort: postgresql
  selector:
    app: postgresql
    release: postgres

---
# Source: postgresql/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres-postgresql
  labels:
    app: postgresql
    chart: postgresql-0.15.0
    release: postgres
    heritage: Tiller
spec:
  selector:
    matchLabels:
      app: postgresql
      release: postgres
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: postgresql
        release: postgres
    spec:
      containers:
      - name: postgres-postgresql
        image: "postgres:9.6.2"
        imagePullPolicy: ""
        args:
        env:
        - name: POSTGRES_USER
          value: "postgres"
          # Required for pg_isready in the health probes.
        - name: PGUSER
          value: "postgres"
        - name: POSTGRES_DB
          value: ""
        - name: POSTGRES_INITDB_ARGS
          value: ""
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-postgresql
              key: postgres-password
        - name: POD_IP
          valueFrom: { fieldRef: { fieldPath: status.podIP } }
        ports:
        - name: postgresql
          containerPort: 5432
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - exec pg_isready --host $POD_IP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 6
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - exec pg_isready --host $POD_IP
          initialDelaySeconds: 5
          timeoutSeconds: 3
          periodSeconds: 5
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data/pgdata
          subPath: postgresql-db
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: postgres-postgresql

---
# Source: postgresql/templates/networkpolicy.yaml


Azuka commented Aug 22, 2018

Detailed actions, in case it helps (I'm using a new volume now).

  • I deployed Elasticsearch to a cluster without checking resource limits. This took most nodes out of commission.
  • I added a new node to the cluster, drained the node with the problematic pods, and deleted the entire namespace, i.e.
     kubectl drain <node> --ignore-daemonsets --delete-local-data
     kubectl delete ns elasticsearch
    
  • Deleted the pods with volume claims. One postgres pod recovered, one didn't.
  • Set the image tag to dev for csi-attacher-doplugin per @fatih's instructions above.
  • Still saw mounting issues even after deleting the pod.
  • Deleted the PVC, detached the volume from the node and deleted it (via the DO console), then deleted the PV (I had to edit it and remove the finalizer; see the sketch below).
  • Recreated the deployment, including the PVC. The volume was not created in DO right away, but I saw logs like the one below:
    csi-attacher-doplugin-0 csi-attacher I0822 20:23:32.681417       1 csi_handler.go:86] Error processing "csi-15fedfa77fb2c88cb74e5e5206723dc0de0ea2dbeced98367f6bb1a7e3d9cd6a": failed to attach: persistentvolume "pvc-8d68d362-a62c-11e8-b188-16ebc57da236" not found
    
    pvc-8d68d362-a62c-11e8-b188-16ebc57da236 was the volume claim created for Elasticsearch, which is gone now. All working now, although the PVs took a while to create.
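For reference, the finalizer removal can also be done with a one-line patch instead of editing the PV by hand (PV name taken from the log above):

kubectl patch pv pvc-8d68d362-a62c-11e8-b188-16ebc57da236 -p '{"metadata":{"finalizers":null}}'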

Thanks a lot, @fatih

@fatih closed this as completed in #61 Aug 23, 2018
@fatih reopened this Aug 23, 2018

fatih commented Aug 23, 2018

We should focus on a simple use case for everyone and go from there. It seems like not everyone in this issue is hitting the same problem. The race condition is fixed now; please give us templates that are simple to follow and let's start from there, otherwise it's very hard for me to keep track of everyone's private or public manifests. I'll release a new version (v0.1.4) now, and then let's start debugging with that version, as it contains a lot of bug fixes.


sayhell commented Aug 23, 2018

I ran the case from #32 (comment) and still got the same error:

Events:
  Type     Reason                  Age                From                     Message
  ----     ------                  ----               ----                     -------
  Normal   Scheduled               19s                default-scheduler        Successfully assigned my-csi-app to do-sfo2-3
  Normal   SuccessfulMountVolume   19s                kubelet, do-sfo2-3       MountVolume.SetUp succeeded for volume "default-token-p796n"
  Normal   SuccessfulAttachVolume  14s                attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-0e34d000-a6b1-11e8-a2a7-b2a41a998ed4"
  Warning  FailedAttachVolume      13s (x4 over 17s)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-0e32f6cb-a6b1-11e8-a2a7-b2a41a998ed4" : rpc error: code = Aborted desc = volume "0e964035-a6b1-11e8-9a91-0242ac112c0a" couldn't be attached. droplet 106740863 is in process of another action
  Normal   SuccessfulAttachVolume  9s                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-0e32f6cb-a6b1-11e8-a2a7-b2a41a998ed4"
  Warning  FailedMount             1s (x5 over 11s)   kubelet, do-sfo2-3       MountVolume.MountDevice failed for volume "pvc-0e34d000-a6b1-11e8-a2a7-b2a41a998ed4" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-0e34d000-a6b1-11e8-a2a7-b2a41a998ed4' output: "mke2fs 1.43.7 (16-Oct-2017)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-0e34d000-a6b1-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\n"
  Warning  FailedMount             1s (x2 over 3s)    kubelet, do-sfo2-3       MountVolume.MountDevice failed for volume "pvc-0e32f6cb-a6b1-11e8-a2a7-b2a41a998ed4" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-0e32f6cb-a6b1-11e8-a2a7-b2a41a998ed4' output: "mke2fs 1.43.7 (16-Oct-2017)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-0e32f6cb-a6b1-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\n"


fatih commented Aug 23, 2018

@sayhell true, the issue is not the mount path at all. A fix was pushed; can you please try v0.1.4: https://github.com/digitalocean/csi-digitalocean/blob/master/deploy/kubernetes/releases/csi-digitalocean-v0.1.4.yaml
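The manifest should be deployable in one step, e.g. (assuming the usual raw.githubusercontent.com form of the file linked above):

kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v0.1.4.yaml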


fatih commented Aug 23, 2018

Can anyone here report back after updating to v0.1.4, cleaning up everything, and doing a clean deploy with the setup here: #32 (comment)?

There is a race condition that can happen if there are multiple requests against a Droplet. We fixed it in v0.1.4, and it should be good to go now. However, there might be other issues we're not yet aware of. So if the remaining problem is something else, I want to close this issue and continue with a clean GitHub issue so we know what we're dealing with.
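If anyone wants to watch the race from the API side, the droplet's action stream can be inspected directly (a sketch against the public DO v2 API; 106740863 is the droplet ID from the 422 error earlier in this thread):

# list recent actions for the droplet; overlapping in-progress attach/detach events indicate the race
curl -s -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" "https://api.digitalocean.com/v2/droplets/106740863/actions"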


sayhell commented Aug 23, 2018

I cleaned up my StorageClass and recreated it with v0.1.4. Weird result:

Events:
  Type     Reason                  Age              From                     Message
  ----     ------                  ----             ----                     -------
  Normal   Scheduled               3m               default-scheduler        Successfully assigned my-csi-app to do-sfo2-2
  Normal   SuccessfulMountVolume   3m               kubelet, do-sfo2-2       MountVolume.SetUp succeeded for volume "default-token-p796n"
  Normal   SuccessfulAttachVolume  3m               attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4"
  Normal   SuccessfulAttachVolume  3m               attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4"
  Normal   SuccessfulMountVolume   3m               kubelet, do-sfo2-2       MountVolume.SetUp succeeded for volume "pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4"
  Warning  FailedMount             1m               kubelet, do-sfo2-2       Unable to mount volumes for pod "my-csi-app_default(39003bd8-a6b4-11e8-a2a7-b2a41a998ed4)": timeout expired waiting for volumes to attach or mount for pod "default"/"my-csi-app". list of unmounted volumes=[bar]. list of unattached volumes=[foo bar default-token-p796n]
  Warning  FailedMount             1m (x9 over 3m)  kubelet, do-sfo2-2       MountVolume.MountDevice failed for volume "pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: "mke2fs 1.43.7 (16-Oct-2017)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\n"

One of the volumes was mounted successfully, and the other one failed.


fatih commented Aug 23, 2018

@sayhell can you please provide the logs of the csi-attacher, csi-provisioner, and the node plugin? Here are the commands you can use to get the logs:

kubectl logs -l app=csi-provisioner-doplugin -c digitalocean-csi-plugin -n kube-system  
kubectl logs -l app=csi-attacher-doplugin -c digitalocean-csi-plugin -n kube-system
kubectl logs -l app=csi-doplugin -c csi-doplugin -n kube-system 


sayhell commented Aug 23, 2018

attacher.log

time="2018-08-23T09:07:25Z" level=info msg="removing socket" node_id=106740862 region=sfo2 socket=/var/lib/csi/sockets/pluginproxy/csi.sock
time="2018-08-23T09:07:25Z" level=info msg="server started" addr=/var/lib/csi/sockets/pluginproxy/csi.sock node_id=106740862 region=sfo2
time="2018-08-23T09:07:26Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740862 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:07:26Z" level=info msg="probe called" method=prove node_id=106740862 region=sfo2
time="2018-08-23T09:07:26Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740862 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:07:26Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740862 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:07:26Z" level=info msg="controller unpublish volume called" droplet_id=106740863 method=controller_unpublish_volume node_id=106740863 region=sfo2 volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:07:28Z" level=info msg="waiting until volume is detached" droplet_id=106740863 method=controller_unpublish_volume node_id=106740863 region=sfo2 volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:07:29Z" level=info msg="action received" action_id=496408540 action_status=in-progress node_id=106740862 region=sfo2 volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:07:30Z" level=info msg="action received" action_id=496408540 action_status=in-progress node_id=106740862 region=sfo2 volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:07:33Z" level=info msg="action received" action_id=496408540 action_status=completed node_id=106740862 region=sfo2 volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:07:33Z" level=info msg="action completed" action_id=496408540 node_id=106740862 region=sfo2 volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:07:33Z" level=info msg="volume is detached" droplet_id=106740863 method=controller_unpublish_volume node_id=106740863 region=sfo2 volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:07:33Z" level=info msg="controller unpublish volume called" droplet_id=106740863 method=controller_unpublish_volume node_id=106740863 region=sfo2 volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:07:35Z" level=warning msg="assuming volume is detached already" droplet_id=106740863 error="POST https://api.digitalocean.com/v2/volumes/958f2de4-a6b2-11e8-875e-0242ac111912/actions: 422 Attachment not found" method=controller_unpublish_volume node_id=106740863 region=sfo2 resp="godo.Rate{Limit:5000, Remaining:4776, Reset:godo.Timestamp{2018-08-23 08:07:36 +0000 UTC}}" volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:09:21Z" level=info msg="controller publish volume called" droplet_id=106740862 method=controller_publish_volume node_id=106740862 region=sfo2 volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a
time="2018-08-23T09:09:21Z" level=info msg="controller publish volume called" droplet_id=106740862 method=controller_publish_volume node_id=106740862 region=sfo2 volume_id=2ff12484-a6b4-11e8-875e-0242ac111912
time="2018-08-23T09:09:23Z" level=info msg="waiting until volume is attached" droplet_id=106740862 method=controller_publish_volume node_id=106740862 region=sfo2 volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a
time="2018-08-23T09:09:23Z" level=info msg="waiting until volume is attached" droplet_id=106740862 method=controller_publish_volume node_id=106740862 region=sfo2 volume_id=2ff12484-a6b4-11e8-875e-0242ac111912
time="2018-08-23T09:09:24Z" level=info msg="action received" action_id=496409274 action_status=in-progress node_id=106740862 region=sfo2 volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a
time="2018-08-23T09:09:24Z" level=info msg="action received" action_id=496409275 action_status=in-progress node_id=106740862 region=sfo2 volume_id=2ff12484-a6b4-11e8-875e-0242ac111912
time="2018-08-23T09:09:25Z" level=info msg="action received" action_id=496409275 action_status=in-progress node_id=106740862 region=sfo2 volume_id=2ff12484-a6b4-11e8-875e-0242ac111912
time="2018-08-23T09:09:25Z" level=info msg="action received" action_id=496409274 action_status=completed node_id=106740862 region=sfo2 volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a
time="2018-08-23T09:09:25Z" level=info msg="action completed" action_id=496409274 node_id=106740862 region=sfo2 volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a
time="2018-08-23T09:09:25Z" level=info msg="volume is attached" droplet_id=106740862 method=controller_publish_volume node_id=106740862 region=sfo2 volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a
time="2018-08-23T09:09:25Z" level=info msg="controller publish volume called" droplet_id=106740862 method=controller_publish_volume node_id=106740862 region=sfo2 volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a
time="2018-08-23T09:09:26Z" level=info msg="action received" action_id=496409275 action_status=in-progress node_id=106740862 region=sfo2 volume_id=2ff12484-a6b4-11e8-875e-0242ac111912
time="2018-08-23T09:09:27Z" level=warning msg="droplet is not able to detach the volume" droplet_id=106740862 error="POST https://api.digitalocean.com/v2/volumes/30135c09-a6b4-11e8-9a91-0242ac112c0a/actions: 422 Droplet already has a pending event." method=controller_publish_volume node_id=106740862 region=sfo2 resp="godo.Rate{Limit:5000, Remaining:4777, Reset:godo.Timestamp{2018-08-23 08:09:31 +0000 UTC}}" volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a
time="2018-08-23T09:09:27Z" level=error msg="method failed" error="rpc error: code = Aborted desc = volume \"30135c09-a6b4-11e8-9a91-0242ac112c0a\" couldn't be attached. droplet 106740862 is in process of another action" method=/csi.v0.Controller/ControllerPublishVolume node_id=106740862 region=sfo2
time="2018-08-23T09:09:27Z" level=info msg="action received" action_id=496409275 action_status=in-progress node_id=106740862 region=sfo2 volume_id=2ff12484-a6b4-11e8-875e-0242ac111912
time="2018-08-23T09:09:28Z" level=info msg="action received" action_id=496409275 action_status=completed node_id=106740862 region=sfo2 volume_id=2ff12484-a6b4-11e8-875e-0242ac111912
time="2018-08-23T09:09:28Z" level=info msg="action completed" action_id=496409275 node_id=106740862 region=sfo2 volume_id=2ff12484-a6b4-11e8-875e-0242ac111912
time="2018-08-23T09:09:28Z" level=info msg="volume is attached" droplet_id=106740862 method=controller_publish_volume node_id=106740862 region=sfo2 volume_id=2ff12484-a6b4-11e8-875e-0242ac111912

provisioner.log

time="2018-08-23T09:07:19Z" level=info msg="removing socket" node_id=106740863 region=sfo2 socket=/var/lib/csi/sockets/pluginproxy/csi.sock
time="2018-08-23T09:07:19Z" level=info msg="server started" addr=/var/lib/csi/sockets/pluginproxy/csi.sock node_id=106740863 region=sfo2
time="2018-08-23T09:07:34Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:07:34Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:07:34Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:07:34Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:07:34Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:07:34Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:07:34Z" level=info msg="delete volume called" method=delete_volume node_id=106740863 region=sfo2 volume_id=0e964035-a6b1-11e8-9a91-0242ac112c0a
time="2018-08-23T09:07:34Z" level=info msg="delete volume called" method=delete_volume node_id=106740863 region=sfo2 volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:07:40Z" level=error msg="method failed" error="DELETE https://api.digitalocean.com/v2/volumes/0e964035-a6b1-11e8-9a91-0242ac112c0a: 409 A volume that's attached to a Droplet cannot be deleted. Please detach it first before deleting." method=/csi.v0.Controller/DeleteVolume node_id=106740863 region=sfo2
time="2018-08-23T09:07:40Z" level=info msg="volume is deleted" method=delete_volume node_id=106740863 region=sfo2 response="godo.Rate{Limit:5000, Remaining:4778, Reset:godo.Timestamp{2018-08-23 08:07:41 +0000 UTC}}" volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:07:40Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:07:40Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:07:40Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:07:40Z" level=info msg="delete volume called" method=delete_volume node_id=106740863 region=sfo2 volume_id=958f2de4-a6b2-11e8-875e-0242ac111912
time="2018-08-23T09:07:49Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:07:49Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:07:49Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:07:49Z" level=info msg="delete volume called" method=delete_volume node_id=106740863 region=sfo2 volume_id=0e964035-a6b1-11e8-9a91-0242ac112c0a
time="2018-08-23T09:07:49Z" level=error msg="method failed" error="DELETE https://api.digitalocean.com/v2/volumes/0e964035-a6b1-11e8-9a91-0242ac112c0a: 409 A volume that's attached to a Droplet cannot be deleted. Please detach it first before deleting." method=/csi.v0.Controller/DeleteVolume node_id=106740863 region=sfo2
time="2018-08-23T09:08:04Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:08:04Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:08:04Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:08:04Z" level=info msg="delete volume called" method=delete_volume node_id=106740863 region=sfo2 volume_id=0e964035-a6b1-11e8-9a91-0242ac112c0a
time="2018-08-23T09:08:04Z" level=error msg="method failed" error="DELETE https://api.digitalocean.com/v2/volumes/0e964035-a6b1-11e8-9a91-0242ac112c0a: 409 A volume that's attached to a Droplet cannot be deleted. Please detach it first before deleting." method=/csi.v0.Controller/DeleteVolume node_id=106740863 region=sfo2
time="2018-08-23T09:08:19Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:08:19Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:08:19Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:08:19Z" level=info msg="delete volume called" method=delete_volume node_id=106740863 region=sfo2 volume_id=0e964035-a6b1-11e8-9a91-0242ac112c0a
time="2018-08-23T09:08:20Z" level=error msg="method failed" error="DELETE https://api.digitalocean.com/v2/volumes/0e964035-a6b1-11e8-9a91-0242ac112c0a: 409 A volume that's attached to a Droplet cannot be deleted. Please detach it first before deleting." method=/csi.v0.Controller/DeleteVolume node_id=106740863 region=sfo2
time="2018-08-23T09:08:34Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:08:34Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:08:34Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:08:34Z" level=info msg="delete volume called" method=delete_volume node_id=106740863 region=sfo2 volume_id=0e964035-a6b1-11e8-9a91-0242ac112c0a
time="2018-08-23T09:08:35Z" level=error msg="method failed" error="DELETE https://api.digitalocean.com/v2/volumes/0e964035-a6b1-11e8-9a91-0242ac112c0a: 409 A volume that's attached to a Droplet cannot be deleted. Please detach it first before deleting." method=/csi.v0.Controller/DeleteVolume node_id=106740863 region=sfo2
time="2018-08-23T09:08:49Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:08:49Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:08:49Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:08:49Z" level=info msg="delete volume called" method=delete_volume node_id=106740863 region=sfo2 volume_id=0e964035-a6b1-11e8-9a91-0242ac112c0a
time="2018-08-23T09:08:50Z" level=error msg="method failed" error="DELETE https://api.digitalocean.com/v2/volumes/0e964035-a6b1-11e8-9a91-0242ac112c0a: 409 A volume that's attached to a Droplet cannot be deleted. Please detach it first before deleting." method=/csi.v0.Controller/DeleteVolume node_id=106740863 region=sfo2
time="2018-08-23T09:09:05Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:09:05Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:09:05Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:09:05Z" level=info msg="create volume called" method=create_volume node_id=106740863 region=sfo2 storage_size_giga_bytes=5 volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:05Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:09:05Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:09:05Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:09:05Z" level=info msg="create volume called" method=create_volume node_id=106740863 region=sfo2 storage_size_giga_bytes=5 volume_name=pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:06Z" level=info msg="creating volume" method=create_volume node_id=106740863 region=sfo2 storage_size_giga_bytes=5 volume_name=pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4 volume_req="&{sfo2 pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4 Created by DigitalOcean CSI driver 5 }"
time="2018-08-23T09:09:06Z" level=info msg="creating volume" method=create_volume node_id=106740863 region=sfo2 storage_size_giga_bytes=5 volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 volume_req="&{sfo2 pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 Created by DigitalOcean CSI driver 5 }"
time="2018-08-23T09:09:06Z" level=info msg="volume created" method=create_volume node_id=106740863 region=sfo2 response="volume:<capacity_bytes:5368709120 id:\"2ff12484-a6b4-11e8-875e-0242ac111912\" > " storage_size_giga_bytes=5 volume_name=pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:06Z" level=info msg="volume created" method=create_volume node_id=106740863 region=sfo2 response="volume:<capacity_bytes:5368709120 id:\"30135c09-a6b4-11e8-9a91-0242ac112c0a\" > " storage_size_giga_bytes=5 volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:19Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:09:19Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:09:19Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:09:19Z" level=info msg="delete volume called" method=delete_volume node_id=106740863 region=sfo2 volume_id=0e964035-a6b1-11e8-9a91-0242ac112c0a
time="2018-08-23T09:09:23Z" level=info msg="get plugin capabitilies called" method=get_plugin_capabilities node_id=106740863 region=sfo2 response="capabilities:<service:<type:CONTROLLER_SERVICE > > "
time="2018-08-23T09:09:23Z" level=info msg="controller get capabilities called" method=controller_get_capabilities node_id=106740863 region=sfo2 response="capabilities:<rpc:<type:CREATE_DELETE_VOLUME > > capabilities:<rpc:<type:PUBLISH_UNPUBLISH_VOLUME > > capabilities:<rpc:<type:LIST_VOLUMES > > "
time="2018-08-23T09:09:23Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:09:23Z" level=info msg="delete volume called" method=delete_volume node_id=106740863 region=sfo2 volume_id=0e964035-a6b1-11e8-9a91-0242ac112c0a

node.log

time="2018-08-23T09:07:28Z" level=info msg="removing socket" node_id=106740862 region=sfo2 socket=/csi/csi.sock
time="2018-08-23T09:07:28Z" level=info msg="server started" addr=/csi/csi.sock node_id=106740862 region=sfo2
time="2018-08-23T09:07:28Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740862 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:07:28Z" level=info msg="node get id called" method=node_get_id node_id=106740862 region=sfo2
time="2018-08-23T09:09:37Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:09:37Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:09:37Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:09:37Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:09:37Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:09:37Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=2ff12484-a6b4-11e8-875e-0242ac111912 volume_name=pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:37Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:09:37Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:09:37Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:37Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:09:37Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:09:38Z" level=info msg="mounting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=2ff12484-a6b4-11e8-875e-0242ac111912 volume_name=pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:38Z" level=info msg="checking if target is mounted" args="[-o TARGET,PROPAGATION,FSTYPE,OPTIONS -M /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/globalmount -J]" cmd=findmnt node_id=106740862 region=sfo2
time="2018-08-23T09:09:38Z" level=info msg="executing mount command" args="[-t ext4 /dev/disk/by-id/scsi-0DO_Volume_pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/globalmount]" cmd=mount node_id=106740862 region=sfo2
time="2018-08-23T09:09:38Z" level=info msg="formatting and mounting stage volume is finished" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=2ff12484-a6b4-11e8-875e-0242ac111912 volume_name=pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:38Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:09:38Z" level=info msg="node publish volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:09:38Z" level=info msg="checking if target is mounted" args="[-o TARGET,PROPAGATION,FSTYPE,OPTIONS -M /var/lib/kubelet/pods/39003bd8-a6b4-11e8-a2a7-b2a41a998ed4/volumes/kubernetes.io~csi/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/mount -J]" cmd=findmnt node_id=106740862 region=sfo2
time="2018-08-23T09:09:38Z" level=info msg="mounting the volume" fsType=ext4 method=node_publish_volume mount_options="[bind]" node_id=106740862 region=sfo2 source=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/globalmount target="/var/lib/kubelet/pods/39003bd8-a6b4-11e8-a2a7-b2a41a998ed4/volumes/kubernetes.io~csi/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/mount" volume_id=2ff12484-a6b4-11e8-875e-0242ac111912
time="2018-08-23T09:09:38Z" level=info msg="executing mount command" args="[-t ext4 -o bind /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/globalmount /var/lib/kubelet/pods/39003bd8-a6b4-11e8-a2a7-b2a41a998ed4/volumes/kubernetes.io~csi/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/mount]" cmd=mount node_id=106740862 region=sfo2
time="2018-08-23T09:09:38Z" level=info msg="bind mounting the volume is finished" fsType=ext4 method=node_publish_volume mount_options="[bind]" node_id=106740862 region=sfo2 source=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/globalmount target="/var/lib/kubelet/pods/39003bd8-a6b4-11e8-a2a7-b2a41a998ed4/volumes/kubernetes.io~csi/pvc-2f979680-a6b4-11e8-a2a7-b2a41a998ed4/mount" volume_id=2ff12484-a6b4-11e8-875e-0242ac111912
time="2018-08-23T09:09:38Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:09:38Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:09:39Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:09:39Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:39Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:09:39Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:09:40Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:09:40Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:09:40Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:09:40Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:40Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:09:40Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:09:42Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:09:42Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:09:43Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:09:43Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:43Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:09:43Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:09:47Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:09:47Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:09:47Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:09:47Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:47Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:09:47Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:09:55Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:09:55Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:09:56Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:09:56Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:09:56Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:09:56Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:10:12Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:10:12Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:10:12Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:10:12Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:10:12Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:10:12Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:10:45Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:10:45Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:10:45Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:10:45Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:10:45Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:10:45Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:11:49Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:11:49Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:11:50Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:11:50Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:11:50Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:11:50Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:13:52Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:13:52Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:13:52Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:13:52Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:13:52Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:13:52Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:15:54Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:15:54Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:15:55Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:15:55Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:15:55Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:15:55Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:17:57Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:17:57Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:17:58Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:17:58Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:17:58Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:17:58Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:20:00Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:20:00Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:20:00Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:20:00Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:20:00Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:20:00Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:22:02Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:22:02Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:22:03Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:22:03Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:22:03Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:22:03Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:24:05Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:24:05Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:24:06Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:24:06Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:24:06Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:24:06Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:26:08Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:26:08Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:26:08Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:26:08Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:26:08Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:26:08Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:28:10Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:28:10Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:28:11Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:28:11Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:28:11Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:28:11Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:30:13Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:30:13Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:30:14Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:30:14Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:30:14Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:30:14Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:32:16Z" level=info msg="node get capabilities called" method=node_get_capabilities node_capabilities="rpc:<type:STAGE_UNSTAGE_VOLUME > " node_id=106740862 region=sfo2
time="2018-08-23T09:32:16Z" level=info msg="node stage volume called" node_id=106740862 region=sfo2
time="2018-08-23T09:32:17Z" level=info msg="checking if source is formatted" args="[/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=blkid node_id=106740862 region=sfo2
time="2018-08-23T09:32:17Z" level=info msg="formatting the volume for staging" fsType=ext4 method=node_stage_volume mount_options="[]" node_id=106740862 region=sfo2 source=/dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 staging_target_path=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4/globalmount volume_id=30135c09-a6b4-11e8-9a91-0242ac112c0a volume_name=pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4
time="2018-08-23T09:32:17Z" level=info msg="executing format command" args="[-F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4]" cmd=mkfs.ext4 node_id=106740862 region=sfo2
time="2018-08-23T09:32:17Z" level=error msg="method failed" error="rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4' output: \"mke2fs 1.43.7 (16-Oct-2017)\\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-2f9a4940-a6b4-11e8-a2a7-b2a41a998ed4 does not exist and no size was specified.\\n\"" method=/csi.v0.Node/NodeStageVolume node_id=106740862 region=sfo2
time="2018-08-23T09:07:27Z" level=info msg="removing socket" node_id=106740863 region=sfo2 socket=/csi/csi.sock
time="2018-08-23T09:07:27Z" level=info msg="server started" addr=/csi/csi.sock node_id=106740863 region=sfo2
time="2018-08-23T09:07:27Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740863 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:07:27Z" level=info msg="node get id called" method=node_get_id node_id=106740863 region=sfo2
time="2018-08-23T09:07:28Z" level=info msg="removing socket" node_id=106740861 region=sfo2 socket=/csi/csi.sock
time="2018-08-23T09:07:28Z" level=info msg="server started" addr=/csi/csi.sock node_id=106740861 region=sfo2
time="2018-08-23T09:07:28Z" level=info msg="get plugin info called" method=get_plugin_info node_id=106740861 region=sfo2 response="name:\"com.digitalocean.csi.dobs\" vendor_version:\"v0.1.4\" "
time="2018-08-23T09:07:28Z" level=info msg="node get id called" method=node_get_id node_id=106740861 region=sfo2

@fatih
Copy link
Contributor

fatih commented Aug 31, 2018

Lately we've introduced many fixes, one of which also addresses this issue (see: #61). The problem @sayhell had was an internal DO issue, which has since been resolved (thanks @sayhell for providing access to your cluster).

I'm closing this issue, as it's very hard to track multiple problems in a single GH issue. If any of you still see this, even after upgrading to the latest CSI version (current stable: v0.1.5), please open a new issue and follow the instructions. I'll make sure to debug with each of you individually to see where the problem is. Thanks to everyone for the feedback.
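
For anyone landing here later: upgrading means redeploying the driver manifest. A minimal sketch, assuming the release-manifest layout the csi-digitalocean repository uses (the exact path and filename may differ per release):

# Apply the release manifest for the target version (path is an assumption):
kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v0.1.5.yaml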

@fumblehool
Copy link

I'm facing the same issue. Restarting the nodes helped me solve it. Thanks @zarbis for the tip 👍 :)

@axsaucedo
Copy link

@fatih we are facing this issue at the moment. How do we check the version of the CSI driver in the cluster, specifically to make sure we're running a version that contains the fix? Alternatively, the issue may have been reintroduced recently, unless what we're seeing is caused by a different problem. Thanks.
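
One way to answer the version question yourself, assuming the default manifests (the namespace, labels, and object names may differ depending on how the driver was installed):

# List the images used by pods in kube-system and pick out the DO CSI plugin:
kubectl -n kube-system get pods -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' | grep do-csi-plugin

# Or inspect the CSI workloads directly:
kubectl -n kube-system get daemonset,statefulset -o wide | grep csi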

@axsaucedo
Copy link

Following up on the comment above: I've checked the image tags for the running CSI containers, and the images + tags are:

image: quay.io/k8scsi/csi-node-driver-registrar:v2.0.1
...
image: digitalocean/do-csi-plugin:v2.1.1

The image versions are newer, so could this issue have been reintroduced in a later release?

@timoreimann
Copy link
Contributor

The issue was resolved a long time ago. If you are seeing the same symptoms, it's most likely a different problem.

Could you please file a new ticket? As Fatih (who no longer maintains this project) indicated, we shouldn't keep piling onto this issue, as things have become quite unwieldy. Thanks!

@digitalocean digitalocean locked as resolved and limited conversation to collaborators Jun 1, 2021