
Unable to attach or mount volumes: unmounted volumes... timed out waiting for the condition #68

Open
linuxhooligans opened this issue Mar 30, 2022 · 9 comments


@linuxhooligans

Hello everyone,
I have a problem that I think is related to ctrox/csi-s3:
some of the containers from my deployment do not start.
What can I do to troubleshoot this?

  1. Error from the pod (during creation)
    Pod status: Pending
    Unable to attach or mount volumes: unmounted volumes=[var-srs], unattached volumes=[logs srslog certs-fastqp-billing var-srs kube-api-access-jdnrv]: timed out waiting for the condition

  2. PV status

~ % kubectl get pv                   
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
pvc-43bebc4e-9402-484a-9ab8-ffef6d5ab541   1Gi        RWX            Delete           Bound    s3-srs                       csi-s3                  42h
pvc-9a12de8f-d108-41fa-85f1-f69786c1117e   1Gi        RWX            Delete           Bound    s3-docserver                 csi-s3                  23h
pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f   1Gi        RWX            Delete           Bound    var-srs                      csi-s3                  91d

  3. PVC status
~ % kubectl get pvc 
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
s3-docserver   Bound    pvc-9a12de8f-d108-41fa-85f1-f69786c1117e   1Gi        RWX            csi-s3         47h
s3-srs         Bound    pvc-43bebc4e-9402-484a-9ab8-ffef6d5ab541   1Gi        RWX            csi-s3         21d
var-srs        Bound    pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f   1Gi        RWX            csi-s3         91d
  4. Logs from the CSI driver pods
~ % kubectl logs --tail 200 -l app=csi-provisioner-s3 -c csi-s3 --namespace csi-s3 
I0329 07:14:09.802379       1 driver.go:73] Driver: ch.ctrox.csi.s3-driver 
I0329 07:14:09.802515       1 driver.go:74] Version: v1.2.0-rc.1 
I0329 07:14:09.802526       1 driver.go:81] Enabling controller service capability: CREATE_DELETE_VOLUME
I0329 07:14:09.802533       1 driver.go:93] Enabling volume access mode: SINGLE_NODE_WRITER
I0329 07:14:09.802897       1 server.go:108] Listening for connections on address: &net.UnixAddr{Name:"//var/lib/kubelet/plugins/ch.ctrox.csi.s3-driver/csi.sock", Net:"unix"}
I0329 07:14:10.196555       1 utils.go:97] GRPC call: /csi.v1.Identity/Probe
I0329 07:14:10.197502       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginInfo
I0329 07:14:10.197941       1 utils.go:97] GRPC call: /csi.v1.Identity/GetPluginCapabilities
I0329 07:14:10.198398       1 utils.go:97] GRPC call: /csi.v1.Controller/ControllerGetCapabilities
kubectl logs --tail 1000 -l app=csi-s3 -c csi-s3 --namespace csi-s3  | grep 1efd572fe44

W0330 06:05:39.830163       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/2f31fe79-949f-4353-a573-8aab5d2a8564/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 06:05:39.830195       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.
W0330 06:05:48.818253       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/a7df3c76-b643-4bba-99e6-6f0c5dd1968d/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 06:05:48.818274       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.
W0330 06:05:58.829634       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/a8d95422-3d4b-4216-bb00-09465bc53b10/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 06:05:58.829676       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.
W0330 05:31:24.709756       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/dfb5ea78-1246-479a-be10-66031bc629b4/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 05:31:24.709776       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.
W0330 05:31:41.778525       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/9e44613f-a4fe-4222-a26a-30dd2f1518bb/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 05:31:41.778543       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.
I0330 06:05:12.467267       1 nodeserver.go:79] target /var/lib/kubelet/pods/18573abf-cd0a-4928-a791-10fa35fb8959/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount
volumeId pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f
I0330 06:05:12.475914       1 mounter.go:64] Mounting fuse with command: s3fs and args: [pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f:/csi-fs /var/lib/kubelet/pods/18573abf-cd0a-4928-a791-10fa35fb8959/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount -o use_path_request_style -o url=http://s3-devops.int.---.ru/ -o endpoint= -o allow_other -o mp_umask=000]
I0330 06:05:12.490621       1 nodeserver.go:99] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f successfuly mounted to /var/lib/kubelet/pods/18573abf-cd0a-4928-a791-10fa35fb8959/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount
I0330 06:05:15.628376       1 nodeserver.go:79] target /var/lib/kubelet/pods/39c8ab02-b48f-44ce-8f6f-58b8a8dc85c0/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount
volumeId pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f
I0330 06:05:15.637795       1 mounter.go:64] Mounting fuse with command: s3fs and args: [pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f:/csi-fs /var/lib/kubelet/pods/39c8ab02-b48f-44ce-8f6f-58b8a8dc85c0/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount -o use_path_request_style -o url=http://s3-devops.int.---.ru/ -o endpoint= -o allow_other -o mp_umask=000]
I0330 06:05:15.653314       1 nodeserver.go:99] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f successfuly mounted to /var/lib/kubelet/pods/39c8ab02-b48f-44ce-8f6f-58b8a8dc85c0/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount
W0330 06:05:48.831073       1 mounter.go:85] Unable to find PID of fuse mount /var/lib/kubelet/pods/ce10718e-1198-4dde-93f9-06b09a98ab35/volumes/kubernetes.io~csi/pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f/mount, it must have finished already
I0330 06:05:48.831091       1 nodeserver.go:119] s3: volume pvc-a8cdcc05-1172-4427-90d7-1efd572fe44f has been unmounted.
@IharKrasnik

IharKrasnik commented Apr 29, 2022

I have the same issue with DigitalOcean: the PVC is created and bound and the bucket is created in DO Spaces, but containers can't mount the ReadWriteMany volume.

Have you come up with a fix?

@yangfei91

Same issue here with Kubernetes 1.23.5 and MinIO:

Node-Selectors:
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason              Age                  From                     Message
  ----     ------              ----                 ----                     -------
  Warning  FailedMount         4m1s (x29 over 81m)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-ctdgx]: timed out waiting for the condition
  Warning  FailedMount         103s (x8 over 83m)   kubelet                  Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[kube-api-access-ctdgx webroot]: timed out waiting for the condition
  Warning  FailedAttachVolume  39s (x25 over 83m)   attachdetach-controller  AttachVolume.Attach failed for volume "pvc-31957412-93b7-4f17-8ed7-0ae547c3e9b1" : Attach timeout for volume pvc-31957412-93b7-4f17-8ed7-0ae547c3e9b1

@SergkeiM

I have the same issue with DO Spaces.

@Liad-n

Liad-n commented Jul 7, 2022

Same issue with MinIO, any solution so far?

@leyi-bc

leyi-bc commented Jul 18, 2022

Same issue with MinIO, any solution so far?

@PaulYuanJ

Same issue with MinIO, any solution so far?

# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
csi-s3-pvc   Bound    pvc-16e6929a-feca-49a4-b99d-aa09a3cd9793   5Gi        RWO            csi-s3         154m

# kubectl get pod
NAME                READY   STATUS              RESTARTS   AGE
csi-s3-test-nginx   0/1     ContainerCreating   0          13s


55m         Warning   FailedAttachVolume                                 pod/csi-s3-test-nginx                 AttachVolume.Attach failed for volume "pvc-16e6929a-feca-49a4-b99d-aa09a3cd9793" : Attach timeout for volume pvc-16e6929a-feca-49a4-b99d-aa09a3cd9793
56m         Warning   FailedMount                                        pod/csi-s3-test-nginx                 Unable to attach or mount volumes: unmounted volumes=[webroot], unattached volumes=[webroot kube-api-access-nfnld]: timed out waiting for the condition

@fallmo

fallmo commented Sep 16, 2022

I had the same problem. I looked at the logs of the csi-attacher-s3 pod; first I saw `Failed to list *v1beta1.VolumeAttachment: the server could not find the requested resource`. I figured it was a Kubernetes version issue, so I updated the container image of the csi-attacher StatefulSet from v2.2.1 to canary (the latest).

kubectl -n kube-system set image statefulset/csi-attacher-s3 csi-attacher=quay.io/k8scsi/csi-attacher:canary
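
If you do this, a quick way to confirm the new image actually rolled out (just a sketch, assuming the StatefulSet name used above) is:

kubectl -n kube-system rollout status statefulset/csi-attacher-s3
kubectl -n kube-system describe statefulset/csi-attacher-s3 | grep Image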

Next I got a permission error: `v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:csi-attacher-sa" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope`.

I tried to modify the role bindings, but I couldn't find the right combination, so I ended up giving the csi-attacher-sa service account cluster-admin privileges as shown below:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-all
subjects:
  - kind: ServiceAccount
    name: csi-attacher-sa
    namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
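
Granting cluster-admin works, but it is much broader than needed. A narrower alternative (an untested sketch modelled on the upstream external-attacher RBAC; the csi-attacher-runner ClusterRole name is illustrative) would look like this:

# Hypothetical minimal permissions for the attacher sidecar; adjust names/namespace to your install
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-attacher-runner-binding
subjects:
  - kind: ServiceAccount
    name: csi-attacher-sa
    namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: csi-attacher-runner

If that works in your cluster, the cluster-admin binding above should not be necessary.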

@Hcreak

Hcreak commented Dec 20, 2022

(quoting @fallmo's comment above)

Thank you for your solution, it helped me a lot.

@CallMeLaNN

`timed out waiting for the condition` is too general; you have to look for the error that caused the timeout.

If `AttachVolume.Attach failed`, it is not related to the OP's error. Please refer to #80, #72 (comment) and https://github.com/ctrox/csi-s3/pull/70/files
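
For anyone else landing here, a rough way to dig past the generic timeout message (a sketch; <pod-name> and <namespace> are placeholders, and namespaces vary between installs: this thread shows both csi-s3 and kube-system):

# Pod events show whether it is an attach failure or a mount failure
kubectl describe pod <pod-name> -n <namespace>

# FailedAttachVolume / "Attach timeout": check the external attacher
kubectl logs -n kube-system statefulset/csi-attacher-s3 --tail=200

# FailedMount only: check the csi-s3 node plugin on the node the pod is scheduled on
kubectl logs --tail 200 -l app=csi-s3 -c csi-s3 --namespace csi-s3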
