Failed to create NFS provisioned volume in OpenShift Virtualization #341
Comments
This looks like a duplicate of #295. Is the PVC created by an Operator?
No, it is not exactly the same as issue #295, but the solutions you recommended also apply here. Yes, the PVC is created by the "OpenShift Virtualization" operator. To solve issue #341, we have to solve the following four issues:
I now have solutions for the first three issues:
But for the fourth issue, I don't have a solution yet.
Detailed description of the fourth issue: creating a Data Volume causes CDI to create two importer pods, one for the NFS-provisioned PVC and one for the internal PVC used by the HPE NFS Provisioner pod. The latter should not be created.

Consider creating the following DV. The PVC can be created successfully:

$ oc get pvc

dv4 is the PVC I want to create, and PVC "hpe-nfs-994f5584-ea32-4a79-a5d6-2793856faeb4" is the underlying PVC used by the HPE NFS Provisioner. But after 60 seconds, the PVC became:

$ oc get pvc

This is because a pod is created by CDI:

$ oc get pods

Pod "importer-hpe-nfs-994f5584-ea32-4a79-a5d6-2793856faeb4" was created by CDI, but it should not have been created: it targets not the final PVC but the underlying PVC used by the HPE NFS Provisioner.

Consider the following scenario. Create a PVC with the following yaml file:

apiVersion: cdi.kubevirt.io/v1beta1

CDI will create two importer pods, one for dv5 and one for the underlying PVC (which should not be created). The creation of the PVC will fail because two pods (the importer for the underlying PVC and the HPE NFS Provisioner pod) both want to mount the same PVC at the same time. The importer pod is always created faster than the HPE NFS Provisioner pod, so the creation of PVC dv5 fails because the HPE NFS Provisioner cannot mount the underlying PVC that is already mounted by the importer:

58m Warning FailedAttachVolume pod/hpe-nfs-fe636a61-6800-41c9-9bc8-454084591646-9c4bd48-bmbwg Multi-Attach error for volume "pvc-6eff39fd-7234-4d1a-8bfe-bad12307417b" Volume is already used by pod(s) importer-hpe-nfs-fe636a61-6800-41c9-9bc8-454084591646

So the question is: how do we tell CDI not to create an importer pod for the underlying PVC used by the HPE NFS Provisioner, and to create an importer pod only for the final RWX PVC?

CDI picks up a PVC and acts on it when the PVC has a matching annotation in its yaml. The HPE CSI driver copies the same annotation to its internal PVC, which is what causes the problem.
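Since the thread points at annotation propagation, a useful first diagnostic step (inspection only, not a fix) is to compare the CDI-related annotations on the user-facing PVC and on the underlying hpe-nfs-* PVC. The PVC names below are taken from this thread; substitute your own, and note the actual annotation keys will come from your cluster's output:

```shell
# Compare annotations on the user-facing PVC and the provisioner's internal
# PVC (names taken from the thread above). Any cdi.kubevirt.io/* annotation
# copied onto the hpe-nfs-* PVC is what triggers the extra importer pod.
oc get pvc dv5 -o jsonpath='{.metadata.annotations}{"\n"}'
oc get pvc hpe-nfs-fe636a61-6800-41c9-9bc8-454084591646 \
  -o jsonpath='{.metadata.annotations}{"\n"}'
```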
Thanks for the additional details. To be completely honest, we have not qualified OpenShift Virtualization with the HPE CSI Driver. The operation you're performing should be done with a regular RWX PVC (without NFS resources). That said, this issue is a priority for HPE and we're currently working on getting it resolved for the next release of the CSI driver.
Thank you very much for your kind reply. Can we use a regular block-mode RWX PVC? On scod.hpedev.io, it says a block-mode RWX PVC can be provisioned, but the behavior can be unpredictable. Is there a success story of a block-mode RWX PVC used for a VM in order to enable the live migration feature? I have tested a regular block-mode RWX PVC with HPE Primera C630: the creation of the VM succeeded, but the live migration failed. What version of the next CSI driver release can be expected to solve this issue? v2.3.0?
When I initiated a live migration of a VM based on a block-mode RWX PVC, the following error occurred: Generated from kubelet on worker11.openshift.lab
It won't be in v2.3.0; it will be in a subsequent release.
There's a beta chart available that fixes this for 3PAR-pedigree platforms. A GA release of the chart and the certified OpenShift operator is imminent.
Fixed in v2.4.1 using RWX block. |
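For reference, a minimal sketch of the kind of block-mode RWX PVC the fix targets. This is an illustration, not taken from the thread: the PVC name and storageClassName are assumptions and must be replaced with a non-NFS HPE storage class from your own cluster.

```yaml
# Hypothetical block-mode RWX PVC for a VM disk. The name and
# storageClassName are assumptions -- substitute your FC/iSCSI
# (non-NFS) HPE storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-rwx
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 10Gi
  storageClassName: hpe-block   # hypothetical
```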
Environment: OCP 4.12 deployed on bare metal with the OpenShift Virtualization operator installed. The cluster is connected to an HPE Primera storage array. HPE CSI Driver for Kubernetes 2.2.0 and HPE NFS Provisioner 3.0.0 were installed.
Issue:
When creating the following Data Volume:
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dv1
spec:
  source:
    blank: {}
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: nfs-ssd
It failed with the following error messages:
failed to provision volume with StorageClass "nfs-ssd": rpc error: code = Internal desc = Failed to create NFS provisioned volume pvc-c2a552e7-5d05-4fe0-b32f-bc79ba6fb1e3, err persistentvolumeclaims "hpe-nfs-c2a552e7-5d05-4fe0-b32f-bc79ba6fb1e3" is forbidden: cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on: , , rollback status: success
nfs-ssd is a storage class backed by the HPE NFS Provisioner.
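A "cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on" error generally means the service account creating the child object lacks update permission on the owner's finalizers subresource. The following is only a sketch of the kind of RBAC rule that addresses such errors, assuming the owner here is a CDI DataVolume; the apiGroup, resource, and role name are assumptions to verify against the actual owner kind in your cluster:

```yaml
# Hypothetical ClusterRole fragment granting update rights on the owner's
# finalizers subresource. The apiGroup/resource (CDI DataVolumes) and the
# role name are assumptions; bind it to the CSI controller's service account.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hpe-csi-owner-finalizers
rules:
  - apiGroups: ["cdi.kubevirt.io"]
    resources: ["datavolumes/finalizers"]
    verbs: ["update"]
```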