
CSI Driver fails to mount in pods #377

Open
brngates98 opened this issue Feb 19, 2024 · 4 comments


brngates98 commented Feb 19, 2024

[screenshot of the mount error]

So I am playing with the HPE CSI Driver on our Nimbles. I thought I had everything configured right: it is creating the volumes on the Nimbles, and they bind in Kubernetes, but they fail to mount in pods.

We are already using the VMware CSI and an SMB CSI driver without issue, so I'm not entirely sure what I am doing wrong here:

Storage Class YAML:

[screenshots of the StorageClass definition attached in the original issue]
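For comparison, a typical iSCSI StorageClass for the HPE CSI Driver looks roughly like the sketch below; the class name and the secret name/namespace (hpe-backend/hpe-storage) are assumptions based on a common deployment and may differ in this cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-nimble-iscsi                                        # hypothetical name
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs
  # Secret holding the array credentials; name and namespace are assumed here
  csi.storage.k8s.io/provisioner-secret-name: hpe-backend
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: hpe-backend
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: hpe-backend
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-publish-secret-name: hpe-backend
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  description: "Volume provisioned by the HPE CSI Driver"
  accessProtocol: "iscsi"
reclaimPolicy: Delete
allowVolumeExpansion: true
```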

Any ideas?

The end goal is to build a StorageClass for both of our Nimbles and, once things are working, to play with the NFS Provisioner so we can make use of some RWX volumes.
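For the RWX goal, the HPE CSI Driver's NFS Server Provisioner is enabled through a StorageClass parameter; a hedged sketch (class and claim names are made up, backend secrets omitted for brevity):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-nimble-nfs                 # hypothetical name
provisioner: csi.hpe.com
parameters:
  nfsResources: "true"                 # tells the driver to front the volume with an NFS server
  accessProtocol: "iscsi"
  # ...same csi.storage.k8s.io/* secret parameters as in the iSCSI class above
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data                    # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany                    # RWX is served by the NFS server in front of the Nimble volume
  resources:
    requests:
      storage: 100Gi
  storageClassName: hpe-nimble-nfs
```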

DETAILS:

  • Nodes: Ubuntu Server 22.04.3 LTS
  • RKE2 1.27
  • Provisioned by Rancher

@datamattsson (Collaborator)

The error you're seeing is because the worker node is unable to find the block device on the host. Before anything else, is this iSCSI or FC?

If iSCSI:

  • Make sure the worker nodes have access to the data networks on the array. These are usually the same VLANs as the VMkernel networks that the ESX hosts are using for storage. (A quick check from a worker node is sketched at the end of this comment.)

If FC:

  • Running the HPE CSI Driver on VM worker nodes is not supported and won't work, as the data path cannot be discovered.
  • If you just want to use the NFS server, I advise you to wait for the next release of the HPE CSI Driver (coming soon), which will allow you to provision HPE CSI Driver NFS servers using vSphere CSI Driver StorageClasses.
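As a quick sanity check of the iSCSI data path from a worker node (the discovery IP below is hypothetical; substitute the array's actual discovery IP on the data VLAN):

```shell
# On a worker node: is the array's data network reachable at all?
ping -c 3 192.168.221.10                                   # hypothetical Nimble discovery IP

# Ask the array which iSCSI targets it exposes; a timeout here means
# the node has no usable path to the data network
sudo iscsiadm -m discovery -t sendtargets -p 192.168.221.10:3260

# Once a volume has been attached, the multipath devices should be visible
sudo multipath -ll
```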


brngates98 commented Feb 20, 2024

They are using iSCSI. The VMs for my cluster are on subnet 192.168.70.x (VLAN 70); our iSCSI connections are on 192.168.221.x and 192.168.222.x (VLANs 221/222).

The ESXi hosts have a direct connection and everything works fine there. The virtual machines that run our Kubernetes cluster seem to be creating new volumes on the Nimbles, and even attaching to them (I think), as the volumes show as online on the Nimbles.
Everything is routable between the subnets.

@datamattsson (Collaborator)

> our iSCSI connections are on 192.168.221.x and 192.168.222.x (VLANs 221/222)

The VMs need in-guest network interfaces on these VLANs. The initial creation of the volume is a control-plane-only operation, which completes successfully over your VLAN 70.
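In practice, on Ubuntu 22.04 worker VMs this means adding a vNIC per iSCSI VLAN (port groups on VLANs 221 and 222) and addressing them with netplan. A minimal sketch, assuming hypothetical interface names ens224/ens256 and free host addresses on each data subnet:

```yaml
# /etc/netplan/60-iscsi.yaml  (hypothetical file name)
network:
  version: 2
  ethernets:
    ens224:                          # assumed name of the vNIC on the VLAN 221 port group
      addresses: [192.168.221.50/24] # assumed free address on the 221 data subnet
    ens256:                          # assumed name of the vNIC on the VLAN 222 port group
      addresses: [192.168.222.50/24] # assumed free address on the 222 data subnet
```

Apply with `sudo netplan apply`, then re-run the iSCSI discovery check from the earlier comment to confirm the data path.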

@brngates98 (Author)

OOOOOOOOOOOO so I need to add secondary NICs to our VMs to place them onto the iSCSI VLANs then, eh. Now I just feel dumb :)
