What steps did you take and what happened:
node-disk-manager panics in an environment with NVMe multipath enabled.
What did you expect to happen:
node-disk-manager should run correctly.
@cospotato We have never tested on nodes that use NVMe multipath devices, but this issue looks like udev is not reporting the DEVNAME for the device.
Also, as per my understanding, NVMe devices follow the naming format nvme<controller_num>n<namespace_num>, but the devices in the example above use a different format, nvme3c3n1.
Can you provide details of the cluster / node and the OS running on the machines?
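For context, Linux's native NVMe multipath support exposes hidden per-path block devices named nvme<ctrl>c<instance>n<namespace> alongside the regular nvme<ctrl>n<namespace> namespace devices, which would explain a name like nvme3c3n1. Below is a minimal Go sketch, not part of node-disk-manager, showing how the two formats could be told apart; the device names in main are only examples.

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// e.g. nvme0n1: a regular (multipath-capable) namespace device
	namespaceRe = regexp.MustCompile(`^nvme\d+n\d+$`)
	// e.g. nvme3c3n1: a hidden per-controller path device created when
	// native NVMe multipath is enabled in the kernel
	perPathRe = regexp.MustCompile(`^nvme\d+c\d+n\d+$`)
)

// classify reports which NVMe naming format a device name matches.
func classify(dev string) string {
	switch {
	case perPathRe.MatchString(dev):
		return "per-path device (NVMe multipath)"
	case namespaceRe.MatchString(dev):
		return "namespace device"
	default:
		return "unknown"
	}
}

func main() {
	for _, d := range []string{"nvme0n1", "nvme3c3n1", "sda"} {
		fmt.Printf("%s: %s\n", d, classify(d))
	}
}
```

If NDM only expects the nvme<ctrl>n<ns> format, a check along these lines could be one way to recognise and skip the hidden per-path devices, assuming that is the desired behaviour.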
@akhilerm The node OS is AliOS, which is a CentOS 7 derivative modified by Alibaba Group. The NVMe devices on it are the "Local Disk" product of AliCloud. I googled the naming format and found this page.
The output of the following commands will help us better understand what's going on:
[Pasting long output into a GitHub gist or other pastebin is fine.]
kubectl get pods -n openebs
kubectl get blockdevices -n openebs -o yaml
kubectl get blockdeviceclaims -n openebs -o yaml
kubectl logs <ndm daemon pod name> -n openebs
lsblk from nodes where the NDM daemonset is running
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
I wrote a tiny program to get the udev DEVNAME.
The result on the nodes is:
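The original program and its output are not included in the issue. As a rough stand-in, a minimal Go sketch that asks udev for a block device's properties and prints DEVNAME could look like the following; the udevadm-based approach and the /dev/nvme0n1 path are assumptions for illustration.

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// devname queries udev properties via `udevadm info` and returns the value
// of the DEVNAME key, or an empty string if udev does not report one.
func devname(device string) (string, error) {
	out, err := exec.Command("udevadm", "info", "--query=property", "--name="+device).Output()
	if err != nil {
		return "", err
	}
	scanner := bufio.NewScanner(bytes.NewReader(out))
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "DEVNAME=") {
			return strings.TrimPrefix(line, "DEVNAME="), nil
		}
	}
	return "", scanner.Err()
}

func main() {
	// /dev/nvme0n1 is just an example device path.
	name, err := devname("/dev/nvme0n1")
	if err != nil {
		fmt.Println("udevadm failed:", err)
		return
	}
	if name == "" {
		fmt.Println("udev did not report DEVNAME for this device")
		return
	}
	fmt.Println("DEVNAME:", name)
}
```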
Environment:
Kubernetes version (use kubectl version):
OS (e.g. from /etc/os-release):