export GOVC_PASSWORD=P@ssw0rd
export GOVC_USERNAME=administrator@vsphere.local
export GOVC_URL=https://vc01.satm.eng.vmware.com/sdk
export GOVC_INSECURE=1
export OCP_CLUSTER_NAME=demo
# Power off VMs
govc find / -type m -runtime.powerState poweredOn -name "$OCP_CLUSTER_NAME-*" | xargs -L 1 govc vm.power -off
# Enable Disk UUID
govc find / -type m -runtime.powerState poweredOff -name "$OCP_CLUSTER_NAME-*" | xargs -L 1 govc vm.change -e="disk.enableUUID=1" -vm
# Upgrade VMHW to v15
govc find / -type m -runtime.powerState poweredOff -name "$OCP_CLUSTER_NAME-*" | xargs -L 1 govc vm.upgrade -version=15 -vm
# Power on VMs
govc find / -type m -runtime.powerState poweredOff -name "$OCP_CLUSTER_NAME-*" | grep -v rhcos | xargs -L 1 govc vm.power -on
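# Mark all nodes as uninitialised so the external CPI will adopt them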
kubectl taint nodes --all 'node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule'
Open csi-vsphere.conf and change cluster-id so that it is unique within your vCenter; the OCP cluster ID (e.g. demo-qrtnt) is adequate. N.B.: it is extremely important that this value is unique per Kubernetes cluster, or you will hit volume-mounting problems; every Kubernetes cluster must have a totally unique cluster-id. Also edit the vCenter address, username, password, and datacenter to match your environment, and do the same for vsphere.conf.
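For reference, a minimal csi-vsphere.conf sketch follows; the vCenter address matches the govc variables above, while the username, password, datacenter name and cluster-id are placeholders you must replace with your own values:

[Global]
cluster-id = "demo-qrtnt"

[VirtualCenter "vc01.satm.eng.vmware.com"]
insecure-flag = "true"
user = "administrator@vsphere.local"
password = "P@ssw0rd"
port = "443"
datacenters = "Datacenter"

The vsphere.conf consumed by the CPI follows the same INI-style [Global] and [VirtualCenter] layout (it does not need cluster-id).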
# Create and verify the CSI config secret
oc create secret generic vsphere-config-secret --from-file=csi-vsphere.conf --namespace=kube-system
oc get secret vsphere-config-secret --namespace=kube-system
# Create the CPI ConfigMap
oc create configmap cloud-config --from-file=vsphere.conf --namespace=kube-system
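Verify the ConfigMap the same way:
oc get configmap cloud-config --namespace=kube-system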
Official Documentation - Install vSphere Cloud Provider Interface
oc apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-roles.yaml
oc apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/cloud-controller-manager-role-bindings.yaml
oc apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-vsphere/master/manifests/controller-manager/vsphere-cloud-controller-manager-ds.yaml
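Before checking ProviderIDs, it is worth confirming that the cloud-controller-manager pods came up; the DaemonSet name below is taken from the upstream manifests and may change between releases:
kubectl get daemonset vsphere-cloud-controller-manager --namespace=kube-system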
This must print a ProviderID per node (of the form vsphere://<vm-uuid>), or CSI will not work. If ProviderID is not populated, the CPI was probably not initialised correctly (check your taints).
kubectl describe nodes | grep "ProviderID"
Official Documentation - Deploy the vSphere Container Storage Plug-in on a Native Kubernetes Cluster
# Taint the Control Plane nodes
kubectl taint nodes <k8s-primary-name> node-role.kubernetes.io/master=:NoSchedule
# Create the namespace
oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/release-2.4/manifests/vanilla/namespace.yaml
# Install the CSI
oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/vsphere-csi-driver/v2.4.0/manifests/vanilla/vsphere-csi-driver.yaml
# Verify the CSI driver components
kubectl get deployment --namespace=vmware-system-csi
kubectl get daemonsets vsphere-csi-node --namespace=vmware-system-csi
kubectl describe csidrivers
kubectl get CSINode
Edit the sc.yaml and pvc.yaml to suit your environment (change the storage policy in the SC, adjust the name if desired).
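If you do not have these files to hand, minimal sketches follow; the object names and the storage policy are placeholders for your environment:

# sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "vSAN Default Storage Policy"

# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vsphere-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: vsphere-sc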
kubectl apply -f sc.yaml
kubectl apply -f pvc.yaml
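Optionally, confirm the StorageClass registered with the expected provisioner:
kubectl get storageclass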
After 10-15 seconds, the STATUS column should show Bound for both:
kubectl get pv,pvc