
✨ Add KubeVirt provider #106

Merged: 4 commits merged into main from feat/kubevirt on Oct 3, 2024

Conversation

@michal-gubricky (Contributor) commented Jun 12, 2024

What this PR does / why we need it:

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #95
Fixes #34
Fixes #112

Special notes for your reviewer:

See docs https://cluster-api.sigs.k8s.io/user/quick-start.html?highlight=management and https://kubevirt.io/quickstart_kind/

  1. I created an Ubuntu instance in gx-scs (flavor SCS-16V:32:100).
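For illustration only, such an instance could be created with the OpenStack CLI; the image, network, and key pair names below are placeholders, not taken from this PR:

# placeholder image/network/key pair values – adjust to your gx-scs project
openstack server create \
  --flavor SCS-16V:32:100 \
  --image "Ubuntu 22.04" \
  --network <your-network> \
  --key-name <your-keypair> \
  capi-management-host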
  2. Create a KinD cluster with a config like:
cat <<EOF > kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
# the default CNI will not be installed
  disableDefaultCNI: true
nodes:
- role: control-plane
  image: kindest/node:v1.29.2@sha256:51a1434a5397193442f0be2a297b488b6c919ce8a3931be0ce822606ea5ca245
  extraMounts:
   - containerPath: /var/lib/kubelet/config.json
     hostPath: <YOUR DOCKER CONFIG FILE PATH>
EOF
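Then create the cluster from that config (standard kind usage):

kind create cluster --config kind-config.yaml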
  3. Install a CNI based on your preferences - https://cluster-api.sigs.k8s.io/user/quick-start.html?highlight=management#install-the-calico-cni
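For example, Calico can be installed with a single manifest (the version below is only an example; pick a current release):

# example Calico version – check the Calico releases for a current one
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.3/manifests/calico.yaml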

  4. Install KubeVirt on the kind cluster

# get KubeVirt version
KV_VER=$(curl "https://api.github.com/repos/kubevirt/kubevirt/releases/latest" | jq -r ".tag_name")
# deploy required CRDs
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-operator.yaml"
# deploy the KubeVirt custom resource
kubectl apply -f "https://github.com/kubevirt/kubevirt/releases/download/${KV_VER}/kubevirt-cr.yaml"
kubectl wait -n kubevirt kv kubevirt --for=condition=Available --timeout=10m
  5. Deploy CAPI/CAPK/CSO (Cluster API, the Cluster API Provider KubeVirt, and the Cluster Stack Operator)
export CLUSTER_TOPOLOGY=true
clusterctl init --infrastructure kubevirt
# install CSO in your favourite way
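One possible way to install CSO, shown only as an assumption (verify the release manifest URL and any required environment variables against the cluster-stack-operator documentation):

# assumed release asset name, not taken from this PR – check the CSO release page
kubectl apply -f https://github.com/SovereignCloudStack/cluster-stack-operator/releases/latest/download/cso-infrastructure-components.yaml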
  6. Install MetalLB for load balancing
METALLB_VER=$(curl "https://api.github.com/repos/metallb/metallb/releases/latest" | jq -r ".tag_name")
kubectl apply -f "https://raw.githubusercontent.com/metallb/metallb/${METALLB_VER}/config/manifests/metallb-native.yaml"
kubectl wait pods -n metallb-system -l app=metallb,component=controller --for=condition=Ready --timeout=10m
kubectl wait pods -n metallb-system -l app=metallb,component=speaker --for=condition=Ready --timeout=2m

Now, we’ll create the IPAddressPool and L2Advertisement custom resources. The script below creates the CRs with addresses that match the kind cluster's Docker network:

GW_IP=$(docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' kind)
NET_IP=$(echo ${GW_IP} | sed -E 's|^([0-9]+\.[0-9]+)\..*$|\1|g')
cat <<EOF | sed -E "s|172.19|${NET_IP}|g" | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: capi-ip-pool
  namespace: metallb-system
spec:
  addresses:
  - 172.19.255.200-172.19.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system
EOF
  7. Create the Cluster Stack
apiVersion: clusterstack.x-k8s.io/v1alpha1
kind: ClusterStack
metadata:
  name: clusterstack
spec:
  provider: kubevirt
  name: alpha
  kubernetesVersion: "1.29"
  channel: custom
  autoSubscribe: false
  noProvider: true
  versions:
  - v0-sha.n24nerk
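Assuming the manifest above is saved as clusterstack.yaml (the file name is just an example), apply it and check the resource:

kubectl apply -f clusterstack.yaml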
$ kubectl get clusterstack
NAME           PROVIDER   CLUSTERSTACK   K8S    CHANNEL   AUTOSUBSCRIBE   USABLE           LATEST                                         AGE   REASON   MESSAGE
clusterstack   kubevirt   alpha          1.29   custom    false           v0-sha-n24nerk   kubevirt-alpha-1-29-v0-sha-n24nerk | v1.29.5   2m             
  8. Create the cluster
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: my-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 10.243.0.0/16
    services:
      cidrBlocks:
      - 10.95.0.0/16
  topology:
    class: kubevirt-alpha-1-29-v0-sha.n24nerk
    version: v1.29.5
    variables:
    - name: image
      value: "quay.io/capk/ubuntu-2204-container-disk:v1.29.5"
    # Enable CSI driver in tenant cluster
    # - name: csi_driver
    #   value: true
    controlPlane:
      replicas: 1
    workers:
      machineDeployments:
      - class: default-worker
        name: alpha
        replicas: 1
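Save the manifest (e.g. as cluster.yaml, the file name later referenced in step 11) and apply it:

kubectl apply -f cluster.yaml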
k get cluster 
NAME         CLUSTERCLASS                         PHASE         AGE   VERSION
my-cluster   kubevirt-alpha-1-29-v0-sha.n24nerk   Provisioned   3s    v1.29.5

k get vm 
NAME                                 AGE     STATUS    READY
my-cluster-alpha-9t2pf-jq4d8-rpvf9   8m56s   Running   True
my-cluster-dmkh2-d7jfc               11m     Running   True
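For a more detailed view of provisioning progress, clusterctl can describe the cluster (an optional check, not part of the original steps):

clusterctl describe cluster my-cluster --show-conditions all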
  9. Install cloud-provider-kubevirt

    • get kubeconfig of workload cluster
    clusterctl get kubeconfig my-cluster > my-cluster.kubeconfig
    
    • create secret to pass kubeconfig of workload cluster into cloud-provider-kubevirt pod
    kubectl create secret generic kubeconfig --from-file=kubeconfig=my-cluster.kubeconfig
    
    • create configmap with cloud-config like
    kubectl create configmap cloud-config \
      --from-literal=cloud-config='namespace: default
    loadBalancer:
      creationPollInterval: 5
      creationPollTimeout: 60' \
      -n default
    
    • deploy cloud-provider-kubevirt
    # Clone the repo first
    git clone https://github.com/kubevirt/cloud-provider-kubevirt.git
    cd cloud-provider-kubevirt/config/base
    # kustomize must be installed (https://kustomize.io/)
    # the kustomization.yaml file uses deprecated fields, so fix it first
    kustomize edit fix --vars
    # Apply the kustomization
    kubectl apply -k .
    
    • patch cloud-provider-kubevirt deployment
    # Retrieve the current args of the container
    CURRENT_ARGS=$(kubectl get deployment kubevirt-cloud-controller-manager -n default -o=jsonpath='{.spec.template.spec.containers[0].args}')
    
    # Append the cluster-name argument into container
    NEW_ARGS=$(jq --argjson arg_array '["--cluster-name=my-cluster"]' \
           --argjson current_args "$CURRENT_ARGS" \
           '$current_args + $arg_array' <<< '[]')
    
    # Apply the updated args to the deployment
    kubectl patch deployment kubevirt-cloud-controller-manager -n default --type=json \
        -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": '"$NEW_ARGS"'}]'
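    
    • optionally verify that the patched deployment rolls out cleanly (an added check, not from the original steps)
    kubectl rollout status deployment/kubevirt-cloud-controller-manager -n default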
    
  10. Test load balancing by creating a simple nginx deployment on the workload cluster

kubectl --kubeconfig my-cluster.kubeconfig create deploy --image nginx --port 80 nginx
kubectl --kubeconfig my-cluster.kubeconfig expose deployment nginx --port 80 --type LoadBalancer

# check if svc in kind cluster was created
kubectl get svc 
NAME                               TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
a008600cad34b477ca41aad89296a563   LoadBalancer   10.96.68.186    172.18.255.201   80:32062/TCP     15s
kubernetes                         ClusterIP      10.96.0.1       <none>           443/TCP          36m
my-cluster-lb                      LoadBalancer   10.96.235.163   172.18.255.200   6443:30745/TCP   11m

curl 172.18.255.201
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
  11. If you want to install the KubeVirt CSI driver, follow these steps:
    • enable it in the cluster.yaml file via variables, see the example in step 8
    • install Containerized Data Importer (CDI)
    export VERSION=$(basename $(curl -s -w %{redirect_url} https://github.com/kubevirt/containerized-data-importer/releases/latest))
    kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
    kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
    
    • clone csi-driver repository
    git clone https://github.com/kubevirt/csi-driver.git
    
    • deploy the service account on the infra cluster (it must be deployed in the namespace of the tenant cluster inside the infra cluster)
    kubectl apply -f csi-driver/deploy/infra-cluster-service-account.yaml
    
    • add the namespace where the tenant cluster is deployed to the kustomization.yaml file
    sed -i '1i\namespace: default' csi-driver/deploy/controller-infra/base/kustomization.yaml
    
    • create secret kvcluster-kubeconfig
    kubectl create secret generic kvcluster-kubeconfig --from-file=value=my-cluster.kubeconfig
    
    • create configmap driver-config
    kubectl apply -f - << EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: driver-config
    data:
      infraClusterNamespace: default
      infraClusterLabels: csi-driver/cluster=my-cluster
      infraStorageClassEnforcement: |
        allowAll: true
        allowDefault: true
    EOF
    
    • deploy the controller resources in the infra cluster (adjust the image field of the csi-driver container in deploy/controller-infra/base/deploy.yaml to registry.dnation.cloud/test-mg/kubevirt-csi-driver:latest)
    kubectl apply --kustomize csi-driver/deploy/controller-infra/base
    
  12. Test the KubeVirt CSI driver
kubectl apply --kubeconfig my-cluster.kubeconfig -f - << EOF 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: 1g-kubevirt-disk
spec:
  storageClassName: kubevirt
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF


$ k get pvc --kubeconfig my-cluster.kubeconfig 
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
1g-kubevirt-disk   Bound    pvc-0090496c-62a0-4112-9354-67591e49977f   1Gi        RWO            kubevirt       <unset>                 107s
$ k get pv --kubeconfig my-cluster.kubeconfig 
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-0090496c-62a0-4112-9354-67591e49977f   1Gi        RWO            Delete           Bound    default/1g-kubevirt-disk   kubevirt       <unset>                          114s
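To exercise the provisioned volume end to end, a throwaway pod that mounts the PVC can be created in the workload cluster (this pod is an illustrative addition, not part of the original test):

kubectl apply --kubeconfig my-cluster.kubeconfig -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: 1g-kubevirt-disk
EOF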

@michal-gubricky michal-gubricky linked an issue Jun 12, 2024 that may be closed by this pull request
@michal-gubricky michal-gubricky force-pushed the feat/kubevirt branch 2 times, most recently from 91e7187 to fc274d3 Compare June 25, 2024 12:46
@michal-gubricky michal-gubricky marked this pull request as ready for review June 25, 2024 12:46
@jschoone (Contributor) commented Oct 3, 2024

Phew, I've been reviewing this for a few days, but it looks like something changed in CAPK 0.1.9 that triggers a patch by the ClusterClass controller on immutable fields, which leads to the ClusterClass never being reconciled.
When this PR was ready it was still on v0.1.8, which seems to work.

@jschoone (Contributor) commented Oct 3, 2024

Now it works. Don't know what was wrong.

@jschoone merged commit 17218e1 into main on Oct 3, 2024
4 checks passed
@jschoone deleted the feat/kubevirt branch on October 3, 2024 at 19:13