
render workers using cluster chart #282

Merged
anvddriesch merged 20 commits into main from migrate-workers on Sep 24, 2024

Conversation

@anvddriesch anvddriesch commented Sep 3, 2024

https://github.com/giantswarm/giantswarm/issues/31416

This PR migrates all the worker resources to be rendered from the cluster chart.

Trigger e2e tests

/run cluster-test-suites

@anvddriesch anvddriesch self-assigned this Sep 3, 2024

tinkerers-ci bot commented Sep 3, 2024

Note

As this is a draft PR, no triggers from the PR body will be handled.

If you'd like to trigger them while the PR is still a draft, please add them as a PR comment.

@anvddriesch
Contributor Author

/run cluster-test-suites

tinkerers-ci bot commented Sep 4, 2024

cluster-test-suites

Run name: pr-cluster-vsphere-282-cluster-test-suitesnf2sl
Commit SHA: e1df738
Result: Failed ❌

📋 View full results in Tekton Dashboard

Rerun trigger:
/run cluster-test-suites


Tip

To re-run only the failed test suites, you can provide a TARGET_SUITES parameter with your trigger that points to the directory path of the test suites to run, e.g. /run cluster-test-suites TARGET_SUITES=./providers/capa/standard to re-run the CAPA standard test suite. This supports multiple test suites, with each path separated by a comma.

@anvddriesch
Contributor Author

/run cluster-test-suites

tinkerers-ci bot commented Sep 5, 2024

cluster-test-suites

Run name: pr-cluster-vsphere-282-cluster-test-suites5jpkb
Commit SHA: c95eeaa
Result: Failed ❌

📋 View full results in Tekton Dashboard

Rerun trigger:
/run cluster-test-suites


Tip

To re-run only the failed test suites, you can provide a TARGET_SUITES parameter with your trigger that points to the directory path of the test suites to run, e.g. /run cluster-test-suites TARGET_SUITES=./providers/capa/standard to re-run the CAPA standard test suite. This supports multiple test suites, with each path separated by a comma.

@anvddriesch
Contributor Author

/run cluster-test-suites

tinkerers-ci bot commented Sep 17, 2024

cluster-test-suites

Run name: pr-cluster-vsphere-282-cluster-test-suites52ffl
Commit SHA: 6c6895e
Result: Failed ❌

📋 View full results in Tekton Dashboard

Rerun trigger:
/run cluster-test-suites


Tip

To re-run only the failed test suites, you can provide a TARGET_SUITES parameter with your trigger that points to the directory path of the test suites to run, e.g. /run cluster-test-suites TARGET_SUITES=./providers/capa/standard to re-run the CAPA standard test suite. This supports multiple test suites, with each path separated by a comma.

@anvddriesch
Contributor Author

/run cluster-test-suites

tinkerers-ci bot commented Sep 17, 2024

cluster-test-suites

Run name: pr-cluster-vsphere-282-cluster-test-suitesk6rqw
Commit SHA: 4a8513a
Result: Completed ✅

📋 View full results in Tekton Dashboard

Rerun trigger:
/run cluster-test-suites


Tip

To re-run only the failed test suites, you can provide a TARGET_SUITES parameter with your trigger that points to the directory path of the test suites to run, e.g. /run cluster-test-suites TARGET_SUITES=./providers/capa/standard to re-run the CAPA standard test suite. This supports multiple test suites, with each path separated by a comma.

@anvddriesch anvddriesch marked this pull request as ready for review September 17, 2024 13:30
@anvddriesch anvddriesch requested a review from a team as a code owner September 17, 2024 13:30
@@ -79,6 +79,12 @@ Properties within the `.global.controlPlane` object
| `global.controlPlane.apiServerPort` | **API server port** - The API server Load Balancer port. This option sets the Spec.ClusterNetwork.APIServerPort field on the Cluster CR. In CAPI this field isn't used currently. It is instead used in providers. In CAPA this sets only the public facing port of the Load Balancer. In CAPZ both the public facing and the destination port are set to this value. CAPV and CAPVCD do not use it.|**Type:** `integer`<br/>**Default:** `6443`|
| `global.controlPlane.image` | **Node container image**|**Type:** `object`<br/>|
| `global.controlPlane.image.repository` | **Repository**|**Type:** `string`<br/>**Default:** `"gsoci.azurecr.io/giantswarm"`|
| `global.controlPlane.machineHealthCheck` | **Machine health check**|**Type:** `object`<br/>|
Contributor Author

Apparently the machine health check was just there previously; now we have configuration options. The defaults are the same as before.
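
For illustration, the new options look roughly like this in values. This is a sketch: the sub-keys and defaults below are assumed from the machineHealthCheck fields visible in the rendered output further down in this thread, not copied from the chart's values.yaml.

global:
  controlPlane:
    machineHealthCheck:
      enabled: true
      maxUnhealthy: "40%"
      nodeStartupTimeout: "20m0s"
      unhealthyNotReadyTimeout: "10m0s"
      unhealthyUnknownTimeout: "10m0s"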

@@ -1,16 +0,0 @@
# DEPRECATED - remove once CP and workers are rendered with cluster chart
Contributor Author

I got rid of everything marked DEPRECATED, but let's check whether there's more old stuff.

{{/*
Generates template spec for worker machines.
*/}}
{{- define "worker-vspheremachinetemplate-spec" -}}
Contributor Author

Nothing new here; we just need the spec separated out.
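
Having the spec in a named template means the same block can feed both the VSphereMachineTemplate body and the hash-based name used below. A minimal usage sketch (the exact nesting and the context argument in the real template may differ):

spec:
  template:
    spec:
      {{- include "worker-vspheremachinetemplate-spec" $ | nindent 6 }}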

@@ -17,7 +17,7 @@ spec:
apiVersion: {{ include "infrastructureApiVersion" . }}
kind: VSphereMachineTemplate
metadata:
- name: {{ include "resource.default.name" $ }}-{{ $nodePoolName }}-{{ include "mtRevision" $c }}
+ name: {{ include "resource.default.name" $ }}-{{ $nodePoolName }}-{{ include "machineTemplateSpec.hash" (dict "data" (include "worker-vspheremachinetemplate-spec" $) "salt" $.Values.cluster.providerIntegration.hashSalt) }}
Contributor Author

This is the naming function that ensures the machines have the same names as those referenced inside the cluster chart resources.
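
Conceptually the helper boils down to something like the sketch below: hash the rendered spec together with a salt and keep a short suffix, so any spec change produces a new template name (and therefore a rolling update) instead of an in-place edit. This is only a sketch, not the actual machineTemplateSpec.hash implementation from the cluster chart; the 8-character suffix is inferred from rendered names like test-worker-84ff272a.

{{/* Sketch only - assumed implementation of a spec-hash helper */}}
{{- define "machineTemplateSpec.hash.sketch" -}}
{{- printf "%s%s" .data .salt | sha256sum | trunc 8 -}}
{{- end -}}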

"machineHealthCheckResourceEnabled": true,
"machinePoolResourcesEnabled": false,
"machinePoolResourcesEnabled": true,
Contributor Author

This just enables rendering the worker resources in the cluster chart.
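
In values terms this is just flipping the cluster chart's provider-integration flags, roughly as below. Only the two flag names come from the diff above; the exact nesting under cluster.providerIntegration is an assumption.

cluster:
  providerIntegration:
    resourcesApi:
      machineHealthCheckResourceEnabled: true
      machinePoolResourcesEnabled: true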

@anvddriesch
Contributor Author

Please correct me if I'm wrong, but if users have already migrated to v0.60.0, there should not be another breaking change, right?
(Also, I'll add the changelog, but please behold the green checkmark first.)

vxav commented Sep 18, 2024

There's quite a lot going on here, but nothing jumps out at me. However, I will leave the review to Simon, who is much more up to date with the chart changes than I am lately.

@anvddriesch
Contributor Author

/run cluster-test-suites

tinkerers-ci bot commented Sep 18, 2024

cluster-test-suites

Run name: pr-cluster-vsphere-282-cluster-test-suitesc5xnd
Commit SHA: df6f839
Result: Completed ✅

📋 View full results in Tekton Dashboard

Rerun trigger:
/run cluster-test-suites


Tip

To re-run only the failed test suites, you can provide a TARGET_SUITES parameter with your trigger that points to the directory path of the test suites to run, e.g. /run cluster-test-suites TARGET_SUITES=./providers/capa/standard to re-run the CAPA standard test suite. This supports multiple test suites, with each path separated by a comma.

glitchcrab commented Sep 19, 2024

All looks good, aside from one small change (which I will try to explain).

I templated the chart out from main and from this branch, diffed the two, and I can see that the machineHealthCheck config ends up in the VSphereMachineTemplate spec:

spec:
  template:
    spec:
      datacenter: Datacenter
      datastore: vsanDatastore
      server: https://foo.example.com
      thumbprint: F7:CF:F9:E5:99:39:FF:C1:D7:14:F1:3F:8A:42:21:95:3B:A1:6E:16
      cloneMode: linkedClone
      diskGiB: 25
      machineHealthCheck:
        enabled: true
        maxUnhealthy: 40%
        nodeStartupTimeout: 20m0s
        unhealthyNotReadyTimeout: 10m0s
        unhealthyUnknownTimeout: 10m0s

This won't work as it's not a valid field in the CRD - the controller will (should?) refuse to create the VMs. However, you can unset it in the mtSpec function to ensure that it doesn't get templated out into the VSphereMachineTemplate resource. I had to do the same for the replica count previously - unfortunately this is one of the downsides of merging classes into pools.

I was surprised that the tests passed despite this, but then I realised that the test suite values already provide a nodepool definition, which overrides the default definition in the chart (and therefore hides this problem when running in CI).
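
For reference, the kind of unsetting I mean could look roughly like this inside the helper that builds the machine template spec (a sketch only; $nodePoolValues and the surrounding structure are placeholders, not the actual mtSpec code):

{{/* Drop chart-level settings that are not valid VSphereMachineTemplate fields
     before rendering the spec; deepCopy avoids mutating the original values. */}}
{{- $spec := deepCopy $nodePoolValues -}}
{{- $_ := unset $spec "machineHealthCheck" -}}
{{- $_ := unset $spec "replicas" -}}
{{ toYaml $spec }}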

There were differences in the rendered Helm template, please check! ⚠️

Output
=== Differences when rendered with values file helm/cluster-vsphere/ci/test-wc-values.yaml ===

(file level)
  - five documents removed:
    ---
    # Source: cluster-vsphere/templates/containerd-config-secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: release-name-registry-configuration-d5bcde26
    data:
      registry-config.toml: dmVyc2lvbiA9IDIKCiMgcmVjb21tZW5kZWQgZGVmYXVsdHMgZnJvbSBodHRwczovL2dpdGh1Yi5jb20vY29udGFpbmVyZC9jb250YWluZXJkL2Jsb2IvbWFpbi9kb2NzL29wcy5tZCNiYXNlLWNvbmZpZ3VyYXRpb24KIyBzZXQgY29udGFpbmVyZCBhcyBhIHN1YnJlYXBlciBvbiBsaW51eCB3aGVuIGl0IGlzIG5vdCBydW5uaW5nIGFzIFBJRCAxCnN1YnJlYXBlciA9IHRydWUKIyBzZXQgY29udGFpbmVyZCdzIE9PTSBzY29yZQpvb21fc2NvcmUgPSAtOTk5CmRpc2FibGVkX3BsdWdpbnMgPSBbXQpbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4Il0KIyBzaGltIGJpbmFyeSBuYW1lL3BhdGgKc2hpbSA9ICJjb250YWluZXJkLXNoaW0iCiMgcnVudGltZSBiaW5hcnkgbmFtZS9wYXRoCnJ1bnRpbWUgPSAicnVuYyIKIyBkbyBub3QgdXNlIGEgc2hpbSB3aGVuIHN0YXJ0aW5nIGNvbnRhaW5lcnMsIHNhdmVzIG9uIG1lbW9yeSBidXQKIyBsaXZlIHJlc3RvcmUgaXMgbm90IHN1cHBvcnRlZApub19zaGltID0gZmFsc2UKCltwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiMgc2V0dGluZyBydW5jLm9wdGlvbnMgdW5zZXRzIHBhcmVudCBzZXR0aW5ncwpydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgpbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdClN5c3RlbWRDZ3JvdXAgPSB0cnVlCltwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0Kc2FuZGJveF9pbWFnZSA9ICJnc29jaS5henVyZWNyLmlvL2dpYW50c3dhcm0vcGF1c2U6My45IgoKW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeS5taXJyb3JzXQpbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkuY29uZmlnc10K
    # Source: cluster-vsphere/templates/kubeadmconfigtemplate.yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfigTemplate
    metadata:
      name: release-name-worker-bf570225
      namespace: org-giantswarm
      labels:
        app: cluster-vsphere
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: release-name
        giantswarm.io/cluster: release-name
        giantswarm.io/organization: giantswarm
        application.giantswarm.io/team: rocket
        app.kubernetes.io/version: 0.63.0
        helm.sh/chart: cluster-vsphere-0.63.0
    spec:
      template:
        spec:
          users:
          - name: giantswarm
            sudo: "ALL=(ALL) NOPASSWD:ALL"
          format: ignition
          ignition:
            containerLinuxConfig:
              additionalConfig: |
                storage:
                  files:
                  - path: /opt/set-hostname
                    filesystem: root
                    mode: 0744
                    contents:
                      inline: |
                        #!/bin/sh
                        set -x
                        echo "${COREOS_CUSTOM_HOSTNAME}" > /etc/hostname
                        hostname "${COREOS_CUSTOM_HOSTNAME}"
                        echo "::1         ipv6-localhost ipv6-loopback" >/etc/hosts
                        echo "127.0.0.1   localhost" >>/etc/hosts
                        echo "127.0.0.1   ${COREOS_CUSTOM_HOSTNAME}" >>/etc/hosts
                systemd:
                  units:
                  - name: coreos-metadata.service
                    contents: |
                      [Unit]
                      Description=VMware metadata agent
                      After=nss-lookup.target
                      After=network-online.target
                      Wants=network-online.target
                      [Service]
                      Type=oneshot
                      Restart=on-failure
                      RemainAfterExit=yes
                      Environment=OUTPUT=/run/metadata/coreos
                      ExecStart=/usr/bin/mkdir --parent /run/metadata
                      ExecStart=/usr/bin/bash -cv 'echo "COREOS_CUSTOM_HOSTNAME=$("$(find /usr/bin /usr/share/oem -name vmtoolsd -type f -executable 2>/dev/null | head -n 1)" --cmd "info-get guestinfo.metadata" | base64 -d | grep local-hostname | awk {\'print $2\'} | tr -d \'"\')" > $${OUTPUT}'
                  - name: set-hostname.service
                    enabled: true
                    contents: |
                      [Unit]
                      Description=Set the hostname for this machine
                      Requires=coreos-metadata.service
                      After=coreos-metadata.service
                      [Service]
                      Type=oneshot
                      RemainAfterExit=yes
                      EnvironmentFile=/run/metadata/coreos
                      ExecStart=/opt/set-hostname
                      [Install]
                      WantedBy=multi-user.target
                  - name: ethtool-segmentation.service
                    enabled: true
                    contents: |
                      [Unit]
                      After=network.target
                      [Service]
                      Type=oneshot
                      RemainAfterExit=yes
                      ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-csum-segmentation off
                      ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-segmentation off
                      [Install]
                      WantedBy=default.target
                  - name: kubeadm.service
                    enabled: true
                    dropins:
                    - name: 10-flatcar.conf
                      contents: |
                        [Unit]
                        # kubeadm must run after coreos-metadata populated /run/metadata directory.
                        Requires=coreos-metadata.service
                        After=coreos-metadata.service
                        [Service]
                        # Make metadata environment variables available for pre-kubeadm commands.
                        EnvironmentFile=/run/metadata/*
                  - name: teleport.service
                    enabled: true
                    contents: |
                      [Unit]
                      Description=Teleport Service
                      After=network.target
                      [Service]
                      Type=simple
                      Restart=on-failure
                      ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                      ExecReload=/bin/kill -HUP $MAINPID
                      PIDFile=/run/teleport.pid
                      LimitNOFILE=524288
                      [Install]
                      WantedBy=multi-user.target
          joinConfiguration:
            nodeRegistration:
              criSocket: /run/containerd/containerd.sock
              kubeletExtraArgs:
                # DEPRECATED - remove once CP and workers are rendered with cluster chart
    
    cloud-provider: external
                feature-gates: 
                eviction-hard: memory.available<200Mi
                eviction-max-pod-grace-period: 60
                eviction-soft: memory.available<500Mi
                eviction-soft-grace-period: memory.available=5s
                anonymous-auth: "true"
                node-labels: giantswarm.io/node-pool=worker
          files:
          - path: /etc/ssh/trusted-user-ca-keys.pem
            permissions: 0600
            content: |
              ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM4cvZ01fLmO9cJbWUj7sfF+NhECgy+Cl0bazSrZX7sU [email protected]
              
          - path: /etc/ssh/sshd_config
            permissions: 0600
            content: |
              # DEPRECATED - remove once CP and workers are rendered with cluster chart
              # Use most defaults for sshd configuration.
              Subsystem sftp internal-sftp
              ClientAliveInterval 180
              UseDNS no
              UsePAM yes
              PrintLastLog no # handled by PAM
              PrintMotd no # handled by PAM
              # Non defaults (#100)
              ClientAliveCountMax 2
              PasswordAuthentication no
              TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
              MaxAuthTries 5
              LoginGraceTime 60
              AllowTcpForwarding no
              AllowAgentForwarding no
              
          - path: /etc/teleport-join-token
            permissions: 0644
            contentFrom:
              secret:
                name: release-name-teleport-join-token
                key: joinToken
          - path: /opt/teleport-node-role.sh
            permissions: 0755
            encoding: base64
            content: IyEvYmluL2Jhc2gKCmlmIHN5c3RlbWN0bCBpcy1hY3RpdmUgLXEga3ViZWxldC5zZXJ2aWNlOyB0aGVuCiAgICBpZiBbIC1lICIvZXRjL2t1YmVybmV0ZXMvbWFuaWZlc3RzL2t1YmUtYXBpc2VydmVyLnlhbWwiIF07IHRoZW4KICAgICAgICBlY2hvICJjb250cm9sLXBsYW5lIgogICAgZWxzZQogICAgICAgIGVjaG8gIndvcmtlciIKICAgIGZpCmVsc2UKICAgIGVjaG8gIiIKZmkK
          - path: /etc/teleport.yaml
            permissions: 0644
            encoding: base64
            content: IyBERVBSRUNBVEVEIC0gcmVtb3ZlIG9uY2UgQ1AgYW5kIHdvcmtlcnMgYXJlIHJlbmRlcmVkIHdpdGggY2x1c3RlciBjaGFydAp2ZXJzaW9uOiB2Mwp0ZWxlcG9ydDoKICBkYXRhX2RpcjogL3Zhci9saWIvdGVsZXBvcnQKICBqb2luX3BhcmFtczoKICAgIHRva2VuX25hbWU6IC9ldGMvdGVsZXBvcnQtam9pbi10b2tlbgogICAgbWV0aG9kOiB0b2tlbgogIHByb3h5X3NlcnZlcjogdGVsZXBvcnQuZ2lhbnRzd2FybS5pbzo0NDMKICBsb2c6CiAgICBvdXRwdXQ6IHN0ZGVycgphdXRoX3NlcnZpY2U6CiAgZW5hYmxlZDogIm5vIgpzc2hfc2VydmljZToKICBlbmFibGVkOiAieWVzIgogIGNvbW1hbmRzOgogIC0gbmFtZTogbm9kZQogICAgY29tbWFuZDogW2hvc3RuYW1lXQogICAgcGVyaW9kOiAyNGgwbTBzCiAgLSBuYW1lOiBhcmNoCiAgICBjb21tYW5kOiBbdW5hbWUsIC1tXQogICAgcGVyaW9kOiAyNGgwbTBzCiAgLSBuYW1lOiByb2xlCiAgICBjb21tYW5kOiBbL29wdC90ZWxlcG9ydC1ub2RlLXJvbGUuc2hdCiAgICBwZXJpb2Q6IDFtMHMKICBsYWJlbHM6CiAgICBpbnM6IAogICAgbWM6IAogICAgY2x1c3RlcjogcmVsZWFzZS1uYW1lCiAgICBiYXNlRG9tYWluOiBrOHMudGVzdApwcm94eV9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIK
          - path: /etc/containerd/config.toml
            permissions: 0600
            contentFrom:
              secret:
                name: release-name-registry-configuration-d5bcde26
                key: registry-config.toml
          preKubeadmCommands:
          - "systemctl restart sshd"
          - "/bin/test ! -d /var/lib/kubelet && (/bin/mkdir -p /var/lib/kubelet && /bin/chmod 0750 /var/lib/kubelet)"
          postKubeadmCommands:
          - "usermod -aG root nobody" # required for node-exporter to access the host's filesystem
    # Source: cluster-vsphere/templates/machinedeployment.yaml
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineDeployment
    metadata:
      name: release-name-worker
      namespace: org-giantswarm
      labels:
        app: cluster-vsphere
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: release-name
        giantswarm.io/cluster: release-name
        giantswarm.io/organization: giantswarm
        application.giantswarm.io/team: rocket
        app.kubernetes.io/version: 0.63.0
        helm.sh/chart: cluster-vsphere-0.63.0
    spec:
      clusterName: release-name
      replicas: 2
      revisionHistoryLimit: 0
      selector:
        matchLabels: {}
      template:
        metadata:
          labels:
            app: cluster-vsphere
            app.kubernetes.io/managed-by: Helm
            cluster.x-k8s.io/cluster-name: release-name
            giantswarm.io/cluster: release-name
            giantswarm.io/organization: giantswarm
            application.giantswarm.io/team: rocket
        spec:
          bootstrap:
            configRef:
              apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
              kind: KubeadmConfigTemplate
              name: release-name-worker-bf570225
              namespace: org-giantswarm
          clusterName: release-name
          infrastructureRef:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: VSphereMachineTemplate
            name: release-name-worker-81f02996
            namespace: org-giantswarm
          version: v1.27.14
    # Source: cluster-vsphere/templates/machinehealthcheck.yaml
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineHealthCheck
    metadata:
      labels:
        app: cluster-vsphere
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: release-name
        giantswarm.io/cluster: release-name
        giantswarm.io/organization: giantswarm
        application.giantswarm.io/team: rocket
        app.kubernetes.io/version: 0.63.0
        helm.sh/chart: cluster-vsphere-0.63.0
      name: release-name
      namespace: org-giantswarm
    spec:
      clusterName: release-name
      maxUnhealthy: 40%!(NOVERB)
      nodeStartupTimeout: 20m0s
      selector:
        matchExpressions:
        - key: cluster.x-k8s.io/cluster-name
          operator: In
          values:
          - release-name
        - key: cluster.x-k8s.io/control-plane
          operator: DoesNotExist
      unhealthyConditions:
      - type: Ready
        status: Unknown
        timeout: 10m0s
      - type: Ready
        status: False
        timeout: 10m0s
    # Source: cluster-vsphere/templates/vspheremachinetemplate.yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereMachineTemplate
    metadata:
      name: release-name-worker-81f02996
      namespace: org-giantswarm
      labels:
        app: cluster-vsphere
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: release-name
        giantswarm.io/cluster: release-name
        giantswarm.io/organization: giantswarm
        application.giantswarm.io/team: rocket
        app.kubernetes.io/version: 0.63.0
        helm.sh/chart: cluster-vsphere-0.63.0
    spec:
      template:
        spec:
          datacenter: Datacenter
          datastore: vsanDatastore
          server: "https://foo.example.com"
          thumbprint: "F7:CF:F9:E5:99:39:FF:C1:D7:14:F1:3F:8A:42:21:95:3B:A1:6E:16"
          cloneMode: linkedClone
          diskGiB: 25
          memoryMiB: 8192
          network:
            devices:
            - dhcp4: true
              networkName: grasshopper-capv
          numCPUs: 2
          resourcePool: grasshopper
          template: flatcar-stable-3815.2.2-kube-v1.27.14-gs
    
  
    ---
    # Source: cluster-vsphere/charts/cluster/templates/clusterapi/workers/kubeadmconfigtemplate.yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfigTemplate
    metadata:
      name: test-worker-85fb5
      namespace: org-giantswarm
      labels:
        giantswarm.io/machine-deployment: test-worker
        # deprecated: "app: cluster-vsphere" label is deprecated and it will be removed after upgrading
    # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
    # need this label on the Cluster resource.
    app: cluster-vsphere
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 1.2.1
        app.kubernetes.io/part-of: cluster-vsphere
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-1.2.1
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test
        giantswarm.io/organization: giantswarm
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test
        cluster.x-k8s.io/watch-filter: capi
    spec:
      template:
        spec:
          format: ignition
          ignition:
            containerLinuxConfig:
              additionalConfig: |
                systemd:
                  units:      
                  - name: os-hardening.service
                    enabled: true
                    contents: |
                      [Unit]
                      Description=Apply os hardening
                      [Service]
                      Type=oneshot
                      ExecStartPre=-/bin/bash -c "gpasswd -d core rkt; gpasswd -d core docker; gpasswd -d core wheel"
                      ExecStartPre=/bin/bash -c "until [ -f '/etc/sysctl.d/hardening.conf' ]; do echo Waiting for sysctl file; sleep 1s;done;"
                      ExecStart=/usr/sbin/sysctl -p /etc/sysctl.d/hardening.conf
                      [Install]
                      WantedBy=multi-user.target
                  - name: update-engine.service
                    enabled: false
                    mask: true
                  - name: locksmithd.service
                    enabled: false
                    mask: true
                  - name: sshkeys.service
                    enabled: false
                    mask: true
                  - name: teleport.service
                    enabled: true
                    contents: |
                      [Unit]
                      Description=Teleport Service
                      After=network.target
                      [Service]
                      Type=simple
                      Restart=on-failure
                      ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                      ExecReload=/bin/kill -HUP $MAINPID
                      PIDFile=/run/teleport.pid
                      LimitNOFILE=524288
                      [Install]
                      WantedBy=multi-user.target
                  - name: kubeadm.service
                    dropins:
                    - name: 10-flatcar.conf
                      contents: |
                        [Unit]
                        # kubeadm must run after coreos-metadata populated /run/metadata directory.
                        Requires=coreos-metadata.service
                        After=coreos-metadata.service
                        # kubeadm must run after containerd - see https://github.com/kubernetes-sigs/image-builder/issues/939.
                        After=containerd.service
                        # kubeadm requires having an IP
                        After=network-online.target
                        Wants=network-online.target
                        [Service]
                        # Ensure kubeadm service has access to kubeadm binary in /opt/bin on Flatcar.
                        Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin
                        # To make metadata environment variables available for pre-kubeadm commands.
                        EnvironmentFile=/run/metadata/*
                  - name: containerd.service
                    enabled: true
                    contents: |
                    dropins:
                    - name: 10-change-cgroup.conf
                      contents: |
                        [Service]
                        CPUAccounting=true
                        MemoryAccounting=true
                        Slice=kubereserved.slice
                  - name: audit-rules.service
                    enabled: true
                    dropins:
                    - name: 10-wait-for-containerd.conf
                      contents: |
                        [Service]
                        ExecStartPre=/bin/bash -c "while [ ! -f /etc/audit/rules.d/containerd.rules ]; do echo 'Waiting for /etc/audit/rules.d/containerd.rules to be written' && sleep 1; done"
                        Restart=on-failure      
                  - name: coreos-metadata.service
                    enabled: true
                    contents: |
                      [Unit]
                      Description=VMWare metadata agent
                      [Install]
                      WantedBy=multi-user.target
                    dropins:
                    - name: 10-coreos-metadata.conf
                      contents: |
                        [Unit]
                        After=nss-lookup.target
                        After=network-online.target
                        Wants=network-online.target
                        [Service]
                        Type=oneshot
                        Restart=on-failure
                        RemainAfterExit=yes
                        Environment=OUTPUT=/run/metadata/coreos
                        ExecStart=/usr/bin/mkdir --parent /run/metadata
                        ExecStart=/usr/bin/bash -cv 'echo "COREOS_CUSTOM_HOSTNAME=$("$(find /usr/bin /usr/share/oem -name vmtoolsd -type f -executable 2>/dev/null | head -n 1)" --cmd "info-get guestinfo.metadata" | base64 -d | awk \'/local-hostname/ {print $2}\' | tr -d \'"\')" >> ${OUTPUT}'
                        ExecStart=/usr/bin/bash -cv 'echo "COREOS_CUSTOM_IPV4=$("$(find /usr/bin /usr/share/oem -name vmtoolsd -type f -executable 2>/dev/null | head -n 1)" --cmd "info-get guestinfo.ip")" >> ${OUTPUT}'
                  - name: set-hostname.service
                    enabled: true
                    contents: |
                      [Unit]
                      Description=Set machine hostname
                      [Install]
                      WantedBy=multi-user.target
                    dropins:
                    - name: 10-set-hostname.conf
                      contents: |
                        [Unit]
                        Requires=coreos-metadata.service
                        After=coreos-metadata.service
                        Before=teleport.service
                        [Service]
                        Type=oneshot
                        RemainAfterExit=yes
                        EnvironmentFile=/run/metadata/coreos
                        ExecStart=/opt/bin/set-hostname.sh
                  - name: ethtool-segmentation.service
                    enabled: true
                    contents: |
                      [Unit]
                      Description=Disable TCP segmentation offloading
                      [Install]
                      WantedBy=default.target
                    dropins:
                    - name: 10-ethtool-segmentation.conf
                      contents: |
                        [Unit]
                        After=network.target
                        [Service]
                        Type=oneshot
                        RemainAfterExit=yes
                        ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-csum-segmentation off
                        ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-segmentation off
                storage:
                  filesystems:      
                  directories:      
                  - path: /var/lib/kubelet
                    mode: 0750      
                
          joinConfiguration:
            nodeRegistration:
              name: ${COREOS_CUSTOM_HOSTNAME}
              kubeletExtraArgs:
                cloud-provider: external
                healthz-bind-address: 0.0.0.0
                node-ip: ${COREOS_CUSTOM_IPV4}
                node-labels: "ip=${COREOS_CUSTOM_IPV4},role=worker,giantswarm.io/machine-pool=test-worker,"
                v: 2
            patches:
              directory: /etc/kubernetes/patches
          preKubeadmCommands:
          - "envsubst < /etc/kubeadm.yml > /etc/kubeadm.yml.tmp"
          - "mv /etc/kubeadm.yml.tmp /etc/kubeadm.yml"
          - "systemctl restart containerd"
          postKubeadmCommands:
          - "usermod -aG root nobody"
          users:
          - name: giantswarm
            groups: sudo
            sudo: "ALL=(ALL) NOPASSWD:ALL"
          files:
          - path: /etc/sysctl.d/hardening.conf
            permissions: 0644
            encoding: base64
            content: ZnMuaW5vdGlmeS5tYXhfdXNlcl93YXRjaGVzID0gMTYzODQKZnMuaW5vdGlmeS5tYXhfdXNlcl9pbnN0YW5jZXMgPSA4MTkyCmtlcm5lbC5rcHRyX3Jlc3RyaWN0ID0gMgprZXJuZWwuc3lzcnEgPSAwCm5ldC5pcHY0LmNvbmYuYWxsLmxvZ19tYXJ0aWFucyA9IDEKbmV0LmlwdjQuY29uZi5hbGwuc2VuZF9yZWRpcmVjdHMgPSAwCm5ldC5pcHY0LmNvbmYuZGVmYXVsdC5hY2NlcHRfcmVkaXJlY3RzID0gMApuZXQuaXB2NC5jb25mLmRlZmF1bHQubG9nX21hcnRpYW5zID0gMQpuZXQuaXB2NC50Y3BfdGltZXN0YW1wcyA9IDAKbmV0LmlwdjYuY29uZi5hbGwuYWNjZXB0X3JlZGlyZWN0cyA9IDAKbmV0LmlwdjYuY29uZi5kZWZhdWx0LmFjY2VwdF9yZWRpcmVjdHMgPSAwCiMgSW5jcmVhc2VkIG1tYXBmcyBiZWNhdXNlIHNvbWUgYXBwbGljYXRpb25zLCBsaWtlIEVTLCBuZWVkIGhpZ2hlciBsaW1pdCB0byBzdG9yZSBkYXRhIHByb3Blcmx5CnZtLm1heF9tYXBfY291bnQgPSAyNjIxNDQKIyBSZXNlcnZlZCB0byBhdm9pZCBjb25mbGljdHMgd2l0aCBrdWJlLWFwaXNlcnZlciwgd2hpY2ggYWxsb2NhdGVzIHdpdGhpbiB0aGlzIHJhbmdlCm5ldC5pcHY0LmlwX2xvY2FsX3Jlc2VydmVkX3BvcnRzPTMwMDAwLTMyNzY3Cm5ldC5pcHY0LmNvbmYuYWxsLnJwX2ZpbHRlciA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2lnbm9yZSA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2Fubm91bmNlID0gMgoKIyBUaGVzZSBhcmUgcmVxdWlyZWQgZm9yIHRoZSBrdWJlbGV0ICctLXByb3RlY3Qta2VybmVsLWRlZmF1bHRzJyBmbGFnCiMgU2VlIGh0dHBzOi8vZ2l0aHViLmNvbS9naWFudHN3YXJtL2dpYW50c3dhcm0vaXNzdWVzLzEzNTg3CnZtLm92ZXJjb21taXRfbWVtb3J5PTEKa2VybmVsLnBhbmljPTEwCmtlcm5lbC5wYW5pY19vbl9vb3BzPTEK
          - path: /etc/selinux/config
            permissions: 0644
            encoding: base64
            content: IyBUaGlzIGZpbGUgY29udHJvbHMgdGhlIHN0YXRlIG9mIFNFTGludXggb24gdGhlIHN5c3RlbSBvbiBib290LgoKIyBTRUxJTlVYIGNhbiB0YWtlIG9uZSBvZiB0aGVzZSB0aHJlZSB2YWx1ZXM6CiMgICAgICAgZW5mb3JjaW5nIC0gU0VMaW51eCBzZWN1cml0eSBwb2xpY3kgaXMgZW5mb3JjZWQuCiMgICAgICAgcGVybWlzc2l2ZSAtIFNFTGludXggcHJpbnRzIHdhcm5pbmdzIGluc3RlYWQgb2YgZW5mb3JjaW5nLgojICAgICAgIGRpc2FibGVkIC0gTm8gU0VMaW51eCBwb2xpY3kgaXMgbG9hZGVkLgpTRUxJTlVYPXBlcm1pc3NpdmUKCiMgU0VMSU5VWFRZUEUgY2FuIHRha2Ugb25lIG9mIHRoZXNlIGZvdXIgdmFsdWVzOgojICAgICAgIHRhcmdldGVkIC0gT25seSB0YXJnZXRlZCBuZXR3b3JrIGRhZW1vbnMgYXJlIHByb3RlY3RlZC4KIyAgICAgICBzdHJpY3QgICAtIEZ1bGwgU0VMaW51eCBwcm90ZWN0aW9uLgojICAgICAgIG1scyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1MZXZlbCBTZWN1cml0eQojICAgICAgIG1jcyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1DYXRlZ29yeSBTZWN1cml0eQojICAgICAgICAgICAgICAgICAgKG1scywgYnV0IG9ubHkgb25lIHNlbnNpdGl2aXR5IGxldmVsKQpTRUxJTlVYVFlQRT1tY3MK
          - path: /etc/ssh/trusted-user-ca-keys.pem
            permissions: 0600
            encoding: base64
            content: c3NoLWVkMjU1MTkgQUFBQUMzTnphQzFsWkRJMU5URTVBQUFBSU00Y3ZaMDFmTG1POWNKYldVajdzZkYrTmhFQ2d5K0NsMGJhelNyWlg3c1UgdmF1bHQtY2FAdmF1bHQub3BlcmF0aW9ucy5naWFudHN3YXJtLmlvCg==
          - path: /etc/ssh/sshd_config
            permissions: 0600
            encoding: base64
            content: IyBVc2UgbW9zdCBkZWZhdWx0cyBmb3Igc3NoZCBjb25maWd1cmF0aW9uLgpTdWJzeXN0ZW0gc2Z0cCBpbnRlcm5hbC1zZnRwCkNsaWVudEFsaXZlSW50ZXJ2YWwgMTgwClVzZUROUyBubwpVc2VQQU0geWVzClByaW50TGFzdExvZyBubyAjIGhhbmRsZWQgYnkgUEFNClByaW50TW90ZCBubyAjIGhhbmRsZWQgYnkgUEFNCiMgTm9uIGRlZmF1bHRzICgjMTAwKQpDbGllbnRBbGl2ZUNvdW50TWF4IDIKUGFzc3dvcmRBdXRoZW50aWNhdGlvbiBubwpUcnVzdGVkVXNlckNBS2V5cyAvZXRjL3NzaC90cnVzdGVkLXVzZXItY2Eta2V5cy5wZW0KTWF4QXV0aFRyaWVzIDUKTG9naW5HcmFjZVRpbWUgNjAKQWxsb3dUY3BGb3J3YXJkaW5nIG5vCkFsbG93QWdlbnRGb3J3YXJkaW5nIG5vCkNBU2lnbmF0dXJlQWxnb3JpdGhtcyBlY2RzYS1zaGEyLW5pc3RwMjU2LGVjZHNhLXNoYTItbmlzdHAzODQsZWNkc2Etc2hhMi1uaXN0cDUyMSxzc2gtZWQyNTUxOSxyc2Etc2hhMi01MTIscnNhLXNoYTItMjU2LHNzaC1yc2EK
          - path: /etc/containerd/config.toml
            permissions: 0644
            contentFrom:
              secret:
                name: test-containerd-b21d846e
                key: config.toml
          - path: /etc/kubernetes/patches/kubeletconfiguration.yaml
            permissions: 0644
            encoding: base64
            content: YXBpVmVyc2lvbjoga3ViZWxldC5jb25maWcuazhzLmlvL3YxYmV0YTEKa2luZDogS3ViZWxldENvbmZpZ3VyYXRpb24Kc2h1dGRvd25HcmFjZVBlcmlvZDogMzAwcwpzaHV0ZG93bkdyYWNlUGVyaW9kQ3JpdGljYWxQb2RzOiA2MHMKa2VybmVsTWVtY2dOb3RpZmljYXRpb246IHRydWUKZXZpY3Rpb25Tb2Z0OgogIG1lbW9yeS5hdmFpbGFibGU6ICI1MDBNaSIKZXZpY3Rpb25IYXJkOgogIG1lbW9yeS5hdmFpbGFibGU6ICIyMDBNaSIKICBpbWFnZWZzLmF2YWlsYWJsZTogIjE1JSIKZXZpY3Rpb25Tb2Z0R3JhY2VQZXJpb2Q6CiAgbWVtb3J5LmF2YWlsYWJsZTogIjVzIgpldmljdGlvbk1heFBvZEdyYWNlUGVyaW9kOiA2MAprdWJlUmVzZXJ2ZWQ6CiAgY3B1OiAzNTBtCiAgbWVtb3J5OiAxMjgwTWkKICBlcGhlbWVyYWwtc3RvcmFnZTogMTAyNE1pCmt1YmVSZXNlcnZlZENncm91cDogL2t1YmVyZXNlcnZlZC5zbGljZQpwcm90ZWN0S2VybmVsRGVmYXVsdHM6IHRydWUKc3lzdGVtUmVzZXJ2ZWQ6CiAgY3B1OiAyNTBtCiAgbWVtb3J5OiAzODRNaQpzeXN0ZW1SZXNlcnZlZENncm91cDogL3N5c3RlbS5zbGljZQp0bHNDaXBoZXJTdWl0ZXM6Ci0gVExTX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19BRVNfMjU2X0dDTV9TSEEzODQKLSBUTFNfQ0hBQ0hBMjBfUE9MWTEzMDVfU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQ0hBQ0hBMjBfUE9MWTEzMDUKLSBUTFNfRUNESEVfRUNEU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNV9TSEEyNTYKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX1JTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19FQ0RIRV9SU0FfV0lUSF9BRVNfMjU2X0NCQ19TSEEKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1X1NIQTI1NgotIFRMU19SU0FfV0lUSF9BRVNfMTI4X0NCQ19TSEEKLSBUTFNfUlNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX1JTQV9XSVRIX0FFU18yNTZfQ0JDX1NIQQotIFRMU19SU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQKc2VyaWFsaXplSW1hZ2VQdWxsczogZmFsc2UKc3RyZWFtaW5nQ29ubmVjdGlvbklkbGVUaW1lb3V0OiAxaAphbGxvd2VkVW5zYWZlU3lzY3RsczoKLSAibmV0LioiCg==
          - path: /etc/systemd/logind.conf.d/zzz-kubelet-graceful-shutdown.conf
            permissions: 0700
            encoding: base64
            content: W0xvZ2luXQojIGRlbGF5CkluaGliaXREZWxheU1heFNlYz0zMDAK
          - path: /etc/teleport-join-token
            permissions: 0644
            contentFrom:
              secret:
                name: test-teleport-join-token
                key: joinToken
          - path: /opt/teleport-node-role.sh
            permissions: 0755
            encoding: base64
            content: IyEvYmluL2Jhc2gKCmlmIHN5c3RlbWN0bCBpcy1hY3RpdmUgLXEga3ViZWxldC5zZXJ2aWNlOyB0aGVuCiAgICBpZiBbIC1lICIvZXRjL2t1YmVybmV0ZXMvbWFuaWZlc3RzL2t1YmUtYXBpc2VydmVyLnlhbWwiIF07IHRoZW4KICAgICAgICBlY2hvICJjb250cm9sLXBsYW5lIgogICAgZWxzZQogICAgICAgIGVjaG8gIndvcmtlciIKICAgIGZpCmVsc2UKICAgIGVjaG8gIiIKZmkK
          - path: /etc/teleport.yaml
            permissions: 0644
            encoding: base64
            content: dmVyc2lvbjogdjMKdGVsZXBvcnQ6CiAgZGF0YV9kaXI6IC92YXIvbGliL3RlbGVwb3J0CiAgam9pbl9wYXJhbXM6CiAgICB0b2tlbl9uYW1lOiAvZXRjL3RlbGVwb3J0LWpvaW4tdG9rZW4KICAgIG1ldGhvZDogdG9rZW4KICBwcm94eV9zZXJ2ZXI6IHRlbGVwb3J0LmdpYW50c3dhcm0uaW86NDQzCiAgbG9nOgogICAgb3V0cHV0OiBzdGRlcnIKYXV0aF9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIKc3NoX3NlcnZpY2U6CiAgZW5hYmxlZDogInllcyIKICBjb21tYW5kczoKICAtIG5hbWU6IG5vZGUKICAgIGNvbW1hbmQ6IFtob3N0bmFtZV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogYXJjaAogICAgY29tbWFuZDogW3VuYW1lLCAtbV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogcm9sZQogICAgY29tbWFuZDogWy9vcHQvdGVsZXBvcnQtbm9kZS1yb2xlLnNoXQogICAgcGVyaW9kOiAxbTBzCiAgbGFiZWxzOgogICAgaW5zOiAKICAgIG1jOiAKICAgIGNsdXN0ZXI6IHRlc3QKICAgIGJhc2VEb21haW46IGs4cy50ZXN0CnByb3h5X3NlcnZpY2U6CiAgZW5hYmxlZDogIm5vIgo=
          - path: /etc/audit/rules.d/99-default.rules
            permissions: 0640
            encoding: base64
            content: IyBPdmVycmlkZGVuIGJ5IEdpYW50IFN3YXJtLgotYSBleGl0LGFsd2F5cyAtRiBhcmNoPWI2NCAtUyBleGVjdmUgLWsgYXVkaXRpbmcKLWEgZXhpdCxhbHdheXMgLUYgYXJjaD1iMzIgLVMgZXhlY3ZlIC1rIGF1ZGl0aW5nCg==
          - contentFrom:
              secret:
                name: test-provider-specific-files-1
                key: set-hostname.sh
            path: /opt/bin/set-hostname.sh
            permissions: 0755
    # Source: cluster-vsphere/charts/cluster/templates/clusterapi/workers/machinedeployment.yaml
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineDeployment
    metadata:
      annotations:
        machine-deployment.giantswarm.io/name: test-worker
      labels:
        # deprecated: "app: cluster-vsphere" label is deprecated and it will be removed after upgrading
    # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
    # need this label on the Cluster resource.
    app: cluster-vsphere
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 1.2.1
        app.kubernetes.io/part-of: cluster-vsphere
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-1.2.1
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test
        giantswarm.io/organization: giantswarm
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test
        cluster.x-k8s.io/watch-filter: capi
        giantswarm.io/machine-deployment: test-worker
      name: test-worker
      namespace: org-giantswarm
    spec:
      clusterName: test
      replicas: 2
      template:
        metadata:
          labels:
            # deprecated: "app: cluster-vsphere" label is deprecated and it will be removed after upgrading
    # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
    # need this label on the Cluster resource.
    app: cluster-vsphere
            app.kubernetes.io/name: cluster
            app.kubernetes.io/version: 1.2.1
            app.kubernetes.io/part-of: cluster-vsphere
            app.kubernetes.io/instance: release-name
            app.kubernetes.io/managed-by: Helm
            helm.sh/chart: cluster-1.2.1
            application.giantswarm.io/team: turtles
            giantswarm.io/cluster: test
            giantswarm.io/organization: giantswarm
            giantswarm.io/service-priority: highest
            cluster.x-k8s.io/cluster-name: test
            cluster.x-k8s.io/watch-filter: capi
        spec:
          bootstrap:
            configRef:
              apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
              kind: KubeadmConfigTemplate
              name: test-worker-85fb5
          clusterName: test
          infrastructureRef:
            apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
            kind: VSphereMachineTemplate
            name: test-worker-84ff272a
          version: 1.27.14
    # Source: cluster-vsphere/charts/cluster/templates/clusterapi/workers/machinehealthcheck.yaml
    apiVersion: cluster.x-k8s.io/v1beta1
    kind: MachineHealthCheck
    metadata:
      labels:
        giantswarm.io/machine-deployment: test-worker
        # deprecated: "app: cluster-vsphere" label is deprecated and it will be removed after upgrading
    # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
    # need this label on the Cluster resource.
    app: cluster-vsphere
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 1.2.1
        app.kubernetes.io/part-of: cluster-vsphere
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-1.2.1
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test
        giantswarm.io/organization: giantswarm
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test
        cluster.x-k8s.io/watch-filter: capi
      name: test-worker
      namespace: org-giantswarm
    spec:
      clusterName: test
      maxUnhealthy: 40%!(NOVERB)
      nodeStartupTimeout: 20m0s
      selector:
        matchLabels:
          cluster.x-k8s.io/cluster-name: test
          cluster.x-k8s.io/deployment-name: test-worker
      unhealthyConditions:
      - type: Ready
        status: Unknown
        timeout: 10m0s
      - type: Ready
        status: False
        timeout: 10m0s
    # Source: cluster-vsphere/templates/vspheremachinetemplate.yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereMachineTemplate
    metadata:
      name: release-name-worker-84ff272a
      namespace: org-giantswarm
      labels:
        app: cluster-vsphere
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: release-name
        giantswarm.io/cluster: release-name
        giantswarm.io/organization: giantswarm
        application.giantswarm.io/team: rocket
        app.kubernetes.io/version: 0.63.0
        helm.sh/chart: cluster-vsphere-0.63.0
    spec:
      template:
        spec:
          datacenter: Datacenter
          datastore: vsanDatastore
          server: "https://foo.example.com"
          thumbprint: "F7:CF:F9:E5:99:39:FF:C1:D7:14:F1:3F:8A:42:21:95:3B:A1:6E:16"
          cloneMode: linkedClone
          diskGiB: 25
          memoryMiB: 8192
          network:
            devices:
            - dhcp4: true
              networkName: grasshopper-capv
          numCPUs: 2
          resourcePool: grasshopper
          template: flatcar-stable-3815.2.2-kube-v1.27.14-gs
    
  

/spec/nodeStartupTimeout  (cluster.x-k8s.io/v1beta1/MachineHealthCheck/org-giantswarm/test-control-plane)
  ± value change
    - 8m0s
    + 20m0s

@anvddriesch
Contributor Author

/run cluster-test-suites

tinkerers-ci bot commented Sep 24, 2024

cluster-test-suites

Run name: pr-cluster-vsphere-282-cluster-test-suitesztgqg
Commit SHA: ca9abc8
Result: Failed ❌

📋 View full results in Tekton Dashboard

Rerun trigger:
/run cluster-test-suites


Tip

To re-run only the failed test suites, you can provide a TARGET_SUITES parameter with your trigger that points to the directory path of the test suites to run, e.g. /run cluster-test-suites TARGET_SUITES=./providers/capa/standard to re-run the CAPA standard test suite. This supports multiple test suites, with each path separated by a comma.

@anvddriesch
Contributor Author

/run cluster-test-suites TARGET_SUITES=./providers/capv/standard

tinkerers-ci bot commented Sep 24, 2024

cluster-test-suites

Run name: pr-cluster-vsphere-282-cluster-test-suitesbd879
Commit SHA: ca9abc8
Result: Completed ✅

📋 View full results in Tekton Dashboard

Rerun trigger:
/run cluster-test-suites


Tip

To re-run only the failed test suites, you can provide a TARGET_SUITES parameter with your trigger that points to the directory path of the test suites to run, e.g. /run cluster-test-suites TARGET_SUITES=./providers/capa/standard to re-run the CAPA standard test suite. This supports multiple test suites, with each path separated by a comma.

@anvddriesch anvddriesch enabled auto-merge (squash) September 24, 2024 12:37
@glitchcrab glitchcrab left a comment

me likey

@anvddriesch anvddriesch merged commit 9ba89e7 into main Sep 24, 2024
14 checks passed
@anvddriesch anvddriesch deleted the migrate-workers branch September 24, 2024 13:00