use giantswarm/cluster to render control plane resources #263

Merged · 29 commits · Aug 20, 2024

Conversation

@glitchcrab (Member) commented Aug 9, 2024

This PR:

  • is a nightmare

@glitchcrab glitchcrab self-assigned this Aug 9, 2024
@glitchcrab glitchcrab requested a review from a team August 9, 2024 15:34
Comment on lines -212 to -218

{{- define "auditLogFiles" -}}
- path: /etc/kubernetes/policies/audit-policy.yaml
permissions: "0600"
encoding: base64
content: {{ $.Files.Get "files/etc/kubernetes/policies/audit-policy.yaml" | b64enc }}
{{- end -}}
@glitchcrab (Member Author):

This is handled by the shared chart now.

@@ -1,3 +1,4 @@
# DEPRECATED - remove once CP and workers are rendered with cluster chart
@glitchcrab (Member Author) commented Aug 9, 2024

I've added these deprecation markers to make it easier to find things to be deleted at a later date - all deprecated functions and files will be handled by the cluster chart once it is fully integrated.
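
Once the cluster chart is fully integrated, the leftovers can be located with something like the following (the marker text is the one added in this PR; the exact path is illustrative):

grep -rn 'DEPRECATED - remove once CP and workers are rendered with cluster chart' helm/cluster-vsphere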

@glitchcrab (Member Author) commented Aug 13, 2024

☝️ e12e8f1 is worth viewing in isolation as it explains why the vspheremachinetemplate resources have been reworked quite heavily (see the full commit message).

@glitchcrab glitchcrab force-pushed the cluster-chart/render-controlplane branch from e12e8f1 to a0aa7ee on August 15, 2024 09:02
@glitchcrab glitchcrab force-pushed the cluster-chart/render-controlplane branch from a0aa7ee to 0c2cefc on August 15, 2024 09:33
Comment on lines +1 to +13
apiVersion: v1
kind: Secret
metadata:
{{/*
You MUST bump the name suffix here and in `values.schema.json` every time one of these files
changes its content. Automatically appending a hash of the content here doesn't work
since we'd need to edit `values.schema.json` as well, but that file is created by humans.
*/}}
name: {{ include "resource.default.name" $ }}-provider-specific-files-1
namespace: {{ $.Release.Namespace | quote }}
data:
set-hostname.sh: {{ tpl ($.Files.Get "files/opt/bin/set-hostname.sh") $ | b64enc | quote }}
type: Opaque
@glitchcrab (Member Author):

The cluster chart currently doesn't have the ability to pass arbitrary file contents to the files section of ignition, so instead we create a secret which stores the contents of the file(s), and the secret's contents are then written out to the file on disk (see https://github.com/giantswarm/cluster-vsphere/pull/263/files#diff-91218d0fa586a5a23b86c05ebd3ed9ed502aee6ace9953024119e735d7e1557dR157-R163).
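
For reference, the consuming side linked above boils down to a files entry in the KubeadmControlPlane spec which pulls the file content from this secret. A sketch of the pattern (destination path and permissions here are illustrative; the secret name and key match the template above):

files:
- path: /opt/bin/set-hostname.sh
  permissions: "0755"
  contentFrom:
    secret:
      name: release-name-provider-specific-files-1
      key: set-hostname.sh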

Comment on lines 174 to 243
{
"contents": {
"install": {
"wantedBy": [
"multi-user.target"
]
},
"unit": {
"description": "VMWare metadata agent"
}
},
"dropins": [
{
"contents": "[Unit]\nAfter=nss-lookup.target\nAfter=network-online.target\nWants=network-online.target\n[Service]\nType=oneshot\nRestart=on-failure\nRemainAfterExit=yes\nEnvironment=OUTPUT=/run/metadata/coreos\nExecStart=/usr/bin/mkdir --parent /run/metadata\nExecStart=/usr/bin/bash -cv 'echo \"COREOS_CUSTOM_HOSTNAME=$(\"$(find /usr/bin /usr/share/oem -name vmtoolsd -type f -executable 2>/dev/null | head -n 1)\" --cmd \"info-get guestinfo.metadata\" | base64 -d | awk \\'/local-hostname/ {print $2}\\' | tr -d \\'\"\\')\" >> ${OUTPUT}'\nExecStart=/usr/bin/bash -cv 'echo \"COREOS_CUSTOM_IPV4=$(\"$(find /usr/bin /usr/share/oem -name vmtoolsd -type f -executable 2>/dev/null | head -n 1)\" --cmd \"info-get guestinfo.ip\")\" >> ${OUTPUT}'",
"name": "10-coreos-metadata.conf"
}
],
"enabled": true,
"name": "coreos-metadata.service"
},
{
"contents": {
"install": {
"wantedBy": [
"multi-user.target"
]
},
"unit": {
"description": "Set machine hostname"
}
},
"dropins": [
{
"contents": "[Unit]\nRequires=coreos-metadata.service\nAfter=coreos-metadata.service\n[Service]\nType=oneshot\nRemainAfterExit=yes\nEnvironmentFile=/run/metadata/coreos\nExecStart=/opt/set-hostname",
"name": "10-set-hostname.conf"
}
],
"enabled": true,
"name": "set-hostname.service"
},
{
"contents": {
"install": {
"wantedBy": [
"default.target"
]
},
"unit": {
"description": "Disable TCP segmentation offloading"
}
},
"dropins": [
{
"contents": "[Unit]\nAfter=network.target\n[Service]\nType=oneshot\nRemainAfterExit=yes\nExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-csum-segmentation off\nExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-segmentation off",
"name": "10-ethtool-segmentation.conf"
}
],
"enabled": true,
"name": "ethtool-segmentation.service"
},
{
"dropins": [
{
"contents": "[Unit]\n# kubeadm must run after coreos-metadata populated /run/metadata directory.\nRequires=coreos-metadata.service\nAfter=coreos-metadata.service\n[Service]\n# Make metadata environment variables available for pre-kubeadm commands.\nEnvironmentFile=/run/metadata/*",
"name": "10-flatcar.conf"
}
],
"enabled": true,
"name": "kubeadm.service"
}
@glitchcrab (Member Author):

This section duplicates provider-specific systemd units. Currently the cluster chart doesn't have the ability to template these units out fully, so a very basic unit file is created with the options available in the schema, and any other service arguments are added via a drop-in file. For now these drop-ins have to be stringified, as JSON doesn't support multi-line strings; the example below shows what one such string unfolds to.
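
For example, the 10-ethtool-segmentation.conf drop-in above is the stringified form of this unit fragment:

[Unit]
After=network.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-csum-segmentation off
ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-segmentation off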

@glitchcrab (Member Author):

GitHub shows this as outdated because the last drop-in for the kubeadm service isn't required, so I've removed it; the cluster chart handles this now. The rest of this section is still valid.

Comment on lines +2 to +13
apiVersion: {{ include "infrastructureApiVersion" . }}
kind: VSphereMachineTemplate
metadata:
name: {{ include "resource.default.name" $ }}-control-plane-{{ include "machineTemplateSpec.hash" (dict "data" (include "controlplane-vspheremachinetemplate-spec" $) "salt" $.Values.cluster.providerIntegration.hashSalt) }}
namespace: {{ $.Release.Namespace }}
labels:
{{- include "labels.common" $ | nindent 4 }}
spec:
template:
spec:
{{- include "controlplane-vspheremachinetemplate-spec" $ | nindent 6 -}}

@glitchcrab (Member Author):

The cluster chart uses a different method to name the machine resources, so we need to align with it. Previously this template used a function which could range over both the control plane and nodepool blocks; however, while the cluster chart is only partly integrated, the two naming schemes are incompatible, so we now have an individual VSphereMachineTemplate for each set of nodes.

{{/*
Generates template spec for control plane machines.
*/}}
{{- define "controlplane-vspheremachinetemplate-spec" -}}
@glitchcrab (Member Author):

This is just the mtSpec function, but improved and separated out.

Comment on lines +13 to +27
{{/*
Hash function based on the data provided.
Expects two arguments (as a `dict`), e.g.
{{ include "machineTemplateSpec.hash" (dict "data" . "salt" .Values.providerIntegration.hashSalt) }}
Where `data` is the data to hash and `salt` is an optional salt string.

NOTE: this function has been copied from the giantswarm/cluster chart
(see `cluster.data.hash`) to ensure that resource naming is identical.
*/}}
{{- define "machineTemplateSpec.hash" -}}
{{- $data := mustToJson .data | toString }}
{{- $salt := "" }}
{{- if .salt }}{{ $salt = .salt}}{{end}}
{{- (printf "%s%s" $data $salt) | quote | sha1sum | trunc 8 }}
{{- end -}}
@glitchcrab (Member Author):

This function is duplicated from the cluster chart and used to calculate the hash suffix which is appended to the control plane VSphereMachineTemplate resource name (the name the KubeadmControlPlane then references); the sketch below shows how the suffix is derived.
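
Roughly, the pipeline is equivalent to the following shell. This is a sketch only: it ignores the escaping that Sprig's quote applies to the JSON payload, so it won't reproduce the chart's exact hashes; it just shows how the 8-character suffix falls out.

# data is the mustToJson-serialised template spec (example value); salt is optional
data='{"cloneMode":"linkedClone"}'
salt=''
printf '"%s%s"' "$data" "$salt" | sha1sum | cut -c1-8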

Comment on lines +1 to +7
#!/bin/sh
set -x
echo "${COREOS_CUSTOM_HOSTNAME}" > /etc/hostname
hostname "${COREOS_CUSTOM_HOSTNAME}"
echo "::1 ipv6-localhost ipv6-loopback" >/etc/hosts
echo "127.0.0.1 localhost" >>/etc/hosts
echo "127.0.0.1 ${COREOS_CUSTOM_HOSTNAME}" >>/etc/hosts
@glitchcrab (Member Author):

Moved out of the ignition spec.

@glitchcrab glitchcrab marked this pull request as ready for review August 19, 2024 09:17
@glitchcrab glitchcrab requested a review from a team August 19, 2024 09:18
@glitchcrab glitchcrab force-pushed the cluster-chart/render-controlplane branch from 37dbc3c to c4c82dd on August 19, 2024 09:44
@glitchcrab glitchcrab force-pushed the cluster-chart/render-controlplane branch from 260b3c6 to 1b47627 on August 19, 2024 09:50
"controlPlane": {
"apiServer": {
"extraArgs": {
"requestheader-allowed-names": "front-proxy-client"
@glitchcrab (Member Author):

We're not entirely sure why this is in the vsphere chart. It may be possible to drop it, but since we're not sure I've elected to keep it for now.

},
"dropins": [
{
"contents": "[Unit]\nAfter=network.target\n[Service]\nType=oneshot\nRemainAfterExit=yes\nExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-csum-segmentation off\nExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-segmentation off",
@glitchcrab (Member Author):

We should revisit this later and turn it into a script which gets called by the service; at the moment we're hard-coding the NIC name, which will probably bite us in the future. A rough sketch of such a script follows.
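
A possible shape for it (a sketch under the assumption that the NIC to tune is the default-route interface, which is not necessarily what we'd settle on):

#!/bin/sh
# Discover the primary NIC instead of hard-coding ens192.
nic="$(ip -o route get 8.8.8.8 | awk '{for (i = 1; i < NF; i++) if ($i == "dev") {print $(i+1); exit}}')"
ethtool -K "${nic}" tx-udp_tnl-csum-segmentation off
ethtool -K "${nic}" tx-udp_tnl-segmentation off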

@glitchcrab glitchcrab force-pushed the cluster-chart/render-controlplane branch from 46659fe to 21d3e15 on August 19, 2024 11:27
@anvddriesch (Contributor) left a comment:

lgtm


There were differences in the rendered Helm template, please check! ⚠️

Output
=== Differences when rendered with values file helm/cluster-vsphere/ci/test-wc-values.yaml ===

(file level)
  - three documents removed:
    ---
    # Source: cluster-vsphere/templates/kubeadmconfigtemplate.yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfigTemplate
    metadata:
      name: release-name-worker-505213e6
      namespace: org-giantswarm
      labels:
        app: cluster-vsphere
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: release-name
        giantswarm.io/cluster: release-name
        giantswarm.io/organization: giantswarm
        application.giantswarm.io/team: rocket
        app.kubernetes.io/version: 0.59.0
        helm.sh/chart: cluster-vsphere-0.59.0
    spec:
      template:
        spec:
          users:
          - name: giantswarm
            sudo: "ALL=(ALL) NOPASSWD:ALL"
          format: ignition
          ignition:
            containerLinuxConfig:
              additionalConfig: |
                storage:
                  files:
                  - path: /opt/set-hostname
                    filesystem: root
                    mode: 0744
                    contents:
                      inline: |
                        #!/bin/sh
                        set -x
                        echo "${COREOS_CUSTOM_HOSTNAME}" > /etc/hostname
                        hostname "${COREOS_CUSTOM_HOSTNAME}"
                        echo "::1         ipv6-localhost ipv6-loopback" >/etc/hosts
                        echo "127.0.0.1   localhost" >>/etc/hosts
                        echo "127.0.0.1   ${COREOS_CUSTOM_HOSTNAME}" >>/etc/hosts
                systemd:
                  units:
                  - name: coreos-metadata.service
                    contents: |
                      [Unit]
                      Description=VMware metadata agent
                      After=nss-lookup.target
                      After=network-online.target
                      Wants=network-online.target
                      [Service]
                      Type=oneshot
                      Restart=on-failure
                      RemainAfterExit=yes
                      Environment=OUTPUT=/run/metadata/coreos
                      ExecStart=/usr/bin/mkdir --parent /run/metadata
                      ExecStart=/usr/bin/bash -cv 'echo "COREOS_CUSTOM_HOSTNAME=$("$(find /usr/bin /usr/share/oem -name vmtoolsd -type f -executable 2>/dev/null | head -n 1)" --cmd "info-get guestinfo.metadata" | base64 -d | grep local-hostname | awk {\'print $2\'} | tr -d \'"\')" > $${OUTPUT}'
                  - name: set-hostname.service
                    enabled: true
                    contents: |
                      [Unit]
                      Description=Set the hostname for this machine
                      Requires=coreos-metadata.service
                      After=coreos-metadata.service
                      [Service]
                      Type=oneshot
                      RemainAfterExit=yes
                      EnvironmentFile=/run/metadata/coreos
                      ExecStart=/opt/set-hostname
                      [Install]
                      WantedBy=multi-user.target
                  - name: ethtool-segmentation.service
                    enabled: true
                    contents: |
                      [Unit]
                      After=network.target
                      [Service]
                      Type=oneshot
                      RemainAfterExit=yes
                      ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-csum-segmentation off
                      ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-segmentation off
                      [Install]
                      WantedBy=default.target
                  - name: kubeadm.service
                    enabled: true
                    dropins:
                    - name: 10-flatcar.conf
                      contents: |
                        [Unit]
                        # kubeadm must run after coreos-metadata populated /run/metadata directory.
                        Requires=coreos-metadata.service
                        After=coreos-metadata.service
                        [Service]
                        # Make metadata environment variables available for pre-kubeadm commands.
                        EnvironmentFile=/run/metadata/*
                  - name: teleport.service
                    enabled: true
                    contents: |
                      [Unit]
                      Description=Teleport Service
                      After=network.target
                      [Service]
                      Type=simple
                      Restart=on-failure
                      ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                      ExecReload=/bin/kill -HUP $MAINPID
                      PIDFile=/run/teleport.pid
                      LimitNOFILE=524288
                      [Install]
                      WantedBy=multi-user.target
          joinConfiguration:
            nodeRegistration:
              criSocket: /run/containerd/containerd.sock
              kubeletExtraArgs:
                cloud-provider: external
                feature-gates: 
                eviction-hard: memory.available<200Mi
                eviction-max-pod-grace-period: 60
                eviction-soft: memory.available<500Mi
                eviction-soft-grace-period: memory.available=5s
                anonymous-auth: "true"
                node-labels: giantswarm.io/node-pool=worker
          files:
          - path: /etc/ssh/trusted-user-ca-keys.pem
            permissions: 0600
            content: |
              ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM4cvZ01fLmO9cJbWUj7sfF+NhECgy+Cl0bazSrZX7sU [email protected]
              
          - path: /etc/ssh/sshd_config
            permissions: 0600
            content: |
              # Use most defaults for sshd configuration.
              Subsystem sftp internal-sftp
              ClientAliveInterval 180
              UseDNS no
              UsePAM yes
              PrintLastLog no # handled by PAM
              PrintMotd no # handled by PAM
              # Non defaults (#100)
              ClientAliveCountMax 2
              PasswordAuthentication no
              TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
              MaxAuthTries 5
              LoginGraceTime 60
              AllowTcpForwarding no
              AllowAgentForwarding no
              
          - path: /etc/teleport-join-token
            permissions: 0644
            contentFrom:
              secret:
                name: release-name-teleport-join-token
                key: joinToken
          - path: /opt/teleport-node-role.sh
            permissions: 0755
            encoding: base64
            content: IyEvYmluL2Jhc2gKCmlmIHN5c3RlbWN0bCBpcy1hY3RpdmUgLXEga3ViZWxldC5zZXJ2aWNlOyB0aGVuCiAgICBpZiBbIC1lICIvZXRjL2t1YmVybmV0ZXMvbWFuaWZlc3RzL2t1YmUtYXBpc2VydmVyLnlhbWwiIF07IHRoZW4KICAgICAgICBlY2hvICJjb250cm9sLXBsYW5lIgogICAgZWxzZQogICAgICAgIGVjaG8gIndvcmtlciIKICAgIGZpCmVsc2UKICAgIGVjaG8gIiIKZmkK
          - path: /etc/teleport.yaml
            permissions: 0644
            encoding: base64
            content: dmVyc2lvbjogdjMKdGVsZXBvcnQ6CiAgZGF0YV9kaXI6IC92YXIvbGliL3RlbGVwb3J0CiAgam9pbl9wYXJhbXM6CiAgICB0b2tlbl9uYW1lOiAvZXRjL3RlbGVwb3J0LWpvaW4tdG9rZW4KICAgIG1ldGhvZDogdG9rZW4KICBwcm94eV9zZXJ2ZXI6IHRlbGVwb3J0LmdpYW50c3dhcm0uaW86NDQzCiAgbG9nOgogICAgb3V0cHV0OiBzdGRlcnIKYXV0aF9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIKc3NoX3NlcnZpY2U6CiAgZW5hYmxlZDogInllcyIKICBjb21tYW5kczoKICAtIG5hbWU6IG5vZGUKICAgIGNvbW1hbmQ6IFtob3N0bmFtZV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogYXJjaAogICAgY29tbWFuZDogW3VuYW1lLCAtbV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogcm9sZQogICAgY29tbWFuZDogWy9vcHQvdGVsZXBvcnQtbm9kZS1yb2xlLnNoXQogICAgcGVyaW9kOiAxbTBzCiAgbGFiZWxzOgogICAgaW5zOiAKICAgIG1jOiAKICAgIGNsdXN0ZXI6IHJlbGVhc2UtbmFtZQogICAgYmFzZURvbWFpbjogazhzLnRlc3QKcHJveHlfc2VydmljZToKICBlbmFibGVkOiAibm8iCg==
          - path: /etc/containerd/config.toml
            permissions: 0600
            contentFrom:
              secret:
                name: release-name-registry-configuration-d5bcde26
                key: registry-config.toml
          preKubeadmCommands:
          - "systemctl restart sshd"
          - "/bin/test ! -d /var/lib/kubelet && (/bin/mkdir -p /var/lib/kubelet && /bin/chmod 0750 /var/lib/kubelet)"
          postKubeadmCommands:
          - "usermod -aG root nobody" # required for node-exporter to access the host's filesystem
    # Source: cluster-vsphere/templates/kubeadmcontrolplane.yaml
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    metadata:
      name: release-name
      namespace: org-giantswarm
      labels:
        app: cluster-vsphere
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: release-name
        giantswarm.io/cluster: release-name
        giantswarm.io/organization: giantswarm
        application.giantswarm.io/team: rocket
        app.kubernetes.io/version: 0.59.0
        helm.sh/chart: cluster-vsphere-0.59.0
    spec:
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            certSANs:
            - 10.10.222.241
            - localhost
            - 127.0.0.1
            - api.release-name.k8s.test
            extraArgs:
              audit-log-maxage: 30
              audit-log-maxbackup: 30
              audit-log-maxsize: 100
              audit-log-path: /var/log/apiserver/audit.log
              audit-policy-file: /etc/kubernetes/policies/audit-policy.yaml
              cloud-provider: external
              enable-admission-plugins: "DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,MutatingAdmissionWebhook,NamespaceLifecycle,PersistentVolumeClaimResize,Priority,ResourceQuota,ServiceAccount,ValidatingAdmissionWebhook"
              kubelet-preferred-address-types: InternalIP
              profiling: "false"
              requestheader-allowed-names: front-proxy-client
              runtime-config: api/all=true
              tls-cipher-suites: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384"
            extraVolumes:
            - name: auditlog
              hostPath: /var/log/apiserver
              mountPath: /var/log/apiserver
              pathType: DirectoryOrCreate
            - name: policies
              hostPath: /etc/kubernetes/policies
              mountPath: /etc/kubernetes/policies
              pathType: DirectoryOrCreate
          controllerManager:
            extraArgs:
              authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
              bind-address: 0.0.0.0
              cloud-provider: external
              enable-hostpath-provisioner: "true"
              terminated-pod-gc-threshold: 125
              profiling: "false"
          etcd:
            local:
              extraArgs:
                listen-metrics-urls: "http://0.0.0.0:2381"
          imageRepository: gsoci.azurecr.io/giantswarm
          scheduler:
            extraArgs:
              authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
              bind-address: 0.0.0.0
          networking:
            serviceSubnet: 172.31.0.0/16
        users:
        - name: giantswarm
          sudo: "ALL=(ALL) NOPASSWD:ALL"
        format: ignition
        ignition:
          containerLinuxConfig:
            additionalConfig: |
              storage:
                files:
                - path: /opt/set-hostname
                  filesystem: root
                  mode: 0744
                  contents:
                    inline: |
                      #!/bin/sh
                      set -x
                      echo "${COREOS_CUSTOM_HOSTNAME}" > /etc/hostname
                      hostname "${COREOS_CUSTOM_HOSTNAME}"
                      echo "::1         ipv6-localhost ipv6-loopback" >/etc/hosts
                      echo "127.0.0.1   localhost" >>/etc/hosts
                      echo "127.0.0.1   ${COREOS_CUSTOM_HOSTNAME}" >>/etc/hosts
              systemd:
                units:
                - name: coreos-metadata.service
                  contents: |
                    [Unit]
                    Description=VMware metadata agent
                    After=nss-lookup.target
                    After=network-online.target
                    Wants=network-online.target
                    [Service]
                    Type=oneshot
                    Restart=on-failure
                    RemainAfterExit=yes
                    Environment=OUTPUT=/run/metadata/coreos
                    ExecStart=/usr/bin/mkdir --parent /run/metadata
                    ExecStart=/usr/bin/bash -cv 'echo "COREOS_CUSTOM_HOSTNAME=$("$(find /usr/bin /usr/share/oem -name vmtoolsd -type f -executable 2>/dev/null | head -n 1)" --cmd "info-get guestinfo.metadata" | base64 -d | grep local-hostname | awk {\'print $2\'} | tr -d \'"\')" > $${OUTPUT}'
                - name: set-hostname.service
                  enabled: true
                  contents: |
                    [Unit]
                    Description=Set the hostname for this machine
                    Requires=coreos-metadata.service
                    After=coreos-metadata.service
                    [Service]
                    Type=oneshot
                    RemainAfterExit=yes
                    EnvironmentFile=/run/metadata/coreos
                    ExecStart=/opt/set-hostname
                    [Install]
                    WantedBy=multi-user.target
                - name: ethtool-segmentation.service
                  enabled: true
                  contents: |
                    [Unit]
                    After=network.target
                    [Service]
                    Type=oneshot
                    RemainAfterExit=yes
                    ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-csum-segmentation off
                    ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-segmentation off
                    [Install]
                    WantedBy=default.target
                - name: kubeadm.service
                  enabled: true
                  dropins:
                  - name: 10-flatcar.conf
                    contents: |
                      [Unit]
                      # kubeadm must run after coreos-metadata populated /run/metadata directory.
                      Requires=coreos-metadata.service
                      After=coreos-metadata.service
                      [Service]
                      # Make metadata environment variables available for pre-kubeadm commands.
                      EnvironmentFile=/run/metadata/*
                - name: teleport.service
                  enabled: true
                  contents: |
                    [Unit]
                    Description=Teleport Service
                    After=network.target
                    [Service]
                    Type=simple
                    Restart=on-failure
                    ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                    ExecReload=/bin/kill -HUP $MAINPID
                    PIDFile=/run/teleport.pid
                    LimitNOFILE=524288
                    [Install]
                    WantedBy=multi-user.target
        initConfiguration:
          skipPhases:
          - addon/coredns
          - addon/kube-proxy
          patches:
            directory: /etc/kubernetes/patches
          nodeRegistration:
            criSocket: /run/containerd/containerd.sock
            kubeletExtraArgs:
              cloud-provider: external
              feature-gates: 
              eviction-hard: memory.available<200Mi
              eviction-max-pod-grace-period: 60
              eviction-soft: memory.available<500Mi
              eviction-soft-grace-period: memory.available=5s
              anonymous-auth: "true"
        joinConfiguration:
          patches:
            directory: /etc/kubernetes/patches
          nodeRegistration:
            criSocket: /run/containerd/containerd.sock
            kubeletExtraArgs:
              cloud-provider: external
              feature-gates: 
              eviction-hard: memory.available<200Mi
              eviction-max-pod-grace-period: 60
              eviction-soft: memory.available<500Mi
              eviction-soft-grace-period: memory.available=5s
              anonymous-auth: "true"
        files:
        - path: /etc/kubernetes/manifests/kube-vip.yaml
          permissions: 0600
          contentFrom:
            secret:
              name: release-name-kubevip-pod
              key: content
        - path: /etc/ssh/trusted-user-ca-keys.pem
          permissions: 0600
          content: |
            ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM4cvZ01fLmO9cJbWUj7sfF+NhECgy+Cl0bazSrZX7sU [email protected]
            
        - path: /etc/ssh/sshd_config
          permissions: 0600
          content: |
            # Use most defaults for sshd configuration.
            Subsystem sftp internal-sftp
            ClientAliveInterval 180
            UseDNS no
            UsePAM yes
            PrintLastLog no # handled by PAM
            PrintMotd no # handled by PAM
            # Non defaults (#100)
            ClientAliveCountMax 2
            PasswordAuthentication no
            TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
            MaxAuthTries 5
            LoginGraceTime 60
            AllowTcpForwarding no
            AllowAgentForwarding no
            
        - path: /etc/teleport-join-token
          permissions: 0644
          contentFrom:
            secret:
              name: release-name-teleport-join-token
              key: joinToken
        - path: /opt/teleport-node-role.sh
          permissions: 0755
          encoding: base64
          content: IyEvYmluL2Jhc2gKCmlmIHN5c3RlbWN0bCBpcy1hY3RpdmUgLXEga3ViZWxldC5zZXJ2aWNlOyB0aGVuCiAgICBpZiBbIC1lICIvZXRjL2t1YmVybmV0ZXMvbWFuaWZlc3RzL2t1YmUtYXBpc2VydmVyLnlhbWwiIF07IHRoZW4KICAgICAgICBlY2hvICJjb250cm9sLXBsYW5lIgogICAgZWxzZQogICAgICAgIGVjaG8gIndvcmtlciIKICAgIGZpCmVsc2UKICAgIGVjaG8gIiIKZmkK
        - path: /etc/teleport.yaml
          permissions: 0644
          encoding: base64
          content: dmVyc2lvbjogdjMKdGVsZXBvcnQ6CiAgZGF0YV9kaXI6IC92YXIvbGliL3RlbGVwb3J0CiAgam9pbl9wYXJhbXM6CiAgICB0b2tlbl9uYW1lOiAvZXRjL3RlbGVwb3J0LWpvaW4tdG9rZW4KICAgIG1ldGhvZDogdG9rZW4KICBwcm94eV9zZXJ2ZXI6IHRlbGVwb3J0LmdpYW50c3dhcm0uaW86NDQzCiAgbG9nOgogICAgb3V0cHV0OiBzdGRlcnIKYXV0aF9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIKc3NoX3NlcnZpY2U6CiAgZW5hYmxlZDogInllcyIKICBjb21tYW5kczoKICAtIG5hbWU6IG5vZGUKICAgIGNvbW1hbmQ6IFtob3N0bmFtZV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogYXJjaAogICAgY29tbWFuZDogW3VuYW1lLCAtbV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogcm9sZQogICAgY29tbWFuZDogWy9vcHQvdGVsZXBvcnQtbm9kZS1yb2xlLnNoXQogICAgcGVyaW9kOiAxbTBzCiAgbGFiZWxzOgogICAgaW5zOiAKICAgIG1jOiAKICAgIGNsdXN0ZXI6IHJlbGVhc2UtbmFtZQogICAgYmFzZURvbWFpbjogazhzLnRlc3QKcHJveHlfc2VydmljZToKICBlbmFibGVkOiAibm8iCg==
        - path: /etc/kubernetes/policies/audit-policy.yaml
          permissions: 0600
          encoding: base64
          content: YXBpVmVyc2lvbjogYXVkaXQuazhzLmlvL3YxCmtpbmQ6IFBvbGljeQpydWxlczoKICAjIFRoZSBmb2xsb3dpbmcgcmVxdWVzdHMgd2VyZSBtYW51YWxseSBpZGVudGlmaWVkIGFzIGhpZ2gtdm9sdW1lIGFuZCBsb3ctcmlzaywKICAjIHNvIGRyb3AgdGhlbS4KICAtIGxldmVsOiBOb25lCiAgICB1c2VyczogWyJzeXN0ZW06a3ViZS1wcm94eSJdCiAgICB2ZXJiczogWyJ3YXRjaCJdCiAgICByZXNvdXJjZXM6CiAgICAgIC0gZ3JvdXA6ICIiICMgY29yZQogICAgICAgIHJlc291cmNlczogWyJlbmRwb2ludHMiLCAic2VydmljZXMiLCAic2VydmljZXMvc3RhdHVzIl0KICAtIGxldmVsOiBOb25lCiAgICAjIEluZ3Jlc3MgY29udHJvbGxlciByZWFkcyAnY29uZmlnbWFwcy9pbmdyZXNzLXVpZCcgdGhyb3VnaCB0aGUgdW5zZWN1cmVkIHBvcnQuCiAgICB1c2VyczogWyJzeXN0ZW06dW5zZWN1cmVkIl0KICAgIG5hbWVzcGFjZXM6IFsia3ViZS1zeXN0ZW0iXQogICAgdmVyYnM6IFsiZ2V0Il0KICAgIHJlc291cmNlczoKICAgICAgLSBncm91cDogIiIgIyBjb3JlCiAgICAgICAgcmVzb3VyY2VzOiBbImNvbmZpZ21hcHMiXQogIC0gbGV2ZWw6IE5vbmUKICAgIHVzZXJzOiBbImt1YmVsZXQiXSAjIGxlZ2FjeSBrdWJlbGV0IGlkZW50aXR5CiAgICB2ZXJiczogWyJnZXQiXQogICAgcmVzb3VyY2VzOgogICAgICAtIGdyb3VwOiAiIiAjIGNvcmUKICAgICAgICByZXNvdXJjZXM6IFsibm9kZXMiLCAibm9kZXMvc3RhdHVzIl0KICAtIGxldmVsOiBOb25lCiAgICB1c2VyR3JvdXBzOiBbInN5c3RlbTpub2RlcyJdCiAgICB2ZXJiczogWyJnZXQiXQogICAgcmVzb3VyY2VzOgogICAgICAtIGdyb3VwOiAiIiAjIGNvcmUKICAgICAgICByZXNvdXJjZXM6IFsibm9kZXMiLCAibm9kZXMvc3RhdHVzIl0KICAtIGxldmVsOiBOb25lCiAgICB1c2VyczoKICAgICAgLSBzeXN0ZW06a3ViZS1jb250cm9sbGVyLW1hbmFnZXIKICAgICAgLSBzeXN0ZW06a3ViZS1zY2hlZHVsZXIKICAgICAgLSBzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06ZW5kcG9pbnQtY29udHJvbGxlcgogICAgdmVyYnM6IFsiZ2V0IiwgInVwZGF0ZSJdCiAgICBuYW1lc3BhY2VzOiBbImt1YmUtc3lzdGVtIl0KICAgIHJlc291cmNlczoKICAgICAgLSBncm91cDogIiIgIyBjb3JlCiAgICAgICAgcmVzb3VyY2VzOiBbImVuZHBvaW50cyJdCiAgLSBsZXZlbDogTm9uZQogICAgdXNlcnM6IFsic3lzdGVtOmFwaXNlcnZlciJdCiAgICB2ZXJiczogWyJnZXQiXQogICAgcmVzb3VyY2VzOgogICAgICAtIGdyb3VwOiAiIiAjIGNvcmUKICAgICAgICByZXNvdXJjZXM6IFsibmFtZXNwYWNlcyIsICJuYW1lc3BhY2VzL3N0YXR1cyIsICJuYW1lc3BhY2VzL2ZpbmFsaXplIl0KICAtIGxldmVsOiBOb25lCiAgICB1c2VyczogWyJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3Rlci1hdXRvc2NhbGVyIl0KICAgIHZlcmJzOiBbImdldCIsICJ1cGRhdGUiXQogICAgbmFtZXNwYWNlczogWyJrdWJlLXN5c3RlbSJdCiAgICByZXNvdXJjZXM6CiAgICAgIC0gZ3JvdXA6ICIiICMgY29yZQogICAgICAgIHJlc291cmNlczogWyJjb25maWdtYXBzIiwgImVuZHBvaW50cyJdCiAgIyBEb24ndCBsb2cgSFBBIGZldGNoaW5nIG1ldHJpY3MuCiAgLSBsZXZlbDogTm9uZQogICAgdXNlcnM6CiAgICAgIC0gc3lzdGVtOmt1YmUtY29udHJvbGxlci1tYW5hZ2VyCiAgICB2ZXJiczogWyJnZXQiLCAibGlzdCJdCiAgICByZXNvdXJjZXM6CiAgICAgIC0gZ3JvdXA6ICJtZXRyaWNzLms4cy5pbyIKICAjIERvbid0IGxvZyB0aGVzZSByZWFkLW9ubHkgVVJMcy4KICAtIGxldmVsOiBOb25lCiAgICBub25SZXNvdXJjZVVSTHM6CiAgICAgIC0gL2hlYWx0aHoqCiAgICAgIC0gL3ZlcnNpb24KICAgICAgLSAvc3dhZ2dlcioKICAjIERvbid0IGxvZyBldmVudHMgcmVxdWVzdHMuCiAgLSBsZXZlbDogTm9uZQogICAgcmVzb3VyY2VzOgogICAgICAtIGdyb3VwOiAiIiAjIGNvcmUKICAgICAgICByZXNvdXJjZXM6IFsiZXZlbnRzIl0KICAjIG5vZGUgYW5kIHBvZCBzdGF0dXMgY2FsbHMgZnJvbSBub2RlcyBhcmUgaGlnaC12b2x1bWUgYW5kIGNhbiBiZSBsYXJnZSwgZG9uJ3QgbG9nIHJlc3BvbnNlcyBmb3IgZXhwZWN0ZWQgdXBkYXRlcyBmcm9tIG5vZGVzCiAgLSBsZXZlbDogUmVxdWVzdAogICAgdXNlcnM6CiAgICAgIFsKICAgICAgICAia3ViZWxldCIsCiAgICAgICAgInN5c3RlbTpub2RlLXByb2JsZW0tZGV0ZWN0b3IiLAogICAgICAgICJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06bm9kZS1wcm9ibGVtLWRldGVjdG9yIiwKICAgICAgXQogICAgdmVyYnM6IFsidXBkYXRlIiwgInBhdGNoIl0KICAgIHJlc291cmNlczoKICAgICAgLSBncm91cDogIiIgIyBjb3JlCiAgICAgICAgcmVzb3VyY2VzOiBbIm5vZGVzL3N0YXR1cyIsICJwb2RzL3N0YXR1cyJdCiAgICBvbWl0U3RhZ2VzOgogICAgICAtICJSZXF1ZXN0UmVjZWl2ZWQiCiAgLSBsZXZlbDogUmVxdWVzdAogICAgdXNlckdyb3VwczogWyJzeXN0ZW06bm9kZXMiXQogICAgdmVyYnM6IFsidXBkYXRlIiwgInBhdGNoIl0KICAgIHJlc291cmNlczoKICAgICAgLSBncm91cDogIiIgIyBjb3JlCiAgICAgICAgcmVzb3VyY2VzOiBbIm5vZGVzL3N0YXR1cyIsICJw
b2RzL3N0YXR1cyJdCiAgICBvbWl0U3RhZ2VzOgogICAgICAtICJSZXF1ZXN0UmVjZWl2ZWQiCiAgIyBkZWxldGVjb2xsZWN0aW9uIGNhbGxzIGNhbiBiZSBsYXJnZSwgZG9uJ3QgbG9nIHJlc3BvbnNlcyBmb3IgZXhwZWN0ZWQgbmFtZXNwYWNlIGRlbGV0aW9ucwogIC0gbGV2ZWw6IFJlcXVlc3QKICAgIHVzZXJzOiBbInN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpuYW1lc3BhY2UtY29udHJvbGxlciJdCiAgICB2ZXJiczogWyJkZWxldGVjb2xsZWN0aW9uIl0KICAgIG9taXRTdGFnZXM6CiAgICAgIC0gIlJlcXVlc3RSZWNlaXZlZCIKICAjIFNlY3JldHMsIENvbmZpZ01hcHMsIGFuZCBUb2tlblJldmlld3MgY2FuIGNvbnRhaW4gc2Vuc2l0aXZlICYgYmluYXJ5IGRhdGEsCiAgIyBzbyBvbmx5IGxvZyBhdCB0aGUgTWV0YWRhdGEgbGV2ZWwuCiAgLSBsZXZlbDogTWV0YWRhdGEKICAgIHJlc291cmNlczoKICAgICAgLSBncm91cDogIiIgIyBjb3JlCiAgICAgICAgcmVzb3VyY2VzOiBbInNlY3JldHMiLCAiY29uZmlnbWFwcyJdCiAgICAgIC0gZ3JvdXA6IGF1dGhlbnRpY2F0aW9uLms4cy5pbwogICAgICAgIHJlc291cmNlczogWyJ0b2tlbnJldmlld3MiXQogICAgb21pdFN0YWdlczoKICAgICAgLSAiUmVxdWVzdFJlY2VpdmVkIgogICMgR2V0IHJlcHNvbnNlcyBjYW4gYmUgbGFyZ2U7IHNraXAgdGhlbS4KICAtIGxldmVsOiBSZXF1ZXN0CiAgICB2ZXJiczogWyJnZXQiLCAibGlzdCIsICJ3YXRjaCJdCiAgICByZXNvdXJjZXM6CiAgICAgIC0gZ3JvdXA6ICIiICMgY29yZQogICAgICAtIGdyb3VwOiAiYWRtaXNzaW9ucmVnaXN0cmF0aW9uLms4cy5pbyIKICAgICAgLSBncm91cDogImFwaWV4dGVuc2lvbnMuazhzLmlvIgogICAgICAtIGdyb3VwOiAiYXBpcmVnaXN0cmF0aW9uLms4cy5pbyIKICAgICAgLSBncm91cDogImFwcHMiCiAgICAgIC0gZ3JvdXA6ICJhdXRoZW50aWNhdGlvbi5rOHMuaW8iCiAgICAgIC0gZ3JvdXA6ICJhdXRob3JpemF0aW9uLms4cy5pbyIKICAgICAgLSBncm91cDogImF1dG9zY2FsaW5nIgogICAgICAtIGdyb3VwOiAiYmF0Y2giCiAgICAgIC0gZ3JvdXA6ICJjZXJ0aWZpY2F0ZXMuazhzLmlvIgogICAgICAtIGdyb3VwOiAiZXh0ZW5zaW9ucyIKICAgICAgLSBncm91cDogIm1ldHJpY3MuazhzLmlvIgogICAgICAtIGdyb3VwOiAibmV0d29ya2luZy5rOHMuaW8iCiAgICAgIC0gZ3JvdXA6ICJwb2xpY3kiCiAgICAgIC0gZ3JvdXA6ICJyYmFjLmF1dGhvcml6YXRpb24uazhzLmlvIgogICAgICAtIGdyb3VwOiAic2NoZWR1bGluZy5rOHMuaW8iCiAgICAgIC0gZ3JvdXA6ICJzZXR0aW5ncy5rOHMuaW8iCiAgICAgIC0gZ3JvdXA6ICJzdG9yYWdlLms4cy5pbyIKICAgIG9taXRTdGFnZXM6CiAgICAgIC0gIlJlcXVlc3RSZWNlaXZlZCIKICAjIERlZmF1bHQgbGV2ZWwgZm9yIGtub3duIEFQSXMKICAtIGxldmVsOiBSZXF1ZXN0UmVzcG9uc2UKICAgIHJlc291cmNlczoKICAgICAgLSBncm91cDogIiIgIyBjb3JlCiAgICAgIC0gZ3JvdXA6ICJhZG1pc3Npb25yZWdpc3RyYXRpb24uazhzLmlvIgogICAgICAtIGdyb3VwOiAiYXBpZXh0ZW5zaW9ucy5rOHMuaW8iCiAgICAgIC0gZ3JvdXA6ICJhcGlyZWdpc3RyYXRpb24uazhzLmlvIgogICAgICAtIGdyb3VwOiAiYXBwcyIKICAgICAgLSBncm91cDogImF1dGhlbnRpY2F0aW9uLms4cy5pbyIKICAgICAgLSBncm91cDogImF1dGhvcml6YXRpb24uazhzLmlvIgogICAgICAtIGdyb3VwOiAiYXV0b3NjYWxpbmciCiAgICAgIC0gZ3JvdXA6ICJiYXRjaCIKICAgICAgLSBncm91cDogImNlcnRpZmljYXRlcy5rOHMuaW8iCiAgICAgIC0gZ3JvdXA6ICJleHRlbnNpb25zIgogICAgICAtIGdyb3VwOiAibWV0cmljcy5rOHMuaW8iCiAgICAgIC0gZ3JvdXA6ICJuZXR3b3JraW5nLms4cy5pbyIKICAgICAgLSBncm91cDogInBvbGljeSIKICAgICAgLSBncm91cDogInJiYWMuYXV0aG9yaXphdGlvbi5rOHMuaW8iCiAgICAgIC0gZ3JvdXA6ICJzY2hlZHVsaW5nLms4cy5pbyIKICAgICAgLSBncm91cDogInNldHRpbmdzLms4cy5pbyIKICAgICAgLSBncm91cDogInN0b3JhZ2UuazhzLmlvIgogICAgb21pdFN0YWdlczoKICAgICAgLSAiUmVxdWVzdFJlY2VpdmVkIgogICMgRGVmYXVsdCBsZXZlbCBmb3IgYWxsIG90aGVyIHJlcXVlc3RzLgogIC0gbGV2ZWw6IE1ldGFkYXRhCiAgICBvbWl0U3RhZ2VzOgogICAgICAtICJSZXF1ZXN0UmVjZWl2ZWQiCg==
        - path: /etc/containerd/config.toml
          permissions: 0600
          contentFrom:
            secret:
              name: release-name-registry-configuration-d5bcde26
              key: registry-config.toml
        - path: /etc/kubernetes/patches/kube-apiserver+json.tpl
          permissions: 0600
          content: |
            [
                {
                    "op": "add",
                    "path": "/spec/dnsPolicy",
                    "value": "ClusterFirstWithHostNet"
                },
                {
                    "op": "add",
                    "path": "/spec/containers/0/command/-",
                    "value": "--max-requests-inflight=$MAX_REQUESTS_INFLIGHT"
                },
                {
                    "op": "add",
                    "path": "/spec/containers/0/command/-",
                    "value": "--max-mutating-requests-inflight=$MAX_MUTATING_REQUESTS_INFLIGHT"
                },
                {
                    "op": "replace",
                    "path": "/spec/containers/0/resources/requests/cpu",
                    "value": "$API_SERVER_CPU_REQUEST"
                },
                {
                    "op": "replace",
                    "path": "/spec/containers/0/resources/requests/memory",
                    "value": "$API_SERVER_MEMORY_REQUEST"
                }
            ]
        - path: /etc/kubernetes/patches/kube-apiserver-patch.sh
          permissions: 0600
          content: |
            #!/usr/bin/env bash
            
            #
            # Creates kube-apiserver+json.json file by replacing
            # environment variables in kube-apiserver+json.tpl
            #
            
            set -o errexit
            set -o nounset
            set -o pipefail
            set -x
            
            if [ "$#" -ne 1 ]; then
                echo "Illegal number of parameters" >&2
                echo "Usage: bash kube-apiserver-patch.sh <resource-ratio>" >&2
                exit 1
            fi
            
            ratio=$1
            
            cpus="$(grep -c ^processor /proc/cpuinfo)"
            memory="$(awk '/MemTotal/ { printf "%d \n", $2/1024 }' /proc/meminfo)"
            
            export MAX_REQUESTS_INFLIGHT=$((cpus*(1600/ratio)))
            export MAX_MUTATING_REQUESTS_INFLIGHT=$((cpus*(800/ratio)))
            export API_SERVER_CPU_REQUEST=$((cpus*(1000/ratio)))m
            export API_SERVER_MEMORY_REQUEST=$((memory/ratio))Mi
            
            envsubst < "/etc/kubernetes/patches/kube-apiserver+json.tpl"  > "/etc/kubernetes/patches/kube-apiserver+json.json"
        preKubeadmCommands:
        - "systemctl restart sshd"
        - "bash /etc/kubernetes/patches/kube-apiserver-patch.sh 8"
        - "/bin/test ! -d /var/lib/kubelet && (/bin/mkdir -p /var/lib/kubelet && /bin/chmod 0750 /var/lib/kubelet)"
      machineTemplate:
        infrastructureRef:
          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: VSphereMachineTemplate
          name: release-name-control-plane-81f02996
          namespace: org-giantswarm
      replicas: 1
      version: v1.27.14
    # Source: cluster-vsphere/templates/vspheremachinetemplate.yaml
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereMachineTemplate
    metadata:
      name: release-name-control-plane-81f02996
      namespace: org-giantswarm
      labels:
        app: cluster-vsphere
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: release-name
        giantswarm.io/cluster: release-name
        giantswarm.io/organization: giantswarm
        application.giantswarm.io/team: rocket
        app.kubernetes.io/version: 0.59.0
        helm.sh/chart: cluster-vsphere-0.59.0
    spec:
      template:
        spec:
          datacenter: Datacenter
          datastore: vsanDatastore
          server: "https://foo.example.com"
          thumbprint: "F7:CF:F9:E5:99:39:FF:C1:D7:14:F1:3F:8A:42:21:95:3B:A1:6E:16"
          cloneMode: linkedClone
          diskGiB: 25
          memoryMiB: 8192
          network:
            devices:
            - dhcp4: true
              networkName: grasshopper-capv
          numCPUs: 2
          resourcePool: grasshopper
          template: flatcar-stable-3815.2.2-kube-v1.27.14-gs
    
  
    ---
    # Source: cluster-vsphere/charts/cluster/templates/containerd.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: test-containerd-b21d846e
    data:
      config.toml: dmVyc2lvbiA9IDIKCiMgcmVjb21tZW5kZWQgZGVmYXVsdHMgZnJvbSBodHRwczovL2dpdGh1Yi5jb20vY29udGFpbmVyZC9jb250YWluZXJkL2Jsb2IvbWFpbi9kb2NzL29wcy5tZCNiYXNlLWNvbmZpZ3VyYXRpb24KIyBzZXQgY29udGFpbmVyZCBhcyBhIHN1YnJlYXBlciBvbiBsaW51eCB3aGVuIGl0IGlzIG5vdCBydW5uaW5nIGFzIFBJRCAxCnN1YnJlYXBlciA9IHRydWUKIyBzZXQgY29udGFpbmVyZCdzIE9PTSBzY29yZQpvb21fc2NvcmUgPSAtOTk5CmRpc2FibGVkX3BsdWdpbnMgPSBbXQpbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4Il0KIyBzaGltIGJpbmFyeSBuYW1lL3BhdGgKc2hpbSA9ICJjb250YWluZXJkLXNoaW0iCiMgcnVudGltZSBiaW5hcnkgbmFtZS9wYXRoCnJ1bnRpbWUgPSAicnVuYyIKIyBkbyBub3QgdXNlIGEgc2hpbSB3aGVuIHN0YXJ0aW5nIGNvbnRhaW5lcnMsIHNhdmVzIG9uIG1lbW9yeSBidXQKIyBsaXZlIHJlc3RvcmUgaXMgbm90IHN1cHBvcnRlZApub19zaGltID0gZmFsc2UKCltwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiMgc2V0dGluZyBydW5jLm9wdGlvbnMgdW5zZXRzIHBhcmVudCBzZXR0aW5ncwpydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgpbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lcy5ydW5jLm9wdGlvbnNdClN5c3RlbWRDZ3JvdXAgPSB0cnVlCltwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIl0Kc2FuZGJveF9pbWFnZSA9ICJnc29jaS5henVyZWNyLmlvL2dpYW50c3dhcm0vcGF1c2U6My45IgoKW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeS5taXJyb3JzXQogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIiwiaHR0cHM6Ly9naWFudHN3YXJtLmF6dXJlY3IuaW8iLF0KICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeS5taXJyb3JzLiJnc29jaS5henVyZWNyLmlvIl0KICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vem90Li5rOHMudGVzdCIsImh0dHBzOi8vZ3NvY2kuYXp1cmVjci5pbyIsXQpbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkuY29uZmlnc10K
    # Source: cluster-vsphere/templates/provider-specific-files.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: release-name-provider-specific-files-1
      namespace: org-giantswarm
    data:
      set-hostname.sh: IyEvYmluL3NoCnNldCAteAplY2hvICIke0NPUkVPU19DVVNUT01fSE9TVE5BTUV9IiA+IC9ldGMvaG9zdG5hbWUKaG9zdG5hbWUgIiR7Q09SRU9TX0NVU1RPTV9IT1NUTkFNRX0iCmVjaG8gIjo6MSAgICAgICAgIGlwdjYtbG9jYWxob3N0IGlwdjYtbG9vcGJhY2siID4vZXRjL2hvc3RzCmVjaG8gIjEyNy4wLjAuMSAgIGxvY2FsaG9zdCIgPj4vZXRjL2hvc3RzCmVjaG8gIjEyNy4wLjAuMSAgICR7Q09SRU9TX0NVU1RPTV9IT1NUTkFNRX0iID4+L2V0Yy9ob3N0cw==
    type: Opaque
    # Source: cluster-vsphere/templates/kubeadmconfigtemplate.yaml
    apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
    kind: KubeadmConfigTemplate
    metadata:
      name: release-name-worker-bf570225
      namespace: org-giantswarm
      labels:
        app: cluster-vsphere
        app.kubernetes.io/managed-by: Helm
        cluster.x-k8s.io/cluster-name: release-name
        giantswarm.io/cluster: release-name
        giantswarm.io/organization: giantswarm
        application.giantswarm.io/team: rocket
        app.kubernetes.io/version: 0.59.0
        helm.sh/chart: cluster-vsphere-0.59.0
    spec:
      template:
        spec:
          users:
          - name: giantswarm
            sudo: "ALL=(ALL) NOPASSWD:ALL"
          format: ignition
          ignition:
            containerLinuxConfig:
              additionalConfig: |
                storage:
                  files:
                  - path: /opt/set-hostname
                    filesystem: root
                    mode: 0744
                    contents:
                      inline: |
                        #!/bin/sh
                        set -x
                        echo "${COREOS_CUSTOM_HOSTNAME}" > /etc/hostname
                        hostname "${COREOS_CUSTOM_HOSTNAME}"
                        echo "::1         ipv6-localhost ipv6-loopback" >/etc/hosts
                        echo "127.0.0.1   localhost" >>/etc/hosts
                        echo "127.0.0.1   ${COREOS_CUSTOM_HOSTNAME}" >>/etc/hosts
                systemd:
                  units:
                  - name: coreos-metadata.service
                    contents: |
                      [Unit]
                      Description=VMware metadata agent
                      After=nss-lookup.target
                      After=network-online.target
                      Wants=network-online.target
                      [Service]
                      Type=oneshot
                      Restart=on-failure
                      RemainAfterExit=yes
                      Environment=OUTPUT=/run/metadata/coreos
                      ExecStart=/usr/bin/mkdir --parent /run/metadata
                      ExecStart=/usr/bin/bash -cv 'echo "COREOS_CUSTOM_HOSTNAME=$("$(find /usr/bin /usr/share/oem -name vmtoolsd -type f -executable 2>/dev/null | head -n 1)" --cmd "info-get guestinfo.metadata" | base64 -d | grep local-hostname | awk {\'print $2\'} | tr -d \'"\')" > $${OUTPUT}'
                  - name: set-hostname.service
                    enabled: true
                    contents: |
                      [Unit]
                      Description=Set the hostname for this machine
                      Requires=coreos-metadata.service
                      After=coreos-metadata.service
                      [Service]
                      Type=oneshot
                      RemainAfterExit=yes
                      EnvironmentFile=/run/metadata/coreos
                      ExecStart=/opt/set-hostname
                      [Install]
                      WantedBy=multi-user.target
                  - name: ethtool-segmentation.service
                    enabled: true
                    contents: |
                      [Unit]
                      After=network.target
                      [Service]
                      Type=oneshot
                      RemainAfterExit=yes
                      ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-csum-segmentation off
                      ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-segmentation off
                      [Install]
                      WantedBy=default.target
                  - name: kubeadm.service
                    enabled: true
                    dropins:
                    - name: 10-flatcar.conf
                      contents: |
                        [Unit]
                        # kubeadm must run after coreos-metadata populated /run/metadata directory.
                        Requires=coreos-metadata.service
                        After=coreos-metadata.service
                        [Service]
                        # Make metadata environment variables available for pre-kubeadm commands.
                        EnvironmentFile=/run/metadata/*
                  - name: teleport.service
                    enabled: true
                    contents: |
                      [Unit]
                      Description=Teleport Service
                      After=network.target
                      [Service]
                      Type=simple
                      Restart=on-failure
                      ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                      ExecReload=/bin/kill -HUP $MAINPID
                      PIDFile=/run/teleport.pid
                      LimitNOFILE=524288
                      [Install]
                      WantedBy=multi-user.target
          joinConfiguration:
            nodeRegistration:
              criSocket: /run/containerd/containerd.sock
              kubeletExtraArgs:
                # DEPRECATED - remove once CP and workers are rendered with cluster chart
    
    cloud-provider: external
                feature-gates: 
                eviction-hard: memory.available<200Mi
                eviction-max-pod-grace-period: 60
                eviction-soft: memory.available<500Mi
                eviction-soft-grace-period: memory.available=5s
                anonymous-auth: "true"
                node-labels: giantswarm.io/node-pool=worker
          files:
          - path: /etc/ssh/trusted-user-ca-keys.pem
            permissions: 0600
            content: |
              ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM4cvZ01fLmO9cJbWUj7sfF+NhECgy+Cl0bazSrZX7sU [email protected]
              
          - path: /etc/ssh/sshd_config
            permissions: 0600
            content: |
              # DEPRECATED - remove once CP and workers are rendered with cluster chart
              # Use most defaults for sshd configuration.
              Subsystem sftp internal-sftp
              ClientAliveInterval 180
              UseDNS no
              UsePAM yes
              PrintLastLog no # handled by PAM
              PrintMotd no # handled by PAM
              # Non defaults (#100)
              ClientAliveCountMax 2
              PasswordAuthentication no
              TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem
              MaxAuthTries 5
              LoginGraceTime 60
              AllowTcpForwarding no
              AllowAgentForwarding no
              
          - path: /etc/teleport-join-token
            permissions: 0644
            contentFrom:
              secret:
                name: release-name-teleport-join-token
                key: joinToken
          - path: /opt/teleport-node-role.sh
            permissions: 0755
            encoding: base64
            content: IyEvYmluL2Jhc2gKCmlmIHN5c3RlbWN0bCBpcy1hY3RpdmUgLXEga3ViZWxldC5zZXJ2aWNlOyB0aGVuCiAgICBpZiBbIC1lICIvZXRjL2t1YmVybmV0ZXMvbWFuaWZlc3RzL2t1YmUtYXBpc2VydmVyLnlhbWwiIF07IHRoZW4KICAgICAgICBlY2hvICJjb250cm9sLXBsYW5lIgogICAgZWxzZQogICAgICAgIGVjaG8gIndvcmtlciIKICAgIGZpCmVsc2UKICAgIGVjaG8gIiIKZmkK
          - path: /etc/teleport.yaml
            permissions: 0644
            encoding: base64
            content: IyBERVBSRUNBVEVEIC0gcmVtb3ZlIG9uY2UgQ1AgYW5kIHdvcmtlcnMgYXJlIHJlbmRlcmVkIHdpdGggY2x1c3RlciBjaGFydAp2ZXJzaW9uOiB2Mwp0ZWxlcG9ydDoKICBkYXRhX2RpcjogL3Zhci9saWIvdGVsZXBvcnQKICBqb2luX3BhcmFtczoKICAgIHRva2VuX25hbWU6IC9ldGMvdGVsZXBvcnQtam9pbi10b2tlbgogICAgbWV0aG9kOiB0b2tlbgogIHByb3h5X3NlcnZlcjogdGVsZXBvcnQuZ2lhbnRzd2FybS5pbzo0NDMKICBsb2c6CiAgICBvdXRwdXQ6IHN0ZGVycgphdXRoX3NlcnZpY2U6CiAgZW5hYmxlZDogIm5vIgpzc2hfc2VydmljZToKICBlbmFibGVkOiAieWVzIgogIGNvbW1hbmRzOgogIC0gbmFtZTogbm9kZQogICAgY29tbWFuZDogW2hvc3RuYW1lXQogICAgcGVyaW9kOiAyNGgwbTBzCiAgLSBuYW1lOiBhcmNoCiAgICBjb21tYW5kOiBbdW5hbWUsIC1tXQogICAgcGVyaW9kOiAyNGgwbTBzCiAgLSBuYW1lOiByb2xlCiAgICBjb21tYW5kOiBbL29wdC90ZWxlcG9ydC1ub2RlLXJvbGUuc2hdCiAgICBwZXJpb2Q6IDFtMHMKICBsYWJlbHM6CiAgICBpbnM6IAogICAgbWM6IAogICAgY2x1c3RlcjogcmVsZWFzZS1uYW1lCiAgICBiYXNlRG9tYWluOiBrOHMudGVzdApwcm94eV9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIK
          - path: /etc/containerd/config.toml
            permissions: 0600
            contentFrom:
              secret:
                name: release-name-registry-configuration-d5bcde26
                key: registry-config.toml
          preKubeadmCommands:
          - "systemctl restart sshd"
          - "/bin/test ! -d /var/lib/kubelet && (/bin/mkdir -p /var/lib/kubelet && /bin/chmod 0750 /var/lib/kubelet)"
          postKubeadmCommands:
          - "usermod -aG root nobody" # required for node-exporter to access the host's filesystem
    # Source: cluster-vsphere/charts/cluster/templates/clusterapi/controlplane/kubeadmcontrolplane.yaml
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    metadata:
      annotations:
        helm.sh/resource-policy: keep
      labels:
        # deprecated: "app: cluster-vsphere" label is deprecated and it will be removed after upgrading
    # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
    # need this label on the Cluster resource.
    app: cluster-vsphere
        app.kubernetes.io/name: cluster
        app.kubernetes.io/version: 1.2.1
        app.kubernetes.io/part-of: cluster-vsphere
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        helm.sh/chart: cluster-1.2.1
        application.giantswarm.io/team: turtles
        giantswarm.io/cluster: test
        giantswarm.io/organization: giantswarm
        giantswarm.io/service-priority: highest
        cluster.x-k8s.io/cluster-name: test
        cluster.x-k8s.io/watch-filter: capi
      name: test
      namespace: org-giantswarm
    spec:
      machineTemplate:
        metadata:
          labels:
            # deprecated: "app: cluster-vsphere" label is deprecated and it will be removed after upgrading
    # to Kubernetes 1.25. We still need it here because existing ClusterResourceSet selectors
    # need this label on the Cluster resource.
    app: cluster-vsphere
            app.kubernetes.io/name: cluster
            app.kubernetes.io/version: 1.2.1
            app.kubernetes.io/part-of: cluster-vsphere
            app.kubernetes.io/instance: release-name
            app.kubernetes.io/managed-by: Helm
            helm.sh/chart: cluster-1.2.1
            application.giantswarm.io/team: turtles
            giantswarm.io/cluster: test
            giantswarm.io/organization: giantswarm
            giantswarm.io/service-priority: highest
            cluster.x-k8s.io/cluster-name: test
            cluster.x-k8s.io/watch-filter: capi
        infrastructureRef:
          apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
          kind: VSphereMachineTemplate
          name: test-control-plane-37795751
      kubeadmConfigSpec:
        diskSetup:
          # Workaround for https://github.com/kubernetes-sigs/cluster-api/issues/7679.
    # Don't define partitions here, they are defined in "ignition.containerLinuxConfig.additionalConfig"      
    filesystems: 
        mounts: 
        format: ignition
        ignition:
          containerLinuxConfig:
            additionalConfig: |
              systemd:
                units:      
                - name: os-hardening.service
                  enabled: true
                  contents: |
                    [Unit]
                    Description=Apply os hardening
                    [Service]
                    Type=oneshot
                    ExecStartPre=-/bin/bash -c "gpasswd -d core rkt; gpasswd -d core docker; gpasswd -d core wheel"
                    ExecStartPre=/bin/bash -c "until [ -f '/etc/sysctl.d/hardening.conf' ]; do echo Waiting for sysctl file; sleep 1s;done;"
                    ExecStart=/usr/sbin/sysctl -p /etc/sysctl.d/hardening.conf
                    [Install]
                    WantedBy=multi-user.target
                - name: update-engine.service
                  enabled: false
                  mask: true
                - name: locksmithd.service
                  enabled: false
                  mask: true
                - name: sshkeys.service
                  enabled: false
                  mask: true
                - name: teleport.service
                  enabled: true
                  contents: |
                    [Unit]
                    Description=Teleport Service
                    After=network.target
                    [Service]
                    Type=simple
                    Restart=on-failure
                    ExecStart=/opt/bin/teleport start --roles=node --config=/etc/teleport.yaml --pid-file=/run/teleport.pid
                    ExecReload=/bin/kill -HUP $MAINPID
                    PIDFile=/run/teleport.pid
                    LimitNOFILE=524288
                    [Install]
                    WantedBy=multi-user.target
                - name: kubeadm.service
                  dropins:
                  - name: 10-flatcar.conf
                    contents: |
                      [Unit]
                      # kubeadm must run after coreos-metadata populated /run/metadata directory.
                      Requires=coreos-metadata.service
                      After=coreos-metadata.service
                      # kubeadm must run after containerd - see https://github.com/kubernetes-sigs/image-builder/issues/939.
                      After=containerd.service
                      # kubeadm requires having an IP
                      After=network-online.target
                      Wants=network-online.target
                      [Service]
                      # Ensure kubeadm service has access to kubeadm binary in /opt/bin on Flatcar.
                      Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/opt/bin
                      # To make metadata environment variables available for pre-kubeadm commands.
                      EnvironmentFile=/run/metadata/*
                - name: containerd.service
                  enabled: true
                  dropins:
                  - name: 10-change-cgroup.conf
                    contents: |
                      [Service]
                      CPUAccounting=true
                      MemoryAccounting=true
                      Slice=kubereserved.slice
                - name: audit-rules.service
                  enabled: true
                  dropins:
                  - name: 10-wait-for-containerd.conf
                    contents: |
                      [Service]
                      ExecStartPre=/bin/bash -c "while [ ! -f /etc/audit/rules.d/containerd.rules ]; do echo 'Waiting for /etc/audit/rules.d/containerd.rules to be written' && sleep 1; done"
                      Restart=on-failure      
                - name: etcd3-defragmentation.service
                  enabled: false
                  contents: |
                    [Unit]
                    Description=etcd defragmentation job
                    After=containerd.service kubelet.service
                    Requires=containerd.service kubelet.service
                    [Service]
                    Type=oneshot
                    ExecStart=/bin/sh -c "crictl exec $(crictl ps --name=^etcd$ -q) etcdctl \
                      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
                      --cert=/etc/kubernetes/pki/etcd/peer.crt \
                      --key=/etc/kubernetes/pki/etcd/peer.key \
                      defrag \
                      --command-timeout=60s \
                      --dial-timeout=60s \
                      --keepalive-timeout=25s"
                - name: etcd3-defragmentation.timer
                  enabled: true
                  contents: |
                    [Unit]
                    Description=Execute etcd3-defragmentation hourly
                    [Timer]
                    OnCalendar=hourly
                    RandomizedDelaySec=55m
                    FixedRandomDelay=true
                    Unit=etcd3-defragmentation.service
                    [Install]
                    WantedBy=multi-user.target      
                - name: coreos-metadata.service
                  enabled: true
                  contents: |
                    [Unit]
                    Description=VMWare metadata agent
                    [Install]
                    WantedBy=multi-user.target
                  dropins:
                  - name: 10-coreos-metadata.conf
                    contents: |
                      [Unit]
                      After=nss-lookup.target
                      After=network-online.target
                      Wants=network-online.target
                      [Service]
                      Type=oneshot
                      Restart=on-failure
                      RemainAfterExit=yes
                      Environment=OUTPUT=/run/metadata/coreos
                      ExecStart=/usr/bin/mkdir --parent /run/metadata
                      ExecStart=/usr/bin/bash -cv 'echo "COREOS_CUSTOM_HOSTNAME=$("$(find /usr/bin /usr/share/oem -name vmtoolsd -type f -executable 2>/dev/null | head -n 1)" --cmd "info-get guestinfo.metadata" | base64 -d | awk \'/local-hostname/ {print $2}\' | tr -d \'"\')" >> ${OUTPUT}'
                      ExecStart=/usr/bin/bash -cv 'echo "COREOS_CUSTOM_IPV4=$("$(find /usr/bin /usr/share/oem -name vmtoolsd -type f -executable 2>/dev/null | head -n 1)" --cmd "info-get guestinfo.ip")" >> ${OUTPUT}'
                - name: set-hostname.service
                  enabled: true
                  contents: |
                    [Unit]
                    Description=Set machine hostname
                    [Install]
                    WantedBy=multi-user.target
                  dropins:
                  - name: 10-set-hostname.conf
                    contents: |
                      [Unit]
                      Requires=coreos-metadata.service
                      After=coreos-metadata.service
                      Before=teleport.service
                      [Service]
                      Type=oneshot
                      RemainAfterExit=yes
                      EnvironmentFile=/run/metadata/coreos
                      ExecStart=/opt/bin/set-hostname.sh
                - name: ethtool-segmentation.service
                  enabled: true
                  contents: |
                    [Unit]
                    Description=Disable TCP segmentation offloading
                    [Install]
                    WantedBy=default.target
                  dropins:
                  - name: 10-ethtool-segmentation.conf
                    contents: |
                      [Unit]
                      After=network.target
                      [Service]
                      Type=oneshot
                      RemainAfterExit=yes
                      ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-csum-segmentation off
                      ExecStart=/usr/sbin/ethtool -K ens192 tx-udp_tnl-segmentation off
              storage:
                filesystems:
                directories:
                - path: /var/lib/kubelet
                  mode: 0750
                disks:

        clusterConfiguration:
          # Avoid accessibility issues (e.g. on private clusters) and potential future rate limits for
          # the default `registry.k8s.io`.
          imageRepository: gsoci.azurecr.io/giantswarm
          # API server configuration
          apiServer:
            certSANs:
            - localhost
            - 127.0.0.1
            - api.test.k8s.test
            - apiserver.test.k8s.test
            timeoutForControlPlane: 20m
            extraArgs:
              audit-log-maxage: 30
              audit-log-maxbackup: 30
              audit-log-maxsize: 100
              audit-log-path: /var/log/apiserver/audit.log
              audit-policy-file: /etc/kubernetes/policies/audit-policy.yaml
              cloud-provider: external
              enable-admission-plugins: "DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,MutatingAdmissionWebhook,NamespaceLifecycle,PersistentVolumeClaimResize,Priority,ResourceQuota,ServiceAccount,ValidatingAdmissionWebhook"
              enable-priority-and-fairness: "true"
              encryption-provider-config: /etc/kubernetes/encryption/config.yaml
              feature-gates: StatefulSetAutoDeletePVC=true
              kubelet-preferred-address-types: InternalIP
              profiling: "false"
              runtime-config: api/all=true
              service-account-lookup: "true"
              service-cluster-ip-range: 172.31.0.0/16
              tls-cipher-suites: "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384"
              requestheader-allowed-names: front-proxy-client
            extraVolumes:
            - name: auditlog
              hostPath: /var/log/apiserver
              mountPath: /var/log/apiserver
              readOnly: false
              pathType: DirectoryOrCreate
            - name: policies
              hostPath: /etc/kubernetes/policies
              mountPath: /etc/kubernetes/policies
              readOnly: false
              pathType: DirectoryOrCreate
            - name: encryption
              hostPath: /etc/kubernetes/encryption
              mountPath: /etc/kubernetes/encryption
              readOnly: false
              pathType: DirectoryOrCreate
          controllerManager:
            extraArgs:
              allocate-node-cidrs: "true"
              authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
              bind-address: 0.0.0.0
              cloud-provider: external
              cluster-cidr: 10.244.0.0/16
              feature-gates: StatefulSetAutoDeletePVC=true
              terminated-pod-gc-threshold: 125
          scheduler:
            extraArgs:
              authorization-always-allow-paths: "/healthz,/readyz,/livez,/metrics"
              bind-address: 0.0.0.0
          etcd:
            local:
              extraArgs:
                listen-metrics-urls: "http://0.0.0.0:2381"
                quota-backend-bytes: 8589934592
          networking:
            serviceSubnet: 172.31.0.0/16
        initConfiguration:
          skipPhases:
          - addon/kube-proxy
          - addon/coredns
          localAPIEndpoint:
            advertiseAddress: 
            bindPort: 6443
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
              healthz-bind-address: 0.0.0.0
              node-ip: ${COREOS_CUSTOM_IPV4}
              node-labels: ip=${COREOS_CUSTOM_IPV4}
              v: 2
            name: ${COREOS_CUSTOM_HOSTNAME}
          patches:
            directory: /etc/kubernetes/patches
        joinConfiguration:
          discovery: {}
          controlPlane:
            localAPIEndpoint:
              bindPort: 6443
          nodeRegistration:
            kubeletExtraArgs:
              cloud-provider: external
              node-ip: ${COREOS_CUSTOM_IPV4}
              node-labels: ip=${COREOS_CUSTOM_IPV4}
            name: ${COREOS_CUSTOM_HOSTNAME}
          patches:
            directory: /etc/kubernetes/patches
        files:
        - path: /etc/sysctl.d/hardening.conf
          permissions: 0644
          encoding: base64
          content: ZnMuaW5vdGlmeS5tYXhfdXNlcl93YXRjaGVzID0gMTYzODQKZnMuaW5vdGlmeS5tYXhfdXNlcl9pbnN0YW5jZXMgPSA4MTkyCmtlcm5lbC5rcHRyX3Jlc3RyaWN0ID0gMgprZXJuZWwuc3lzcnEgPSAwCm5ldC5pcHY0LmNvbmYuYWxsLmxvZ19tYXJ0aWFucyA9IDEKbmV0LmlwdjQuY29uZi5hbGwuc2VuZF9yZWRpcmVjdHMgPSAwCm5ldC5pcHY0LmNvbmYuZGVmYXVsdC5hY2NlcHRfcmVkaXJlY3RzID0gMApuZXQuaXB2NC5jb25mLmRlZmF1bHQubG9nX21hcnRpYW5zID0gMQpuZXQuaXB2NC50Y3BfdGltZXN0YW1wcyA9IDAKbmV0LmlwdjYuY29uZi5hbGwuYWNjZXB0X3JlZGlyZWN0cyA9IDAKbmV0LmlwdjYuY29uZi5kZWZhdWx0LmFjY2VwdF9yZWRpcmVjdHMgPSAwCiMgSW5jcmVhc2VkIG1tYXBmcyBiZWNhdXNlIHNvbWUgYXBwbGljYXRpb25zLCBsaWtlIEVTLCBuZWVkIGhpZ2hlciBsaW1pdCB0byBzdG9yZSBkYXRhIHByb3Blcmx5CnZtLm1heF9tYXBfY291bnQgPSAyNjIxNDQKIyBSZXNlcnZlZCB0byBhdm9pZCBjb25mbGljdHMgd2l0aCBrdWJlLWFwaXNlcnZlciwgd2hpY2ggYWxsb2NhdGVzIHdpdGhpbiB0aGlzIHJhbmdlCm5ldC5pcHY0LmlwX2xvY2FsX3Jlc2VydmVkX3BvcnRzPTMwMDAwLTMyNzY3Cm5ldC5pcHY0LmNvbmYuYWxsLnJwX2ZpbHRlciA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2lnbm9yZSA9IDEKbmV0LmlwdjQuY29uZi5hbGwuYXJwX2Fubm91bmNlID0gMgoKIyBUaGVzZSBhcmUgcmVxdWlyZWQgZm9yIHRoZSBrdWJlbGV0ICctLXByb3RlY3Qta2VybmVsLWRlZmF1bHRzJyBmbGFnCiMgU2VlIGh0dHBzOi8vZ2l0aHViLmNvbS9naWFudHN3YXJtL2dpYW50c3dhcm0vaXNzdWVzLzEzNTg3CnZtLm92ZXJjb21taXRfbWVtb3J5PTEKa2VybmVsLnBhbmljPTEwCmtlcm5lbC5wYW5pY19vbl9vb3BzPTEK
        - path: /etc/selinux/config
          permissions: 0644
          encoding: base64
          content: IyBUaGlzIGZpbGUgY29udHJvbHMgdGhlIHN0YXRlIG9mIFNFTGludXggb24gdGhlIHN5c3RlbSBvbiBib290LgoKIyBTRUxJTlVYIGNhbiB0YWtlIG9uZSBvZiB0aGVzZSB0aHJlZSB2YWx1ZXM6CiMgICAgICAgZW5mb3JjaW5nIC0gU0VMaW51eCBzZWN1cml0eSBwb2xpY3kgaXMgZW5mb3JjZWQuCiMgICAgICAgcGVybWlzc2l2ZSAtIFNFTGludXggcHJpbnRzIHdhcm5pbmdzIGluc3RlYWQgb2YgZW5mb3JjaW5nLgojICAgICAgIGRpc2FibGVkIC0gTm8gU0VMaW51eCBwb2xpY3kgaXMgbG9hZGVkLgpTRUxJTlVYPXBlcm1pc3NpdmUKCiMgU0VMSU5VWFRZUEUgY2FuIHRha2Ugb25lIG9mIHRoZXNlIGZvdXIgdmFsdWVzOgojICAgICAgIHRhcmdldGVkIC0gT25seSB0YXJnZXRlZCBuZXR3b3JrIGRhZW1vbnMgYXJlIHByb3RlY3RlZC4KIyAgICAgICBzdHJpY3QgICAtIEZ1bGwgU0VMaW51eCBwcm90ZWN0aW9uLgojICAgICAgIG1scyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1MZXZlbCBTZWN1cml0eQojICAgICAgIG1jcyAgICAgIC0gRnVsbCBTRUxpbnV4IHByb3RlY3Rpb24gd2l0aCBNdWx0aS1DYXRlZ29yeSBTZWN1cml0eQojICAgICAgICAgICAgICAgICAgKG1scywgYnV0IG9ubHkgb25lIHNlbnNpdGl2aXR5IGxldmVsKQpTRUxJTlVYVFlQRT1tY3MK
        - path: /etc/ssh/trusted-user-ca-keys.pem
          permissions: 0600
          encoding: base64
          content: c3NoLWVkMjU1MTkgQUFBQUMzTnphQzFsWkRJMU5URTVBQUFBSU00Y3ZaMDFmTG1POWNKYldVajdzZkYrTmhFQ2d5K0NsMGJhelNyWlg3c1UgdmF1bHQtY2FAdmF1bHQub3BlcmF0aW9ucy5naWFudHN3YXJtLmlvCg==
        - path: /etc/ssh/sshd_config
          permissions: 0600
          encoding: base64
          content: IyBVc2UgbW9zdCBkZWZhdWx0cyBmb3Igc3NoZCBjb25maWd1cmF0aW9uLgpTdWJzeXN0ZW0gc2Z0cCBpbnRlcm5hbC1zZnRwCkNsaWVudEFsaXZlSW50ZXJ2YWwgMTgwClVzZUROUyBubwpVc2VQQU0geWVzClByaW50TGFzdExvZyBubyAjIGhhbmRsZWQgYnkgUEFNClByaW50TW90ZCBubyAjIGhhbmRsZWQgYnkgUEFNCiMgTm9uIGRlZmF1bHRzICgjMTAwKQpDbGllbnRBbGl2ZUNvdW50TWF4IDIKUGFzc3dvcmRBdXRoZW50aWNhdGlvbiBubwpUcnVzdGVkVXNlckNBS2V5cyAvZXRjL3NzaC90cnVzdGVkLXVzZXItY2Eta2V5cy5wZW0KTWF4QXV0aFRyaWVzIDUKTG9naW5HcmFjZVRpbWUgNjAKQWxsb3dUY3BGb3J3YXJkaW5nIG5vCkFsbG93QWdlbnRGb3J3YXJkaW5nIG5vCkNBU2lnbmF0dXJlQWxnb3JpdGhtcyBlY2RzYS1zaGEyLW5pc3RwMjU2LGVjZHNhLXNoYTItbmlzdHAzODQsZWNkc2Etc2hhMi1uaXN0cDUyMSxzc2gtZWQyNTUxOSxyc2Etc2hhMi01MTIscnNhLXNoYTItMjU2LHNzaC1yc2EK
        - path: /etc/containerd/config.toml
          permissions: 0644
          contentFrom:
            secret:
              name: test-containerd-b21d846e
              key: config.toml
        - path: /etc/kubernetes/patches/kubeletconfiguration.yaml
          permissions: 0644
          encoding: base64
          content: YXBpVmVyc2lvbjoga3ViZWxldC5jb25maWcuazhzLmlvL3YxYmV0YTEKa2luZDogS3ViZWxldENvbmZpZ3VyYXRpb24Kc2h1dGRvd25HcmFjZVBlcmlvZDogMzAwcwpzaHV0ZG93bkdyYWNlUGVyaW9kQ3JpdGljYWxQb2RzOiA2MHMKa2VybmVsTWVtY2dOb3RpZmljYXRpb246IHRydWUKZXZpY3Rpb25Tb2Z0OgogIG1lbW9yeS5hdmFpbGFibGU6ICI1MDBNaSIKZXZpY3Rpb25IYXJkOgogIG1lbW9yeS5hdmFpbGFibGU6ICIyMDBNaSIKICBpbWFnZWZzLmF2YWlsYWJsZTogIjE1JSIKZXZpY3Rpb25Tb2Z0R3JhY2VQZXJpb2Q6CiAgbWVtb3J5LmF2YWlsYWJsZTogIjVzIgpldmljdGlvbk1heFBvZEdyYWNlUGVyaW9kOiA2MAprdWJlUmVzZXJ2ZWQ6CiAgY3B1OiAzNTBtCiAgbWVtb3J5OiAxMjgwTWkKICBlcGhlbWVyYWwtc3RvcmFnZTogMTAyNE1pCmt1YmVSZXNlcnZlZENncm91cDogL2t1YmVyZXNlcnZlZC5zbGljZQpwcm90ZWN0S2VybmVsRGVmYXVsdHM6IHRydWUKc3lzdGVtUmVzZXJ2ZWQ6CiAgY3B1OiAyNTBtCiAgbWVtb3J5OiAzODRNaQpzeXN0ZW1SZXNlcnZlZENncm91cDogL3N5c3RlbS5zbGljZQp0bHNDaXBoZXJTdWl0ZXM6Ci0gVExTX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19BRVNfMjU2X0dDTV9TSEEzODQKLSBUTFNfQ0hBQ0hBMjBfUE9MWTEzMDVfU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9DQkNfU0hBCi0gVExTX0VDREhFX0VDRFNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX0VDRFNBX1dJVEhfQ0hBQ0hBMjBfUE9MWTEzMDUKLSBUTFNfRUNESEVfRUNEU0FfV0lUSF9DSEFDSEEyMF9QT0xZMTMwNV9TSEEyNTYKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzEyOF9DQkNfU0hBCi0gVExTX0VDREhFX1JTQV9XSVRIX0FFU18xMjhfR0NNX1NIQTI1NgotIFRMU19FQ0RIRV9SU0FfV0lUSF9BRVNfMjU2X0NCQ19TSEEKLSBUTFNfRUNESEVfUlNBX1dJVEhfQUVTXzI1Nl9HQ01fU0hBMzg0Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1Ci0gVExTX0VDREhFX1JTQV9XSVRIX0NIQUNIQTIwX1BPTFkxMzA1X1NIQTI1NgotIFRMU19SU0FfV0lUSF9BRVNfMTI4X0NCQ19TSEEKLSBUTFNfUlNBX1dJVEhfQUVTXzEyOF9HQ01fU0hBMjU2Ci0gVExTX1JTQV9XSVRIX0FFU18yNTZfQ0JDX1NIQQotIFRMU19SU0FfV0lUSF9BRVNfMjU2X0dDTV9TSEEzODQKc2VyaWFsaXplSW1hZ2VQdWxsczogZmFsc2UKc3RyZWFtaW5nQ29ubmVjdGlvbklkbGVUaW1lb3V0OiAxaAphbGxvd2VkVW5zYWZlU3lzY3RsczoKLSAibmV0LioiCg==
        - path: /etc/systemd/logind.conf.d/zzz-kubelet-graceful-shutdown.conf
          permissions: 0700
          encoding: base64
          content: W0xvZ2luXQojIGRlbGF5CkluaGliaXREZWxheU1heFNlYz0zMDAK
        - path: /etc/teleport-join-token
          permissions: 0644
          contentFrom:
            secret:
              name: test-teleport-join-token
              key: joinToken
        - path: /opt/teleport-node-role.sh
          permissions: 0755
          encoding: base64
          content: IyEvYmluL2Jhc2gKCmlmIHN5c3RlbWN0bCBpcy1hY3RpdmUgLXEga3ViZWxldC5zZXJ2aWNlOyB0aGVuCiAgICBpZiBbIC1lICIvZXRjL2t1YmVybmV0ZXMvbWFuaWZlc3RzL2t1YmUtYXBpc2VydmVyLnlhbWwiIF07IHRoZW4KICAgICAgICBlY2hvICJjb250cm9sLXBsYW5lIgogICAgZWxzZQogICAgICAgIGVjaG8gIndvcmtlciIKICAgIGZpCmVsc2UKICAgIGVjaG8gIiIKZmkK
        - path: /etc/teleport.yaml
          permissions: 0644
          encoding: base64
          content: dmVyc2lvbjogdjMKdGVsZXBvcnQ6CiAgZGF0YV9kaXI6IC92YXIvbGliL3RlbGVwb3J0CiAgam9pbl9wYXJhbXM6CiAgICB0b2tlbl9uYW1lOiAvZXRjL3RlbGVwb3J0LWpvaW4tdG9rZW4KICAgIG1ldGhvZDogdG9rZW4KICBwcm94eV9zZXJ2ZXI6IHRlbGVwb3J0LmdpYW50c3dhcm0uaW86NDQzCiAgbG9nOgogICAgb3V0cHV0OiBzdGRlcnIKYXV0aF9zZXJ2aWNlOgogIGVuYWJsZWQ6ICJubyIKc3NoX3NlcnZpY2U6CiAgZW5hYmxlZDogInllcyIKICBjb21tYW5kczoKICAtIG5hbWU6IG5vZGUKICAgIGNvbW1hbmQ6IFtob3N0bmFtZV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogYXJjaAogICAgY29tbWFuZDogW3VuYW1lLCAtbV0KICAgIHBlcmlvZDogMjRoMG0wcwogIC0gbmFtZTogcm9sZQogICAgY29tbWFuZDogWy9vcHQvdGVsZXBvcnQtbm9kZS1yb2xlLnNoXQogICAgcGVyaW9kOiAxbTBzCiAgbGFiZWxzOgogICAgaW5zOiAKICAgIG1jOiAKICAgIGNsdXN0ZXI6IHRlc3QKICAgIGJhc2VEb21haW46IGs4cy50ZXN0CnByb3h5X3NlcnZpY2U6CiAgZW5hYmxlZDogIm5vIgo=
        - path: /etc/audit/rules.d/99-default.rules
          permissions: 0640
          encoding: base64
          content: IyBPdmVycmlkZGVuIGJ5IEdpYW50IFN3YXJtLgotYSBleGl0LGFsd2F5cyAtRiBhcmNoPWI2NCAtUyBleGVjdmUgLWsgYXVkaXRpbmcKLWEgZXhpdCxhbHdheXMgLUYgYXJjaD1iMzIgLVMgZXhlY3ZlIC1rIGF1ZGl0aW5nCg==
        - contentFrom:
            secret:
              key: set-hostname.sh
              name: test-provider-specific-files-1
          path: /opt/bin/set-hostname.sh
          permissions: 0755
        - path: /etc/kubernetes/policies/audit-policy.yaml
          permissions: 0600
          encoding: base64
          content: YXBpVmVyc2lvbjogYXVkaXQuazhzLmlvL3YxCmtpbmQ6IFBvbGljeQpydWxlczoKICAjIFRoZSBmb2xsb3dpbmcgcmVxdWVzdHMgd2VyZSBtYW51YWxseSBpZG...*[Comment body truncated]*
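
Most of the inlined payloads above are base64-encoded `content` fields, which are hard to review by eye. A minimal sketch for decoding them locally, assuming the rendered KubeadmControlPlane above is saved as kcp.yaml and yq v4 is available (the file name is a placeholder):

# List every file that ships inline base64 content:
yq eval '.spec.kubeadmConfigSpec.files[] | select(.content != null) | .path' kcp.yaml

# Decode a single file, e.g. the audit policy:
yq eval '.spec.kubeadmConfigSpec.files[] | select(.path == "/etc/kubernetes/policies/audit-policy.yaml") | .content' kcp.yaml | base64 -d

# Files using contentFrom reference a Secret instead; against a live cluster you could inspect e.g.:
kubectl -n org-giantswarm get secret test-provider-specific-files-1 -o jsonpath='{.data.set-hostname\.sh}' | base64 -d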

@glitchcrab
Copy link
Member Author

/run cluster-test-suites

@tinkerers-ci
Copy link

tinkerers-ci bot commented Aug 19, 2024

cluster-test-suites

Run name: pr-cluster-vsphere-263-cluster-test-suitesnhdrs
Commit SHA: 6140679
Result: Completed ✅

📋 View full results in Tekton Dashboard

Rerun trigger:
/run cluster-test-suites


Tip

To only re-run the failed test suites you can provide a TARGET_SUITES parameter with your trigger that points to the directory path of the test suites to run, e.g. /run cluster-test-suites TARGET_SUITES=./providers/capa/standard to re-run the CAPA standard test suite. This supports multiple test suites with each path separated by a comma.
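
For this repo that might look like, for example (assuming the vSphere suites live under ./providers/capv):

/run cluster-test-suites TARGET_SUITES=./providers/capv/standard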

Copy link
Member

@njuettner njuettner left a comment

LGTM!

@glitchcrab glitchcrab merged commit 10ed2c3 into main Aug 20, 2024
14 checks passed
@glitchcrab glitchcrab deleted the cluster-chart/render-controlplane branch August 20, 2024 07:50