
Releases: giantswarm/cluster-vsphere

v0.66.0

14 Nov 13:24
9284225

Changed

  • Use Renovate to update kube-vip static pod manifest.
  • Updated giantswarm/cluster to v1.6.0.
  • Update kubectl image used by IPAM job to 1.29.9.
  • Use init-container to prepare /etc/hosts file for kube-vip.

v0.65.2

28 Oct 20:17
a265199
Release v0.65.2 (#300)

v0.65.1

23 Oct 13:55
1801985

Changed

  • Render Flatcar and Kubernetes version from cluster chart.

v0.65.0

15 Oct 10:16
a2d162c

⚠️ Breaking change ⚠️

  • Support for Release CRs.
Migration steps
  • In ConfigMap <cluster name>-userconfig set .Values.global.release to the release version, e.g. 27.0.0.
  • In App <cluster name> set the version to an empty string.
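As a sketch, assuming a cluster named mycluster in namespace org-example (both hypothetical; other spec fields omitted), the two resources would look roughly like this after the migration:

```yaml
# Hypothetical example: cluster "mycluster" in namespace "org-example".
# ConfigMap <cluster name>-userconfig: set global.release to the release version.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mycluster-userconfig
  namespace: org-example
data:
  values: |
    global:
      release: "27.0.0"
---
# App <cluster name>: set the version to an empty string.
apiVersion: application.giantswarm.io/v1alpha1
kind: App
metadata:
  name: mycluster
  namespace: org-example
spec:
  name: cluster-vsphere
  version: ""
```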

Changed

  • Update kube-vip static pod to v0.8.3.
  • Allow .Values.global.managementCluster in values schema.

v0.64.0

24 Sep 15:40
d8ed69a

Changed

  • Migrated all worker resources (KubeadmConfigTemplate, MachineDeployment and MachineHealthCheck) to be rendered from the shared cluster chart.
  • Render cleanup hook job using cluster chart.

v0.63.0

03 Sep 08:00
37a9927

Added

  • Add global.connectivity.network.loadBalancers.numberOfIps to specify the number of preassigned IPs for load balancers. (The new default is 3.)
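In the cluster's user values this looks like:

```yaml
global:
  connectivity:
    network:
      loadBalancers:
        numberOfIps: 3  # number of preassigned IPs for load balancers (default: 3)
```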

v0.62.0

02 Sep 09:58
e234f6d
  • Allow adding custom annotations to the infrastructure cluster resource using providerSpecific.additionalVsphereClusterAnnotations value.
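For example (the annotation key and value below are purely illustrative):

```yaml
providerSpecific:
  additionalVsphereClusterAnnotations:
    # hypothetical annotation, for illustration only
    example.io/team: platform
```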

v0.61.0

26 Aug 17:54
bec680a

Warning

This release adds all default apps to cluster-vsphere, so the default-apps-vsphere App is no longer used. The changes in
cluster-vsphere are breaking, and a cluster upgrade requires manual steps: the default-apps-vsphere App must be removed
before upgrading cluster-vsphere. See details below.

Added

  • Render capi-node-labeler App CR from cluster chart.
  • Render cert-exporter App CR from cluster chart and add vSphere-specific cert-exporter config.
  • Render cert-manager App CR from cluster chart and add vSphere-specific cert-manager config.
  • Render chart-operator-extensions App CR from cluster chart.
  • Render cilium HelmRelease CR from cluster chart and add vSphere-specific cilium config.
  • Render cilium-servicemonitors App CR from cluster chart.
  • Render coredns HelmRelease CR from cluster chart.
  • Render etcd-kubernetes-resources-count-exporter App CR from cluster chart.
  • Render k8s-dns-node-cache App CR from cluster chart.
  • Render metrics-server App CR from cluster chart.
  • Render net-exporter App CR from cluster chart.
  • Render network-policies HelmRelease CR from cluster chart and add vSphere-specific network-policies config.
  • Render node-exporter App CR from cluster chart and add vSphere-specific node-exporter config.
  • Render observability-bundle App CR from cluster chart.
  • Render observability-policies App CR from cluster chart.
  • Render security-bundle App CR from cluster chart.
  • Render teleport-kube-agent App CR from cluster chart.
  • Render vertical-pod-autoscaler App CR from cluster chart.
  • Render vertical-pod-autoscaler-crd HelmRelease CR from cluster chart.
  • Render HelmRepository CRs from cluster chart.
  • Add missing Helm value .Values.global.controlPlane.apiServerPort.
  • Add Makefile template target that renders manifests with CI values from the chart.
  • Add Makefile generate target that normalizes and validates schema, generates docs and Helm values, and updates Helm dependencies.

Removed

  • Remove cilium HelmRelease.
  • Remove coredns HelmRelease.
  • Remove network-policies HelmRelease.
  • Remove HelmRepository CRs.

⚠️ Workload cluster upgrade with manual steps

The steps to upgrade a workload cluster, unifying cluster-vsphere and default-apps-vsphere, are the following:

  • Upgrade default-apps-vsphere App to the v0.16.0 release.
  • Update default-apps-vsphere Helm value .Values.deleteOptions.moveAppsHelmOwnershipToClusterVSphere to true.
    • All App CRs, except observability-bundle and security-bundle, will get the app-operator.giantswarm.io/paused: true annotation,
      so wait a few minutes for the Helm post-upgrade hook to apply the change to all required App CRs.
  • Delete default-apps-vsphere CR.
    • ⚠️ If you are removing the default-apps-vsphere App CR from a GitOps repo managed by Flux, then, depending on
      how Flux is configured, the App CR may or may not get deleted from the management cluster. If Flux does not
      delete the default-apps-vsphere App CR from the management cluster, make sure to delete it manually.
    • App CRs (on the MC) for all default apps will get deleted. Wait a few minutes for this to happen.
    • Chart CRs on the workload cluster will remain untouched, so all apps will continue running.
  • Upgrade cluster-vsphere App CR to the v0.61.0 release.
    • cluster-vsphere will deploy all default apps, so wait a few minutes for all Apps to be successfully deployed.
    • Chart resources on the workload cluster will get updated, as newly deployed App resources will take over the reconciliation
      of the existing Chart resources.
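For reference, the Helm value from step 2 sits at the top level of the default-apps-vsphere user values:

```yaml
# User values for the default-apps-vsphere App (step 2 above):
deleteOptions:
  moveAppsHelmOwnershipToClusterVSphere: true
```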

We're almost there, with just one more issue to fix manually.

The VPA CRD used to be installed as an App resource from default-apps-vsphere, and it is now installed as a HelmRelease from
cluster-vsphere. As a consequence of the above upgrade, we have the following situation:

  • The default-apps-vsphere App has been deleted, but the vertical-pod-autoscaler-crd Chart CR remained in the workload cluster.
  • cluster-vsphere has been upgraded, so it now also installs the vertical-pod-autoscaler-crd HelmRelease.
  • Outcome: we now have a vertical-pod-autoscaler-crd HelmRelease in the MC and a vertical-pod-autoscaler-crd Chart CR in the WC.

Now we will remove the leftover vertical-pod-autoscaler-crd Chart CR in a safe way:

  1. Pause the vertical-pod-autoscaler-crd Chart CR.

Add the annotation chart-operator.giantswarm.io/paused: "true" to the vertical-pod-autoscaler-crd Chart CR in the workload cluster:

kubectl annotate -n giantswarm chart vertical-pod-autoscaler-crd chart-operator.giantswarm.io/paused="true" --overwrite

  2. Delete the vertical-pod-autoscaler-crd Chart CR in the workload cluster.

kubectl delete -n giantswarm chart vertical-pod-autoscaler-crd

The command will probably hang, as the chart-operator finalizer is not being removed (the vertical-pod-autoscaler-crd
Chart CR has been paused). Proceed to the next step to remove the finalizer and unblock the deletion.

  3. Remove the finalizers from the vertical-pod-autoscaler-crd Chart CR.

Open another terminal window and run the following command to remove the vertical-pod-autoscaler-crd Chart CR finalizers:

kubectl patch chart vertical-pod-autoscaler-crd -n giantswarm --type=json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'

This will unblock the deletion, and vertical-pod-autoscaler-crd will get removed without actually deleting the VPA CustomResourceDefinition.

From now on, the VPA CustomResourceDefinition will be maintained by the vertical-pod-autoscaler-crd HelmRelease on the management cluster.

v0.60.1

23 Aug 07:52
79b0839

Fixed

  • Rename caFile to caPem in values schema.

v0.60.0

22 Aug 07:25
1254a75

⚠️ Breaking change ⚠️

Caution

It is important that you check each of the sections in the upgrade guide below. Note that some may not apply to your specific cluster configuration. However, the cleanup section must always be run against the cluster values.

Upgrade guide: how to migrate values (from v0.59.0)

Use the snippets below if the section applies to your chart's values:

Control Plane endpoint address

If the controlPlane endpoint IP (the load balancer for the Kubernetes API) has been statically assigned (this likely will not apply to workload clusters), then this value must be duplicated into the extraCertificateSANs list.

yq eval --inplace 'with(select(.global.connectivity.network.controlPlaneEndpoint.host != null); .cluster.internal.advancedConfiguration.controlPlane.apiServer.extraCertificateSANs += [ .global.connectivity.network.controlPlaneEndpoint.host ])' values.yaml

API server admission plugins

The default list of admission plugins is defined in the shared cluster chart. If you have not extended this list, then you do not need to provide a list of admission plugins at all (the defaults from the cluster chart will be used). If this is the case, please ignore the following command.

yq eval --inplace 'with(select(.internal.apiServer.enableAdmissionPlugins != null); .cluster.providerIntegration.controlPlane.kubeadmConfig.clusterConfiguration.apiServer.additionalAdmissionPlugins = .internal.apiServer.enableAdmissionPlugins)' values.yaml

API server feature gates

There is no default list of feature gates in the shared cluster chart, so if you have any values under .internal.apiServer.featureGates then these must be migrated to the new location.

yq eval --inplace 'with(select(.internal.apiServer.featureGates != null); .cluster.providerIntegration.controlPlane.kubeadmConfig.clusterConfiguration.apiServer.featureGates = .internal.apiServer.featureGates)' values.yaml

OIDC config

caFile has been renamed to caPem.

yq eval --inplace 'with(select(.global.controlPlane.oidc.caFile != null); .global.controlPlane.oidc.caPem = .global.controlPlane.oidc.caFile)' values.yaml

SSH trusted CA keys

If you are providing additional trusted CA keys for SSH authentication (other than the default Giant Swarm key), then these need to be migrated to the new location.

yq eval --inplace 'with(select(.global.connectivity.shell.sshTrustedUserCAKeys != null); .cluster.providerIntegration.connectivity.sshSsoPublicKey = .global.connectivity.shell.sshTrustedUserCAKeys)' values.yaml

Upstream proxy settings

If your cluster is behind an upstream proxy (if .global.connectivity.proxy.enabled: true) then the proxy configuration must also be added to the cluster chart's values.

  • httpProxy: upstream proxy protocol, address and port (e.g. http://proxy-address:port)
  • httpsProxy: upstream proxy protocol, address and port (e.g. http://proxy-address:port)
  • noProxy: comma-separated list of domains and IP CIDRs which should not be proxied (e.g. 10.10.10.0/24,internal.domain.com)
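A minimal sketch of the resulting values, assuming the three keys sit directly under the existing .global.connectivity.proxy path (the exact key path and the addresses are placeholders, not confirmed by this release):

```yaml
global:
  connectivity:
    proxy:
      enabled: true
      httpProxy: http://proxy-address:port    # placeholder address
      httpsProxy: http://proxy-address:port   # placeholder address
      noProxy: 10.10.10.0/24,internal.domain.com
```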

Additional notes:

  • Encryption is always enabled with the shared cluster chart, so this toggle is removed entirely (.internal.enableEncryptionProvider).
  • OIDC groupsPrefix and usernamePrefix are removed.
  • Upstream proxy configuration is no longer read from the .global.connectivity.proxy.secretName value.

Cleanup

Final tidyup to remove deprecated values:

yq eval --inplace 'del(.internal.apiServer.enableAdmissionPlugins) |
    del(.internal.apiServer.featureGates) |
    del(.internal.enableEncryptionProvider) |
    del(.global.controlPlane.oidc.caFile) |
    del(.global.controlPlane.oidc.groupsPrefix) |
    del(.global.controlPlane.oidc.usernamePrefix) |
    del(.global.connectivity.shell.sshTrustedUserCAKeys) |
    del(.global.connectivity.proxy.secretName) |
    del(.internal.apiServer) |
    del(.internal.controllerManager)' values.yaml