
Releases: vitobotta/hetzner-k3s

v2.2.6

08 Feb 21:34

Fixes

  • Fixed the network zone associations for Ashburn and Hillsboro locations

v2.2.5

06 Feb 16:16

Fixes

  • Fixed the network zones for the US and Singapore areas - by @mcampa

v2.2.4

03 Feb 19:56

Improvements

  • Cluster deletion: after deleting the load balancer (which must be removed before the masters), we now switch to the first master's kubeconfig context. This allows us to keep identifying and deleting autoscaled nodes, as well as static nodes, via kubectl. Previously, autoscaled nodes were left behind because the load balancer was already gone at that point, so we couldn't reach the Kubernetes API to detect them.
  • When removing unused placement groups, we now focus only on those whose names begin with the cluster name. This helps prevent accidentally deleting placement groups that were created by the user or other tools within the same project.

v2.2.3

30 Jan 23:25

New

  • A highly requested feature is now available: you can configure a regional cluster with each master placed in a different location across Europe, ensuring the highest level of availability. Additionally, you have the option to convert an existing zonal cluster into a regional one. For more details and step-by-step instructions, check out this page.
  • We’ve also included new documentation to guide you through the proper migration of a cluster created with hetzner-k3s v1.x to v2.x. This includes scenarios where your current cluster still relies on instance types that Hetzner has since deprecated. Take a look at this page for all the information and instructions you’ll need.

Upgrading from v2.2.2

  • Update the location property in the masters pool to locations and make it an array of locations. Up to v2.2.2, clusters could only be created with all masters in one location, so the change is mechanical. For example, if the location was fsn1, the configuration should now look like:
masters_pool:
  locations:
    - fsn1
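
For a regional cluster (see the notes above), the same array can simply list several European locations. A minimal sketch, with the additional locations chosen purely as examples:

masters_pool:
  locations:
    - fsn1
    - nbg1   # example additional locations
    - hel1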

v2.2.2

30 Jan 17:12

Improvements

  • We now use the kube context of the first master to install the software, then only switch to the load balancer context at the very end, if it’s available. This approach helps because the load balancer might take some time to become healthy, which could otherwise slow down the installation process.
  • Added an exponential backoff mechanism for cases where instance creation fails, such as when the selected instance types aren’t available in the chosen locations. This should help handle temporary issues more effectively.
  • Added a new --force option to the delete command. If you set it to true, the cluster will be deleted without any prompts. This is really handy for automated operations.
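
For example, a non-interactive deletion could look like the line below. This is only a sketch: --config is the usual way of passing the cluster configuration file, and the explicit boolean follows the wording above; the exact form may differ, so check the delete command's help output.

hetzner-k3s delete --config cluster_config.yaml --force true   # config file name and flag form are examples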

Fixes

  • Fixed an issue where the create command would time out before setting up the cluster autoscaler. This happened when there were no static worker node pools configured.
  • Fixed an issue that surfaced when using an existing private network with a subnet size other than /16 - by @ValentinVoigt

Upgrading from v2.1.0

See instructions for v2.2.0.

v2.2.1

29 Jan 15:22

Improvements

  • We now use the kube context of the first master to install the software, then only switch to the load balancer context at the very end, if it’s available. This approach helps because the load balancer might take some time to become healthy, which could otherwise slow down the installation process.
  • Added an exponential backoff mechanism for cases where instance creation fails, such as when the selected instance types aren’t available in the chosen locations. This should help handle temporary issues more effectively.
  • Added a new --force option to the delete command. If you set it to true, the cluster will be deleted without any prompts. This is really handy for automated operations.

Fixes

  • Fixed an issue where the create command would time out before setting up the cluster autoscaler. This happened when there were no static worker node pools configured.

Upgrading from v2.1.0

See instructions for v2.2.0.

v2.2.0

28 Jan 02:28

New

  • Added support for the Singapore location.
  • We’ve reintroduced the option to create a load balancer for the Kubernetes API, but this time it’s optional and turned off by default. If you want to use it, enable it by setting create_load_balancer_for_the_kubernetes_api: true. Just a heads-up: the load balancer was removed a few versions back because Hetzner firewalls can’t yet be applied to load balancers, which means you can’t restrict access to the Kubernetes API when it’s exposed through one. However, since some users asked for it, we’ve brought it back for flexibility.

Fixes

  • Fixed a problem that caused extra placement groups to be created.
  • Resolved an issue where pagination was missing when fetching SSH keys in projects with more than 25 keys.
  • Fixed the assignment of labels and taints to nodes.

Improvements

  • We removed the library we were using for SSH sessions because it occasionally caused hard-to-diagnose issues with certain keys. We now use the standard ssh binary that ships with the operating system to run commands on remote nodes, which should prevent the compatibility problems that popped up with some keys or environments.
  • The cached list of available k3s versions now refreshes automatically if the cache is older than 7 days.
  • The system now waits for at least one worker node to be ready before installing the Cluster Autoscaler. This prevents premature autoscaling when creating a new cluster. Previously, the Cluster Autoscaler was installed before worker nodes were ready, which could trigger autoscaling as soon as pending pods were detected. Reference.
  • For consistency, autoscaled node pools now include the cluster name as a prefix in node names, similar to static node pools.
  • Added a confirmation prompt before deleting a cluster to avoid accidental deletion when using the wrong config file.
  • Clusters are now protected from deletion by default as an additional measure to prevent accidentally deleting the wrong one. If you're working with test or temporary clusters and need to delete them, you can disable this protection by setting protect_against_deletion: false in the configuration file.
  • Added a confirmation prompt before upgrading a cluster to prevent accidentally upgrading the wrong cluster.
  • Improved exception handling during the software installation phase. Previously, a failure in installing a software component could stop the setup of worker nodes.
  • Disabled the local-path storage class by default to avoid conflicts where k3s automatically sets it as the default storage class.
  • The tool no longer opens firewall ports for the embedded registry mirror if a private network is available.
  • Made the image tag for the Cluster Autoscaler customizable using the setting manifests.cluster_autoscaler_container_image_tag (see the sketch after this list).
  • Autoscaled nodes are now considered when determining upgrade concurrency.
  • Added error and debugging information when SSH sessions to nodes fail.
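
For reference, the deletion protection and Cluster Autoscaler image tag settings mentioned above would sit in the configuration file roughly as follows. This is a sketch: the nesting is inferred from the dotted setting name, and the tag value is only a placeholder.

protect_against_deletion: false   # default is true; disable only for test/temporary clusters
manifests:
  cluster_autoscaler_container_image_tag: "v1.32.0"   # example tag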

Miscellaneous

  • Upgraded the System Upgrade Controller to the latest version.
  • Upgraded the Hetzner CSI Driver to the latest version.
  • Upgraded the Hetzner Cloud Controller Manager to the latest version.
  • Upgraded the Cluster Autoscaler to the latest version.
  • Upgraded Cilium to the latest version.

Upgrading from v2.1.0

  • If you have active autoscaled node pools (pools with one or more nodes currently in the cluster), you need to set the property include_cluster_name_as_prefix to false for those pools due to the naming convention change mentioned earlier.
  • If you are using the local-path storage class, you need to set local_path_storage_class.enabled to true.
  • If you'd rather use a load balancer for the Kubernetes API instead of constantly switching between contexts, you can enable it by setting create_load_balancer_for_the_kubernetes_api: true. After that, just run the create command to set up the load balancer.
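
Putting these three changes together, the relevant parts of the configuration could look roughly like this. Apart from the three setting names quoted above, the pool fields and values are illustrative placeholders; keep your existing pool definitions.

create_load_balancer_for_the_kubernetes_api: true

local_path_storage_class:
  enabled: true

worker_node_pools:
  - name: autoscaled-workers        # example pool name
    instance_type: cpx31            # illustrative
    location: fsn1                  # illustrative
    include_cluster_name_as_prefix: false   # required for pools that already have autoscaled nodes
    autoscaling:
      enabled: true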

v2.1.0

13 Jan 18:26

Improvements

  • This update lets different types of instances coexist within the same node pool. This makes it easier for older clusters to transition from the 1.1.5 naming scheme, which included the instance type in node names, to the newer 2.x scheme that doesn’t include this detail.

Upgrading

Important: See notes for v2.0.0 if you are upgrading from v1.1.5.

v2.0.9

07 Nov 15:46

Miscellaneous

  • Switched to the new official autoscaler image (by @ghaering)

Upgrading

Important: See notes for v2.0.0 if you are upgrading from v1.1.5.

v2.0.8

30 Aug 19:22

Fixed

  • Fixed an issue preventing correct detection of the private network interface for autoscaled nodes

Upgrading

Important: See notes for v2.0.0 if you are upgrading from v1.1.5.