Commit

Merge branch 'master' into rke2-prov-vsphere

btat committed May 10, 2022
2 parents 0ef065f + f1b6a38 commit 57a2e45

Showing 70 changed files with 4,940 additions and 475 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -11,3 +11,5 @@ package-lock.json
/scripts/converters/results_to_markdown/.terraform
/scripts/converters/results_to_markdown/terraform.tfstate*
/scripts/converters/results_to_markdown/*.tfvars

.idea/
2 changes: 1 addition & 1 deletion config.toml
@@ -209,4 +209,4 @@ pre = "<i class='material-icons'>keyboard_arrow_down</i>"
[[menu.main]]
name = "Partners"
url = "https://rancher.com/partners/"
parent = "about"
parent = "about"
4 changes: 2 additions & 2 deletions content/k3s/latest/en/_index.md
@@ -26,7 +26,7 @@ K3s is a fully compliant Kubernetes distribution with the following enhancements
* Secure by default with reasonable defaults for lightweight environments.
* Simple but powerful "batteries-included" features have been added, such as: a local storage provider, a service load balancer, a Helm controller, and the Traefik ingress controller.
* Operation of all Kubernetes control plane components is encapsulated in a single binary and process. This allows K3s to automate and manage complex cluster operations like distributing certificates.
* External dependencies have been minimized (just a modern kernel and cgroup mounts needed). K3s packages required dependencies, including:
* External dependencies have been minimized (just a modern kernel and cgroup mounts needed). K3s packages the required dependencies, including:
* containerd
* Flannel
* CoreDNS
@@ -38,4 +38,4 @@ K3s is a fully compliant Kubernetes distribution with the following enhancements

# What's with the name?

We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a 10-letter word stylized as K8s. So something half as big as Kubernetes would be a 5-letter word stylized as K3s. There is no long form of K3s and no official pronunciation.
We wanted an installation of Kubernetes that was half the size in terms of memory footprint. Kubernetes is a 10-letter word stylized as K8s. So something half as big as Kubernetes would be a 5-letter word stylized as K3s. There is no long form of K3s and no official pronunciation.
68 changes: 37 additions & 31 deletions content/k3s/latest/en/advanced/_index.md
@@ -17,11 +17,11 @@ This section contains advanced information describing the different ways you can
- [Node labels and taints](#node-labels-and-taints)
- [Starting the server with the installation script](#starting-the-server-with-the-installation-script)
- [Additional preparation for Alpine Linux setup](#additional-preparation-for-alpine-linux-setup)
- [Additional preparation for (Red Hat/CentOS) Enterprise Linux](#additional-preparation-for-red-hat/centos-enterprise-linux)
- [Additional preparation for Raspberry Pi OS Setup](#additional-preparation-for-raspberry-pi-os-setup)
- [Enabling vxlan on Ubuntu 21.10+ on Raspberry Pi](#enabling-vxlan-on-ubuntu-2110-on-raspberry-pi)
- [Running K3d (K3s in Docker) and docker-compose](#running-k3d-k3s-in-docker-and-docker-compose)
- [Enabling legacy iptables on Raspbian Buster](#enabling-legacy-iptables-on-raspbian-buster)
- [Enabling cgroups for Raspbian Buster](#enabling-cgroups-for-raspbian-buster)
- [SELinux Support](#selinux-support)
- [Additional preparation for (Red Hat/CentOS) Enterprise Linux](#additional-preparation-for-red-hat-centos-enterprise-linux)
- [Enabling Lazy Pulling of eStargz (Experimental)](#enabling-lazy-pulling-of-estargz-experimental)
- [Additional Logging Sources](#additional-logging-sources)
- [Server and agent tokens](#server-and-agent-tokens)
@@ -143,7 +143,7 @@ K3s will generate config.toml for containerd in `/var/lib/rancher/k3s/agent/etc/

For advanced customization of this file, you can create another file called `config.toml.tmpl` in the same directory, and it will be used instead.

The `config.toml.tmpl` will be treated as a Go template file, and the `config.Node` structure is being passed to the template. [This template](https://github.com/rancher/k3s/blob/master/pkg/agent/templates/templates.go#L16-L32) example on how to use the structure to customize the configuration file.
The `config.toml.tmpl` will be treated as a Go template file, and the `config.Node` structure is passed to the template. See [this folder](https://github.com/k3s-io/k3s/blob/master/pkg/agent/templates) for Linux and Windows examples of how to use the structure to customize the configuration file.
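
One simple workflow, for example, is to copy the generated file and customize the copy (a sketch; paths assume the default K3s data directory):

```
cp /var/lib/rancher/k3s/agent/etc/containerd/config.toml \
   /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
# Edit config.toml.tmpl, then restart K3s so containerd picks it up.
systemctl restart k3s
```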


# Running K3s with Rootless mode (Experimental)
@@ -256,6 +256,39 @@ Then update the config and reboot:
update-extlinux
reboot
```
# Additional preparation for (Red Hat/CentOS) Enterprise Linux

It is recommended to turn off firewalld:
```
systemctl disable firewalld --now
```

If the nm-cloud-setup service is enabled, you must disable it and reboot the node:
```
systemctl disable nm-cloud-setup.service nm-cloud-setup.timer
reboot
```

# Additional preparation for Raspberry Pi OS Setup
## Enabling legacy iptables on Raspberry Pi OS
Raspberry Pi OS (formerly Raspbian) defaults to using `nftables` instead of `iptables`. **K3s** networking features require `iptables` and do not work with `nftables`. Follow the steps below to configure Raspberry Pi OS to use legacy `iptables`:
```
sudo iptables -F
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo reboot
```

## Enabling cgroups for Raspberry Pi OS

Standard Raspberry Pi OS installations do not start with `cgroups` enabled. **K3s** needs `cgroups` to start the systemd service. `cgroups` can be enabled by appending `cgroup_memory=1 cgroup_enable=memory` to `/boot/cmdline.txt`.
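
For example, the flags can be appended to the single kernel command line like this (a sketch; back up the file first):

```
sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
sudo reboot
```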

# Enabling vxlan on Ubuntu 21.10+ on Raspberry Pi

Starting with Ubuntu 21.10, vxlan support on Raspberry Pi has been moved into a separate kernel module.
```
sudo apt install linux-modules-extra-raspi
```

# Running K3d (K3s in Docker) and docker-compose

@@ -293,20 +326,6 @@ Alternatively the `docker run` command can also be used:
--privileged rancher/k3s:vX.Y.Z


# Enabling legacy iptables on Raspbian Buster

Raspbian Buster defaults to using `nftables` instead of `iptables`. **K3s** networking features require `iptables` and do not work with `nftables`. Follow the steps below to configure **Buster** to use legacy `iptables`:
```
sudo iptables -F
sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo reboot
```

# Enabling cgroups for Raspbian Buster

Standard Raspbian Buster installations do not start with `cgroups` enabled. **K3s** needs `cgroups` to start the systemd service. `cgroups` can be enabled by appending `cgroup_memory=1 cgroup_enable=memory` to `/boot/cmdline.txt`.

### Example of /boot/cmdline.txt
```
console=serial0,115200 console=tty1 root=PARTUUID=58b06195-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait cgroup_memory=1 cgroup_enable=memory
@@ -365,19 +384,6 @@ Using a custom `--data-dir` under SELinux is not supported. To customize it, you
{{%/tab%}}
{{% /tabs %}}

# Additional preparation for (Red Hat/CentOS) Enterprise Linux

It is recommended to turn off firewalld:
```
systemctl disable firewalld --now
```

If the nm-cloud-setup service is enabled, you must disable it and reboot the node:
```
systemctl disable nm-cloud-setup.service nm-cloud-setup.timer
reboot
```

# Enabling Lazy Pulling of eStargz (Experimental)

### What's lazy pulling and eStargz?
@@ -46,5 +46,4 @@ token: "secret"
node-ip: 10.0.10.22,2a05:d012:c6f:4655:d73c:c825:a184:1b75
cluster-cidr: 10.42.0.0/16,2001:cafe:42:0::/56
service-cidr: 10.43.0.0/16,2001:cafe:42:1::/112
disable-network-policy: true
```
@@ -44,7 +44,7 @@ K3s performance depends on the performance of the database. To ensure optimal speed

The K3s server needs port 6443 to be accessible by all nodes.

The nodes need to be able to reach other nodes over UDP port 8472 when Flannel VXLAN is used. The node should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel. However, if you do not use Flannel and provide your own custom CNI, then port 8472 is not needed by K3s.
The nodes need to be able to reach other nodes over UDP port 8472 when the Flannel VXLAN backend is used, or over UDP port 51820 (plus 51821 when using IPv6) when the Flannel WireGuard backend is used. The node should not listen on any other port. K3s uses reverse tunneling such that the nodes make outbound connections to the server and all kubelet traffic runs through that tunnel. However, if you do not use Flannel and provide your own custom CNI, then the ports needed by Flannel are not needed by K3s.

If you wish to utilize the metrics server, you will need to open port 10250 on each node.

@@ -59,6 +59,8 @@ If you plan on achieving high availability with embedded etcd, server nodes must
|-----|-----|----------------|---|
| TCP | 6443 | K3s agent nodes | Kubernetes API Server
| UDP | 8472 | K3s server and agent nodes | Required only for Flannel VXLAN
| UDP | 51820 | K3s server and agent nodes | Required only for Flannel WireGuard backend
| UDP | 51821 | K3s server and agent nodes | Required only for Flannel WireGuard backend with IPv6
| TCP | 10250 | K3s server and agent nodes | Kubelet metrics
| TCP | 2379-2380 | K3s server nodes | Required only for HA with embedded etcd
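
As an illustration, the inbound rules for a server node could be opened with `ufw` as follows (a sketch; adapt to your firewall tooling and chosen backend):

```
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 8472/udp        # Flannel VXLAN only
sudo ufw allow 10250/tcp       # Kubelet metrics
sudo ufw allow 2379:2380/tcp   # HA with embedded etcd only
```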

18 changes: 13 additions & 5 deletions content/k3s/latest/en/installation/network-options/_index.md
@@ -19,6 +19,7 @@ If you wish to use WireGuard as your flannel backend it may require additional k
<span style="white-space: nowrap">`--flannel-backend=ipsec`</span> | Uses the IPSEC backend which encrypts network traffic. |
<span style="white-space: nowrap">`--flannel-backend=host-gw`</span> | Uses the host-gw backend. |
<span style="white-space: nowrap">`--flannel-backend=wireguard`</span> | Uses the WireGuard backend which encrypts network traffic. May require additional kernel modules and configuration. |
<span style="white-space: nowrap">`--flannel-ipv6-masq`</span> | Apply masquerading rules to IPv6 traffic (default for IPv4). Only applies to dual-stack or IPv6-only clusters. |
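
For example, a server using the WireGuard backend would be started like this (an illustrative invocation):

```
k3s server --flannel-backend=wireguard
```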

### Custom CNI

@@ -74,15 +75,22 @@ You should see that IP forwarding is set to true.

Dual-stack networking must be configured when the cluster is first created. It cannot be enabled on an existing single-stack cluster.

To enable dual-stack in k3s, you must provide valid dual-stack `cluster-cidr` and `service-cidr`, and set `disable-network-policy` on all server nodes. Both servers and agents must provide valid dual-stack `node-ip` settings. Node address auto-detection and network policy enforcement are not supported on dual-stack clusters when using the default flannel CNI. Besides, only vxlan backend is supported at the moment. This is an example of a valid configuration:
Dual-stack is supported on K3s v1.21 or above.

To enable dual-stack in K3s, you must provide valid dual-stack `cluster-cidr` and `service-cidr` on all server nodes. Both servers and agents must provide valid dual-stack `node-ip` settings. Node address auto-detection is not supported on dual-stack clusters, because the kubelet fetches only the first IP address that it finds. Additionally, only the vxlan backend is currently supported. This is an example of a valid configuration:

```
node-ip: 10.0.10.7,2a05:d012:c6f:4611:5c2:5602:eed2:898c
cluster-cidr: 10.42.0.0/16,2001:cafe:42:0::/56
service-cidr: 10.43.0.0/16,2001:cafe:42:1::/112
disable-network-policy: true
k3s server --node-ip 10.0.10.7,2a05:d012:c6f:4611:5c2:5602:eed2:898c --cluster-cidr 10.42.0.0/16,2001:cafe:42:0::/56 --service-cidr 10.43.0.0/16,2001:cafe:42:1::/112
```

Note that you can choose any valid `cluster-cidr` and `service-cidr` values; however, the `node-ip` values must correspond to the IP addresses of your main interface. Remember to allow IPv6 traffic if you are deploying in a public cloud.
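
Agents joining the cluster likewise pass dual-stack `node-ip` values, for example (server URL, token, and addresses are hypothetical):

```
k3s agent --server https://server.example.com:6443 --token <token> \
  --node-ip 10.0.10.8,2a05:d012:c6f:4611:5c2:5602:eed2:898d
```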

If you are using a custom CNI plugin, i.e. a CNI plugin other than Flannel, the previous configuration might not be enough to enable dual-stack in the plugin. Please check its documentation for how to enable dual-stack, and verify whether network policies can be enabled.

### IPv6 only installation

IPv6-only setup is supported on K3s v1.22 or above. Note that network policy enforcement is not supported on IPv6-only clusters when using the default flannel CNI. This is an example of a valid configuration:

```
k3s server --disable-network-policy
```
43 changes: 32 additions & 11 deletions content/k3s/latest/en/installation/private-registry/_index.md
@@ -32,22 +32,43 @@ mirrors:

Each mirror must have a name and a set of endpoints. When pulling an image from a registry, containerd will try these endpoint URLs one by one and use the first working one.

#### Rewrites

Each mirror can have a set of rewrites. Rewrites can change the name of an image based on a regular expression. This is useful if the organization/project structure in the mirror registry is different from the upstream one.

For example, the following configuration would transparently pull the image `docker.io/rancher/coredns-coredns:1.6.3` from `registry.example.com:5000/mirrorproject/rancher-images/coredns-coredns:1.6.3`:

```
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
    rewrite:
      "^rancher/(.*)": "mirrorproject/rancher-images/$1"
```

The image will still be stored under the original name, so that `crictl images` will show `docker.io/rancher/coredns-coredns:1.6.3` as available on the node, even though the image was pulled from the mirrored registry under a different name.
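
For example, after a pull you could confirm this on the node (a sketch):

```
# Expected to list the image under its upstream docker.io name,
# even though it was fetched from the mirror.
sudo crictl images | grep coredns
```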

### Configs

The configs section defines the TLS and credential configuration for each mirror. For each mirror you can define `auth` and/or `tls`. The TLS part consists of:
The `configs` section defines the TLS and credential configuration for each mirror. For each mirror you can define `auth` and/or `tls`.

The `tls` part consists of:

Directive | Description
----------|------------
`cert_file` | The client certificate path that will be used to authenticate with the registry
`key_file` | The client key path that will be used to authenticate with the registry
`ca_file` | Defines the CA certificate path to be used to verify the registry's server cert file
`insecure_skip_verify` | Boolean that defines if TLS verification should be skipped for the registry
| Directive | Description |
|------------------------|--------------------------------------------------------------------------------------|
| `cert_file` | The client certificate path that will be used to authenticate with the registry |
| `key_file` | The client key path that will be used to authenticate with the registry |
| `ca_file` | Defines the CA certificate path to be used to verify the registry's server cert file |
| `insecure_skip_verify` | Boolean that defines if TLS verification should be skipped for the registry |

The credentials consist of either username/password or authentication token:
The `auth` part consists of either username/password or authentication token:

- username: user name of the private registry basic auth
- password: user password of the private registry basic auth
- auth: authentication token of the private registry basic auth
| Directive | Description |
|------------|---------------------------------------------------------|
| `username` | user name of the private registry basic auth |
| `password` | user password of the private registry basic auth |
| `auth` | authentication token of the private registry basic auth |
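
For instance, a `configs` entry combining both parts might look like this sketch (registry address, credentials, and certificate path are placeholders):

```
configs:
  "registry.example.com:5000":
    auth:
      username: myuser        # hypothetical basic-auth credentials
      password: mypass
    tls:
      ca_file: /etc/ssl/certs/registry-ca.pem   # hypothetical CA path
```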

Below are basic examples of using private registries in different modes:

@@ -3,7 +3,7 @@ title: Opening Ports with firewalld
weight: 1
---

> We recommend disabling firewalld. For Kubernetes 1.19, firewalld must be turned off.
> We recommend disabling firewalld. For Kubernetes 1.19.x and higher, firewalld must be turned off.
Some distributions of Linux [derived from RHEL,](https://en.wikipedia.org/wiki/Red_Hat_Enterprise_Linux#Rebuilds) including Oracle Linux, may have default firewall rules that block communication with Helm.

@@ -19,3 +19,22 @@ Certificates can be rotated for the following services:
- kube-scheduler
- kube-controller-manager


### Certificate Rotation

Rancher-launched Kubernetes clusters can rotate their auto-generated certificates through the UI.

1. In the **Global** view, navigate to the cluster for which you want to rotate certificates.

2. Select **⋮ > Rotate Certificates**.

3. Select which certificates you want to rotate:

* Rotate all Service certificates (keep the same CA)
* Rotate an individual service and choose one of the services from the drop-down menu

4. Click **Save**.

**Results:** The selected certificates will be rotated and the related services will be restarted to start using the new certificate.

> **Note:** Even though the RKE CLI can use custom certificates for the Kubernetes cluster components, Rancher currently doesn't support uploading them for Rancher-launched Kubernetes clusters.
@@ -6,7 +6,7 @@ aliases:
- /rancher/v2.5/en/cluster-provisioning/rke-clusters/options/cloud-providers
- /rancher/v2.x/en/cluster-provisioning/rke-clusters/cloud-providers/
---
A _cloud provider_ is a module in Kubernetes that provides an interface for managing nodes, load balancers, and networking routes. For more information, refer to the [official Kubernetes documentation on cloud providers.](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/)
A _cloud provider_ is a module in Kubernetes that provides an interface for managing nodes, load balancers, and networking routes.

When a cloud provider is set up in Rancher, the Rancher server can automatically provision new nodes, load balancers or persistent storage devices when launching Kubernetes definitions, if the cloud provider you're using supports such automation.

@@ -39,9 +39,9 @@ For details on enabling the vSphere cloud provider, refer to [this page.](./vsphere)

### Setting up a Custom Cloud Provider

The `Custom` cloud provider is available if you want to configure any [Kubernetes cloud provider](https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/).
The `Custom` cloud provider is available if you want to configure any Kubernetes cloud provider.

For the custom cloud provider option, you can refer to the [RKE docs]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/) on how to edit the yaml file for your specific cloud provider. There are specific cloud providers that have more detailed configuration :
For the custom cloud provider option, you can refer to the [RKE docs]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/) on how to edit the yaml file for your specific cloud provider. There are specific cloud providers that have more detailed configuration:

* [vSphere]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/vsphere/)
* [OpenStack]({{<baseurl>}}/rke/latest/en/config-options/cloud-providers/openstack/)
@@ -13,7 +13,10 @@ This page covers how to install the Cloud Provider Interface (CPI) and Cloud Sto

# Prerequisites

The vSphere version must be 7.0u1 or higher.
The supported vSphere versions are:

* 6.7u3
* 7.0u1 or higher

The Kubernetes version must be 1.19 or higher.

Expand Down