
Ansible Playbooks

├── playbooks
│   ├── tasks
│   │   └── reboot.yml
│   ├── vars
│   │   └── config.yml
│   ├── k3s-pre-install.yml
│   ├── k3s-install.yml
│   ├── k3s-post-install.yml
│   ├── k3s-uninstall.yml
│   ├── k3s-packages-install.yml
│   ├── k3s-packages-uninstall.yml
│   ├── update-deskpis.yml
│   ├── reboot-deskpis.yml
│   └── shutdown-deskpis.yml

This structure houses the Ansible playbooks.

k3s-pre-install.yml

This playbook prepares the cluster for k3s installation.

From the project root directory, run:

ansible-playbook playbooks/k3s-pre-install.yml

Synopsis

This playbook will, on every host in the cluster:

On the Control Plane / Master Node this playbook will also:

k3s-install.yml

This playbook installs the k3s cluster.

Configuration

The cluster configuration is largely contained within config.yml and consists of the following items:

  • A kubelet configuration that enables Graceful Node Shutdown (see the sketch after this list).
  • Extra arguments for the k3s server installation (i.e. the Control Plane / Master Node):
    • --write-kubeconfig-mode '0644' gives read permission on the kubeconfig file (located at /etc/rancher/k3s/k3s.yaml).
    • --disable servicelb disables the default service load balancer installed by k3s (i.e. the Klipper Load Balancer); MetalLB will be installed in a later step instead.
    • --disable traefik disables the default ingress controller installed by k3s (i.e. Traefik); Traefik will be installed manually in a later step instead.
    • --kubelet-arg 'config=/etc/rancher/k3s/kubelet.config' points to the kubelet configuration (see above).
    • --kube-scheduler-arg 'bind-address=0.0.0.0' binds the Kube Scheduler metrics endpoint to 0.0.0.0 so it can be scraped.
    • --kube-proxy-arg 'metrics-bind-address=0.0.0.0' binds the Kube Proxy metrics endpoint to 0.0.0.0 so it can be scraped.
    • --kube-controller-manager-arg 'bind-address=0.0.0.0' binds the Kube Controller Manager metrics endpoint to 0.0.0.0 so it can be scraped.
    • --kube-controller-manager-arg 'terminated-pod-gc-threshold=10' sets a limit of 10 terminated pods that can exist before the garbage collector starts deleting them.
  • Extra arguments for the k3s agent installation (i.e. the Worker Nodes):
    • --node-label 'node_type=worker' adds a custom label to the worker node.
    • --kubelet-arg 'config=/etc/rancher/k3s/kubelet.config' points to the kubelet configuration (see above).
    • --kube-proxy-arg 'metrics-bind-address=0.0.0.0' binds the Kube Proxy metrics endpoint to 0.0.0.0 so it can be scraped.
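As an illustration, the relevant part of vars/config.yml might look roughly like the sketch below. The variable names (kubelet_config, extra_server_args, extra_agent_args) and the grace-period values are assumptions for illustration; the actual file may use different names and values.

# Kubelet configuration written to /etc/rancher/k3s/kubelet.config.
# Graceful Node Shutdown gives pods time to terminate cleanly on shutdown.
kubelet_config: |
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  shutdownGracePeriod: 30s              # illustrative value
  shutdownGracePeriodCriticalPods: 10s  # illustrative value

# Extra arguments for the k3s server (Control Plane / Master Node).
extra_server_args: >-
  --write-kubeconfig-mode '0644'
  --disable servicelb
  --disable traefik
  --kubelet-arg 'config=/etc/rancher/k3s/kubelet.config'
  --kube-scheduler-arg 'bind-address=0.0.0.0'
  --kube-proxy-arg 'metrics-bind-address=0.0.0.0'
  --kube-controller-manager-arg 'bind-address=0.0.0.0'
  --kube-controller-manager-arg 'terminated-pod-gc-threshold=10'

# Extra arguments for the k3s agents (Worker Nodes).
extra_agent_args: >-
  --node-label 'node_type=worker'
  --kubelet-arg 'config=/etc/rancher/k3s/kubelet.config'
  --kube-proxy-arg 'metrics-bind-address=0.0.0.0'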

Installation

Run the k3s-install.yml playbook from the project root directory:

ansible-playbook playbooks/k3s-install.yml

Once the play completes, you can check whether the cluster was installed successfully by logging into the master node and running kubectl get nodes. You should see output similar to the following:

deskpi@deskpi1:~ $ kubectl get nodes
NAME      STATUS   ROLES                  AGE VERSION
deskpi1   Ready    control-plane,master   33s v1.25.6+k3s1
deskpi2   Ready    worker                 32s v1.25.6+k3s1
deskpi3   Ready    worker                 32s v1.25.6+k3s1
deskpi4   Ready    worker                 32s v1.25.6+k3s1
deskpi5   Ready    worker                 32s v1.25.6+k3s1
deskpi6   Ready    worker                 32s v1.25.6+k3s1

If something went wrong during the installation, you can check the installation log, which is saved to a file called k3s_install_log.txt in root's home directory.

deskpi@deskpi1:~ $ sudo -i
root@deskpi1:~# cat k3s_install_log.txt
[INFO]  Finding release for channel stable
[INFO]  Using v1.25.6+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/sha256sum-arm64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.25.6+k3s1/k3s-arm64
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
[INFO]  systemd: Starting k3s

You can uninstall k3s by running the k3s-uninstall.yml playbook from the project root:

ansible-playbook playbooks/k3s-uninstall.yml

k3s-post-install.yml

This playbook executes several operations and installs several packages after k3s has been successfully installed on the DeskPi cluster.

Synopsis

This playbook does the following:

  • Configures kubectl autocompletion and creates the alias kc for kubectl (which the k3s installation script installs automatically) on every host in the cluster. A sketch of this step follows the list.
  • Installs Helm, the package manager for Kubernetes, which will be used to install other k8s packages.
  • Creates an NFS Storage Class, based on an NFS export, on the Control Plane.
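As a rough illustration, the autocompletion and alias step could be implemented with a task along these lines; the task name, file path and variables here are assumptions, not taken from the actual playbook:

# Hypothetical sketch -- adds kubectl completion and the kc alias to the
# user's ~/.bashrc on every host.
- name: Configure kubectl autocompletion and kc alias
  ansible.builtin.blockinfile:
    path: "/home/{{ ansible_user }}/.bashrc"
    marker: "# {mark} ANSIBLE MANAGED BLOCK - kubectl"
    block: |
      source <(kubectl completion bash)
      alias kc=kubectl
      complete -o default -F __start_kubectl kc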

Configuration

Configuration variables can be found in vars/config.yml. To configure the host as a local NFS Server, set the fact:

local_nfs_server: true

Alternatively, to set the location of a remote NFS Server, set the facts:

local_nfs_server: false
nfs_server: <ip_address_of_nfs>

In both cases ensure the path to the share is correct:

nfs_path: <path_to_share>
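For example, pointing the cluster at a remote NFS server might look like the following (the address and path are placeholders, not values from the repository):

local_nfs_server: false
nfs_server: 192.168.1.50      # placeholder address
nfs_path: /mnt/storage/k3s    # placeholder export path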

Installation

After k3s has been successfully set up on your cluster, you can run the post-install playbook from the project root:

ansible-playbook playbooks/k3s-post-install.yml

The tasks are tagged and can be run individually via the --tags argument to ansible-playbook.

For example, to install only Helm, run the playbook as follows:

ansible-playbook playbooks/k3s-post-install.yml --tags "helm" 

k3s-packages-install.yml

This playbook installs several packages to k3s:

Package        Tag
MetalLB        metallb
Cert-Manager   certmanager
Traefik        traefik
Linkerd        linkerd
Longhorn       longhorn

To install all packages, run the playbook from the project root:

ansible-playbook playbooks/k3s-packages-install.yml

Packages can be individually installed with the corresponding tag, for example:

ansible-playbook playbooks/k3s-packages-install.yml --tags "metallb,certmanager,traefik" 
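As an illustration of how one of these packages might be installed with Helm under its tag, a task could look roughly like this. The chart details shown are MetalLB's public chart; the task name and structure are assumptions, not taken from the actual playbook:

# Hypothetical sketch of a tagged, Helm-based package installation.
- name: Install MetalLB via Helm
  kubernetes.core.helm:
    name: metallb
    chart_ref: metallb
    chart_repo_url: https://metallb.github.io/metallb
    release_namespace: metallb-system
    create_namespace: true
  tags:
    - metallb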

k3s-packages-uninstall.yml

This playbook removes the packages installed to k3s with the k3s-packages-install playbook.

Package        Tag
Longhorn       longhorn
Traefik        traefik
Linkerd        linkerd
Cert-Manager   certmanager
MetalLB        metallb

To uninstall all packages, run the playbook from the project root:

ansible-playbook playbooks/k3s-packages-uninstall.yml

Packages can be individually uninstalled with the corresponding tag, for example:

ansible-playbook playbooks/k3s-packages-uninstall.yml --tags "metallb,certmanager,traefik" 

Additional Playbooks

update-deskpis.yml

Updates the software on all the Pis in the cluster and reboots them if required.

ansible-playbook playbooks/update-deskpis.yml

(Run from the project root directory)
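A minimal sketch of how such an update-and-reboot flow can be expressed in Ansible (task names and paths here are illustrative, not taken from the actual playbook):

- name: Update all packages
  ansible.builtin.apt:
    update_cache: true
    upgrade: dist
  become: true

- name: Check whether a reboot is required
  ansible.builtin.stat:
    path: /var/run/reboot-required
  register: reboot_required

- name: Reboot the Pi if required
  ansible.builtin.reboot:
    reboot_timeout: 300
  become: true
  when: reboot_required.stat.exists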

reboot-deskpis.yml

Reboots all the Pis in the cluster.

ansible-playbook playbooks/reboot-deskpis.yml

(Run from the project root directory)

shutdown-deskpis.yml

Shuts down all the Pis in the cluster.

ansible-playbook playbooks/shutdown-deskpis.yml

(Run from the project root directory)