- Needs Docker installed
- Deploys everything via hyperkube as containers
- Can deploy onto a single host with Docker or, for HA, onto an existing K8s cluster (likely needs Docker as the CRE)
  - Easy to deploy etcd, controlplane, and worker on a single node
  - Deploying the first node and, it seems, adding a node to the cluster use the same command, but with --etcd --controlplane (registration command sketched below)
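A sketch of what that registration command looks like for a Rancher custom cluster; the version tag, server URL, token, and checksum are placeholders copied from the Rancher UI, and the role flags at the end pick which roles the node takes:

sudo docker run -d --privileged --restart=unless-stopped --net=host \
  -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run \
  rancher/rancher-agent:<version> \
  --server https://<rancher-url> --token <registration-token> --ca-checksum <checksum> \
  --etcd --controlplane --worker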
Note: For some reason, K8s trying to mount an RBD image into a pod doesn’t cause the rbd kernel module to load. Use "lsmod | grep rbd" to check it and "sudo modprobe rbd" to load it non-persistently.
- Need to ensure the rbd kernel module is loaded on boot:
  sudo bash -c "echo rbd > /etc/modules-load.d/rbd.conf"
- Create the secrets with the command line first; they cannot be created through the UI (see the kubectl example after the callouts below).
Caution: Creating secrets through the UI seems to only create them as type=Opaque. For the RBD SC, they need to be type=kubernetes.io/rbd. If not, PVC provisioning will fail with "Cannot get secret of type kubernetes.io/rbd".
Important: When creating secrets through the Rancher UI, do not base64 encode the admin and user secrets.
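A minimal sketch of creating the two Ceph secrets from the CLI with the right type; the secret names, namespaces, and Ceph users are assumptions for illustration:

# Admin secret, in the namespace the SC's adminSecretNamespace will point at
kubectl create secret generic ceph-admin-secret --type="kubernetes.io/rbd" \
  --from-literal=key="$(sudo ceph auth get-key client.admin)" -n kube-system

# User secret, in the namespace where the PVCs will live
kubectl create secret generic ceph-user-secret --type="kubernetes.io/rbd" \
  --from-literal=key="$(sudo ceph auth get-key client.kube)" -n default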
- Select the cluster → Storage → Storage Classes
- "Add Class"
- Click in the "Provisioner" field
- Enter: kubernetes.io/rbd
- Select "kubernetes.io/rbd (Custom)"
- Fill in the fields under Parameters (a rough YAML equivalent is sketched below)
  - Will only accept an imageFormat of 1 or 2
  - The secret fields need the name of the deployed secret, not the key alone
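A rough YAML equivalent of that StorageClass, assuming the secret names from the example above and placeholder monitor/pool values:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: <mon-ip>:6789
  pool: kube
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-user-secret
  imageFormat: "2"
  imageFeatures: layering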
- Resources → Workloads → Deploy
- Select Global → <cluster name> to work on non-namespaced resources (e.g. StorageClass)
- Select Global → <cluster name> → <namespace> to work on namespaced resources (e.g. Pods)
- PVCs are found under Resources → Workloads → Volumes (sample PVC below)
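A minimal PVC sketch against that class; the claim name, size, and class name are assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd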
- Don’t need an ingress controller just to assign open LB IPs to a svc (sample svc below)
- An allocated LB svc IP doesn’t seem to get plumbed at the O/S level (or perhaps, more specifically, in the host network namespace)
  - Doesn’t respond to ping
  - Can be verified with: nmap -Pn <LB IP>
  - The allocated public ports do seem to show as open
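A minimal LoadBalancer svc sketch; the name, selector, and ports are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: demo-lb
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080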
- Can use a single authentication service for all clusters
- Engineers don’t each need a Google account just to get enterprise-class access to clusters in GKE
- Will Chan says just sell Rancher support, nothing else. Everything else is add-on value.
- Hosted Rancher service runs on AWS for customers who want the control plane to be 100% managed.
  - 99.9% uptime SLA for the control plane
  - Add-on to the Platinum subscription
- Submariner connects the overlay networks of multiple clusters
- Fleet is "GitOps at scale": create a pipeline that runs when a git repo changes and pushes the change out to millions of K3s clusters (GitRepo sketch below)
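A minimal sketch of pointing Fleet at a repo, assuming a standard Fleet install with the fleet-default namespace; the repo URL, branch, and paths are placeholders:

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: demo-repo
  namespace: fleet-default
spec:
  repo: https://github.com/<org>/<repo>
  branch: main
  paths:
    - manifests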
- RKE:
  - Deploys from a single yaml file called cluster.yml (minimal example below)
  - Create the file by downloading the rke binary from GitHub, then running:
    rke config
  - When the cluster.yml file is ready, install with:
    rke up
    (likely need to run it from the same directory as cluster.yml)
  - kube_config_cluster.yml is the kubeconfig file for the cluster
  - cluster.yml can be updated and re-applied to a running cluster
  - Can generate Let’s Encrypt certificates
  - Saves etcd backups locally and to S3-compatible storage
  - Upgrading is done by downloading a newer version of the rke binary from GitHub, then:
    rke up
  - Can modify the order and method for upgrading the nodes
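A minimal cluster.yml sketch for a single all-role node; the address, user, and key path are placeholders:

nodes:
  - address: <node-ip>
    user: <ssh-user>
    ssh_key_path: ~/.ssh/id_rsa
    role: [controlplane, etcd, worker]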
- K3s:
  - Deploy with:
    curl -sfL https://get.k3s.io | sh -
  - /etc/rancher/k3s/k3s.yaml is the kubeconfig file (agent-join sketch below)
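For reference, a sketch of joining a second node as an agent with the same script; the server address and token are placeholders (the token normally lives at /var/lib/rancher/k3s/server/node-token on the server):

curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<node-token> sh -

# On the server, point kubectl at the generated kubeconfig
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes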
- To update the SANs on an existing certificate:
  /usr/bin/certbot certonly --cert-name rancher-demo.susealliances.com --dns-route53 -d rancher-demo.susealliances.com -d egress.susealliances.com -d samip-demo.susealliances.com
- NOTE:
  - Need the AWS credentials available. Exporting them as environment variables is pretty easy (see below).
  - This renews the cert as well. The cert can only be renewed five times per week.
  - The first SAN specified with -d is chosen as the CN in the cert, no matter what the name of the cert is.
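A sketch of the environment variables the dns-route53 plugin picks up (via boto3); the values are placeholders:

export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>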
- https://dev.to/nabbisen/let-s-encrypt-wildcard-certificate-with-certbot-plo
- To view the cert:
  openssl x509 -noout -text -in /etc/letsencrypt/live/rancher-demo.susealliances.com/fullchain.pem | less