samples: add proxmox #149

Open · wants to merge 2 commits into `main`
26 changes: 26 additions & 0 deletions samples/proxmox/README.md
@@ -0,0 +1,26 @@
# Importing a VM template into Proxmox

A standard KVM-optimized Ubuntu 22.04 cloud image can be imported as follows:

```bash
TEMPLATE_URL=https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64-disk-kvm.img
TEMPLATE_VMID=10000
TEMPLATE_NAME=ubuntu-22.04
TEMPLATE_STORAGE=tank
TEMPLATE_DISK_OPTIONS="discard=on,iothread=1,ssd=1"

curl -o template.img "${TEMPLATE_URL}"

qm create "${TEMPLATE_VMID}" --name "${TEMPLATE_NAME}" --memory 16
qm importdisk "${TEMPLATE_VMID}" template.img "${TEMPLATE_STORAGE}"
qm set "${TEMPLATE_VMID}" \
  --scsihw virtio-scsi-single \
  --scsi0 "${TEMPLATE_STORAGE}:vm-${TEMPLATE_VMID}-disk-0,${TEMPLATE_DISK_OPTIONS}" \
  --boot order=scsi0 \
  --cpu host \
  --rng0 source=/dev/urandom \
  --template 1 \
  --agent 1 \
  --onboot 1
```
Then, for Ubuntu to work properly, extend the `preK3sCommands` of both the `KThreesConfigTemplate` and the `KThreesControlPlane` with `apt update && apt -y install qemu-guest-agent && systemctl enable --now qemu-guest-agent`, for example:
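As a sketch, the worker `KThreesConfigTemplate` from this sample would then look like the following (illustrative only; the control-plane spec gets the same extra command):

```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: KThreesConfigTemplate
metadata:
  name: "${CLUSTER_NAME}-worker"
spec:
  template:
    spec:
      preK3sCommands:
        # Install and start the QEMU guest agent so Proxmox can report the VM's IP.
        - apt update && apt -y install qemu-guest-agent && systemctl enable --now qemu-guest-agent
        - mkdir -p /root/.ssh
        - chmod 700 /root/.ssh
        - echo "${VM_SSH_KEYS}" > /root/.ssh/authorized_keys
        - chmod 600 /root/.ssh/authorized_keys
```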
283 changes: 283 additions & 0 deletions samples/proxmox/cluster-template-k3s.yaml
@@ -0,0 +1,283 @@
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: "${CLUSTER_NAME}"
spec:
clusterNetwork:
pods:
cidrBlocks:
- 10.42.0.0/16
services:
cidrBlocks:
- 10.43.0.0/16
serviceDomain: cluster.local
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxCluster
name: "${CLUSTER_NAME}"
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: KThreesControlPlane
name: "${CLUSTER_NAME}-control-plane"
---
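# ProxmoxCluster: the control-plane endpoint (advertised by kube-vip) plus the IPv4 pool that node addresses are drawn from.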
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxCluster
metadata:
name: "${CLUSTER_NAME}"
spec:
controlPlaneEndpoint:
host: ${CONTROL_PLANE_ENDPOINT_IP}
port: 6443
ipv4Config:
addresses: ${NODE_IP_RANGES}
prefix: ${IP_PREFIX}
gateway: ${GATEWAY}
dnsServers: ${DNS_SERVERS}
allowedNodes: ${ALLOWED_NODES:=[]}
---
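# Machine template for control-plane VMs, cloned from the Proxmox template imported in the README.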
kind: ProxmoxMachineTemplate
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
metadata:
name: "${CLUSTER_NAME}-control-plane"
spec:
template:
spec:
sourceNode: "${PROXMOX_SOURCENODE}"
templateID: ${PROXMOX_TEMPLATE_VMID}
format: "qcow2"
full: true
numSockets: ${NUM_SOCKETS:=1}
numCores: ${NUM_CORES:=2}
memoryMiB: ${MEMORY_MIB:=2048}
disks:
bootVolume:
disk: ${BOOT_VOLUME_DEVICE:=scsi0}
sizeGb: ${BOOT_VOLUME_SIZE:=32}
network:
default:
bridge: ${BRIDGE}
model: virtio
---
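# Machine template for worker VMs; identical to the control-plane template apart from the default core count.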
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxMachineTemplate
metadata:
name: "${CLUSTER_NAME}-worker"
spec:
template:
spec:
sourceNode: "${PROXMOX_SOURCENODE}"
templateID: ${PROXMOX_TEMPLATE_VMID}
format: "qcow2"
full: true
numSockets: ${NUM_SOCKETS:=1}
numCores: ${NUM_CORES:=1}
memoryMiB: ${MEMORY_MIB:=2048}
disks:
bootVolume:
disk: ${BOOT_VOLUME_DEVICE:=scsi0}
sizeGb: ${BOOT_VOLUME_SIZE:=32}
network:
default:
bridge: ${BRIDGE}
model: virtio
---
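# KThreesControlPlane bootstraps the k3s servers; the kube-vip manifest below lands in k3s's auto-deploy directory and provides the control-plane VIP.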
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: KThreesControlPlane
metadata:
name: "${CLUSTER_NAME}-control-plane"
spec:
machineTemplate:
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxMachineTemplate
name: "${CLUSTER_NAME}-control-plane"
kthreesConfigSpec:
serverConfig:
# cloudProviderName: "external"
disableCloudController: false
disableComponents: ${K3S_DISABLE_COMPONENTS:=[]}
agentConfig:
nodeName: "{{ ds.meta_data.local_hostname }}"
kubeletArgs:
- "provider-id=proxmox://{{ ds.meta_data.instance_id }}"
files:
- path: /var/lib/rancher/k3s/server/manifests/kube-vip.yaml
owner: root:root
content: |
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-vip
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
name: system:kube-vip-role
rules:
- apiGroups: [""]
resources: ["services/status"]
verbs: ["update"]
- apiGroups: [""]
resources: ["services", "endpoints"]
verbs: ["list","get","watch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list","get","watch", "update", "patch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["list", "get", "watch", "update", "create"]
- apiGroups: ["discovery.k8s.io"]
resources: ["endpointslices"]
verbs: ["list","get","watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: system:kube-vip-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-vip-role
subjects:
- kind: ServiceAccount
name: kube-vip
namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/name: kube-vip-ds
app.kubernetes.io/version: v0.8.7
name: kube-vip-ds
namespace: kube-system
spec:
selector:
matchLabels:
app.kubernetes.io/name: kube-vip-ds
template:
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/name: kube-vip-ds
app.kubernetes.io/version: v0.8.7
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: node-role.kubernetes.io/master
operator: Exists
- matchExpressions:
- key: node-role.kubernetes.io/control-plane
operator: Exists
containers:
- args:
- manager
env:
- name: vip_arp
value: "true"
- name: vip_nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: vip_interface
value: ""
- name: address
value: ${CONTROL_PLANE_ENDPOINT_IP}
- name: port
value: ${CONTROL_PLANE_ENDPOINT_PORT="6443"}
- name: dns_mode
value: first
- name: cp_enable
value: "true"
- name: cp_namespace
value: kube-system
- name: svc_enable
value: "true"
- name: svc_leasename
value: plndr-svcs-lock
- name: vip_leaderelection
value: "true"
- name: vip_leasename
value: plndr-cp-lock
- name: vip_leaseduration
value: "5"
- name: vip_renewdeadline
value: "3"
- name: vip_retryperiod
value: "1"
- name: prometheus_server
value: :2112
- name: enableUPNP
value: "false"
image: ghcr.io/kube-vip/kube-vip:v0.8.7
imagePullPolicy: IfNotPresent
name: kube-vip
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
hostNetwork: true
serviceAccountName: kube-vip
tolerations:
- effect: NoSchedule
operator: Exists
- effect: NoExecute
operator: Exists
updateStrategy: {}
preK3sCommands:
- mkdir -p /root/.ssh
- chmod 700 /root/.ssh
- echo "${VM_SSH_KEYS}" > /root/.ssh/authorized_keys
- chmod 600 /root/.ssh/authorized_keys
replicas: ${CONTROL_PLANE_MACHINE_COUNT=1}
version: "${KUBERNETES_VERSION}"
---
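# Bootstrap template for workers; preK3sCommands install the SSH authorized key before k3s starts.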
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: KThreesConfigTemplate
metadata:
name: "${CLUSTER_NAME}-worker"
spec:
template:
spec:
preK3sCommands:
- mkdir -p /root/.ssh
- chmod 700 /root/.ssh
- echo "${VM_SSH_KEYS}" > /root/.ssh/authorized_keys
- chmod 600 /root/.ssh/authorized_keys
---
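# MachineDeployment manages the worker pool; scale it with 'kubectl scale machinedeployment' as noted in setup.sh.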
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: "${CLUSTER_NAME}-worker"
spec:
clusterName: "${CLUSTER_NAME}"
replicas: ${WORKER_MACHINE_COUNT=1}
selector:
matchLabels: {}
template:
metadata:
labels:
node-role.kubernetes.io/node: ""
spec:
clusterName: "${CLUSTER_NAME}"
version: "${KUBERNETES_VERSION}"
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: KThreesConfigTemplate
name: "${CLUSTER_NAME}-worker"
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: ProxmoxMachineTemplate
name: "${CLUSTER_NAME}-worker"
83 changes: 83 additions & 0 deletions samples/proxmox/setup.sh
@@ -0,0 +1,83 @@
#!/usr/bin/env bash
## Configure your Proxmox parameters. Run this script from the repository root.

if [ -z "${CLUSTER_NAME}" ]; then
  echo "Please set CLUSTER_NAME"
  exit 1
fi

if [ -z "${KUBERNETES_VERSION}" ]; then
  echo "Please set KUBERNETES_VERSION, e.g. 'v1.31.2+k3s1'"
  exit 1
fi

if [ -z "${CONTROL_PLANE_ENDPOINT_IP}" ]; then
  echo "Please set CONTROL_PLANE_ENDPOINT_IP, e.g. '10.10.10.4'"
  exit 1
fi

if [ -z "${NODE_IP_RANGES}" ] || [ -z "${GATEWAY}" ] || [ -z "${IP_PREFIX}" ] || [ -z "${DNS_SERVERS}" ] || [ -z "${BRIDGE}" ]; then
  echo "Please set NODE_IP_RANGES, e.g. '[10.10.10.5-10.10.10.50]'"
  echo "Please set GATEWAY, e.g. '10.10.10.1'"
  echo "Please set IP_PREFIX, e.g. '24'"
  echo "Please set DNS_SERVERS, e.g. '[8.8.8.8,8.8.4.4]'"
  echo "Please set BRIDGE, e.g. 'vmbr0'"
  exit 1
fi

if [ -z "${PROXMOX_URL}" ] || [ -z "${PROXMOX_TOKEN}" ] || [ -z "${PROXMOX_SECRET}" ] || [ -z "${PROXMOX_SOURCENODE}" ] || [ -z "${PROXMOX_TEMPLATE_VMID}" ]; then
  echo "Please set PROXMOX_URL, PROXMOX_TOKEN, PROXMOX_SECRET, PROXMOX_SOURCENODE, and PROXMOX_TEMPLATE_VMID"
  echo "- See https://github.com/ionos-cloud/cluster-api-provider-proxmox/blob/main/docs/Usage.md"
  exit 1
fi

# The device used for the boot disk.
export BOOT_VOLUME_DEVICE="scsi0"
# The size of the boot disk in GB.
export BOOT_VOLUME_SIZE="32"
# The number of sockets for the VMs.
export NUM_SOCKETS="1"
# The number of cores for the VMs.
export NUM_CORES="1"
# The memory size for the VMs, in MiB.
export MEMORY_MIB="4096"

# K3s components to disable
# For example, if you plan to use MetalLB instead of ServiceLB, or Longhorn instead of local-storage.
# export K3S_DISABLE_COMPONENTS="[servicelb,local-storage,traefik,metrics-server,helm-controller]"

## Configure clusterctl with your cluster-api-k3s provider
mkdir -p ~/.cluster-api
envsubst < samples/clusterctl.yaml > ~/.cluster-api/clusterctl.yaml

# Register the in-cluster IPAM provider, which allocates node IPs from NODE_IP_RANGES.
cat >> ~/.cluster-api/clusterctl.yaml <<EOC
- name: "in-cluster"
url: https://github.com/kubernetes-sigs/cluster-api-ipam-provider-in-cluster/releases/latest/ipam-components.yaml
type: "IPAMProvider"
EOC

clusterctl init \
--infrastructure proxmox \
--bootstrap k3s \
--control-plane k3s \
--ipam in-cluster

kubectl wait --for=condition=Available --timeout=5m \
-n capi-system deployment/capi-controller-manager
kubectl wait --for=condition=Available --timeout=5m \
-n capi-k3s-control-plane-system deployment/capi-k3s-control-plane-controller-manager
kubectl wait --for=condition=Available --timeout=5m \
-n capi-k3s-bootstrap-system deployment/capi-k3s-bootstrap-controller-manager
kubectl wait --for=condition=Available --timeout=5m \
-n capmox-system deployment/capmox-controller-manager

clusterctl generate cluster \
"${CLUSTER_NAME}" \
--from samples/proxmox/cluster-template-k3s.yaml \
| kubectl apply -f -

echo "Once the cluster is up, run 'clusterctl get kubeconfig $CLUSTER_NAME > k3s.yaml' to retrieve your kubeconfig"
echo "- Run 'kubectl scale kthreescontrolplane $CLUSTER_NAME-control-plane --replicas 3' to enable HA for your control plane"
echo "- Or run 'kubectl scale machinedeployment $CLUSTER_NAME-worker --replicas 3' to deploy worker nodes"
echo "- Or, to keep a single-node cluster, you may also need to run:"
echo "    kubectl taint nodes --all node-role.kubernetes.io/control-plane-"
echo "    kubectl taint nodes --all node-role.kubernetes.io/master-"
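For reference, a minimal invocation sketch, assuming an existing management cluster and that you run from the repository root; every value below is illustrative:

```bash
# All values are examples only; substitute your own environment.
export CLUSTER_NAME="demo"
export KUBERNETES_VERSION="v1.31.2+k3s1"
export CONTROL_PLANE_ENDPOINT_IP="10.10.10.4"
export NODE_IP_RANGES="[10.10.10.5-10.10.10.50]"
export GATEWAY="10.10.10.1"
export IP_PREFIX="24"
export DNS_SERVERS="[8.8.8.8,8.8.4.4]"
export BRIDGE="vmbr0"
# Hypothetical Proxmox credentials; see the CAPMOX usage docs linked above.
export PROXMOX_URL="https://pve.example.com:8006"
export PROXMOX_TOKEN="capi@pve!token-id"
export PROXMOX_SECRET="00000000-0000-0000-0000-000000000000"
export PROXMOX_SOURCENODE="pve1"
export PROXMOX_TEMPLATE_VMID="10000"

bash samples/proxmox/setup.sh
```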