Add scripts to set up CC with qemu on a single node

Scripts create a VM image with TDX support, install the required dependencies, and start QEMU with a confidential cluster running on it.

Signed-off-by: Archana Shinde <[email protected]>

@@ -0,0 +1,77 @@
# Deployment Guide on a TDX Baremetal Host

This guide describes how to set up an Intel TDX host on Ubuntu 24.04 and a TD VM with a
single-node Kubernetes cluster running on it.
Follow these instructions to set up the Intel TDX host, create a TD image, boot the TD, and run a
Kubernetes cluster within the TD.

### Prerequisites

These instructions apply to 4th Generation Intel® Xeon® Scalable Processors with Intel® TDX
activated and to all 5th Generation Intel® Xeon® Scalable Processors.

### Set up the host

We first need to install a generic Ubuntu 24.04 server image, install the packages that turn
the host OS into an Intel TDX-enabled host OS, and enable TDX in the BIOS.
Detailed instructions can be found here: [setup-tdx-host](https://github.com/canonical/tdx?tab=readme-ov-file#setup-tdx-host).

To set up your host, you will essentially need to do this:
```
$ curl -O https://raw.githubusercontent.com/canonical/tdx/noble-24.04/setup-tdx-host.sh
$ chmod +x setup-tdx-host.sh
$ sudo ./setup-tdx-host.sh
```

Once the above step is completed, you will need to reboot your machine and proceed to change the
BIOS settings to enable TDX.

Go to `Socket Configuration > Processor Configuration > TME, TME-MT, TDX`.

* Set `Memory Encryption (TME)` to `Enabled`
* Set `Total Memory Encryption Bypass` to `Enabled` (optional setting for best host OS and regular VM performance)
* Set `Total Memory Encryption Multi-Tenant (TME-MT)` to `Enabled`
* Set `TME-MT memory integrity` to `Disabled`
* Set `Trust Domain Extension (TDX)` to `Enabled`
* Set `TDX Secure Arbitration Mode Loader (SEAM Loader)` to `Enabled` (NOTE: this allows loading the Intel TDX Loader and Intel TDX Module from the ESP or BIOS)
* Set `TME-MT/TDX key split` to a non-zero value

Go to `Socket Configuration > Processor Configuration > Software Guard Extension (SGX)`.

* Set `SW Guard Extensions (SGX)` to `Enabled`

Save the BIOS settings and boot up. Verify that the host has TDX enabled using the `dmesg` command:
```
$ sudo dmesg | grep -i tdx
[ 1.523617] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-6.8.0-1004-intel root=UUID=f5524554-48b2-4edf-b0aa-3cebac84b167 ro kvm_intel.tdx=1 nohibernate nomodeset
[ 2.551768] virt/tdx: BIOS enabled: private KeyID range [16, 128)
[ 2.551773] virt/tdx: Disable ACPI S3. Turn off TDX in the BIOS to use ACPI S3.
[ 20.408972] virt/tdx: TDX module: attributes 0x0, vendor_id 0x8086, major_version 2, minor_version 0, build_date 20231112, build_num 635
```

### Set up the guest

To set up a guest image that has a TDX kernel and all the binaries required for running
a k3s/k8s cluster, run the following script:

```
$ ./setup_cc.sh
```

After running the script, you should see an image named `tdx-guest-ubuntu-24.04-intel.qcow2`
in the current directory. The image is set up with the user `tdx` and password `123456`.

### Launch a Kubernetes cluster

The above step installs a helper script for starting a single-node Kubernetes cluster into the
home directory of the `tdx` user in the guest image.

The `setup_cc.sh` script should also have copied a script called `start-virt.sh` into the current directory.
Use it to start the TD VM and SSH into it:
```
$ ./start-virt.sh -i tdx-guest-ubuntu-24.04-intel.qcow2
```
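
Once the VM is up, you can optionally confirm from inside the guest that the kernel is
running as a TD before bringing up the cluster:
```
$ sudo dmesg | grep -i tdx
```
On recent kernels this should print a line such as `tdx: Guest detected`, though the exact
output varies with kernel version.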

Once you have logged in to the TD VM, run the following script to start a single-node Kubernetes cluster:
```
$ sudo -E /home/tdx/create_k8s_node.sh
```
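
Once the script finishes, you can verify the cluster from within the TD with standard
`kubectl` commands, for example:
```
$ kubectl get nodes -o wide
$ kubectl get pods -A
```
The node should report `Ready` once the CNI pods are up.

The script initializes the cluster with `kubeadm init --config kubeadm-config.yaml`; the
config file itself is provisioned into the image and is not shown in this commit. A minimal
sketch consistent with the script's default pod and service CIDRs might look like this
(illustrative only):
```
$ cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
EOF
```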

@@ -0,0 +1,110 @@
#!/bin/bash
#
# Copyright (c) 2024 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

#set -o xtrace
set -o errexit
set -o nounset
set -o pipefail
set -o errtrace

http_proxy=${http_proxy:-}
https_proxy=${https_proxy:-}
no_proxy=${no_proxy:-}

pod_network_cidr=${pod_network_cidr:-"10.244.0.0/16"}
service_cidr=${service_cidr:-"10.96.0.0/12"}
cni_project=${cni_project:-"calico"}
local_ip_address=""
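
# Usage sketch (hypothetical invocation, not part of this commit): all of the
# variables above can be overridden from the environment, e.g. to use flannel
# as the CNI from behind a corporate proxy:
#
#   http_proxy=http://proxy.example.com:8080 \
#   https_proxy=http://proxy.example.com:8080 \
#   cni_project=flannel ./create_k8s_node.sh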

init_cluster() {
    if [ -d "$HOME/.kube" ]; then
        rm -rf "$HOME/.kube"
    fi

    sudo bash -c 'modprobe br_netfilter'
    sudo bash -c 'modprobe overlay'
    sudo bash -c 'swapoff -a'

    sudo systemctl stop apparmor
    sudo systemctl disable apparmor

    # initialize the cluster
    sudo -E kubeadm init --ignore-preflight-errors=all --config kubeadm-config.yaml

    # make kubectl work for the current (non-root) user; admin.conf is
    # root-owned, so copying it requires sudo
    mkdir -p "${HOME}/.kube"
    sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
    sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

    # remove the control-plane taint so workloads can be scheduled on this single node
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-
}

install_cni() {
    if [[ $cni_project == "calico" ]]; then
        calico_url="https://projectcalico.docs.tigera.io/manifests/calico.yaml"
        kubectl apply -f "$calico_url"
    else
        flannel_url="https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml"
        kubectl apply -f "$flannel_url"
    fi
}

find_local_ip_addr() {
    # Find the first network interface whose name starts with "enp" or "eth"
    interface=$(ip -o link show | awk -F': ' '$2 ~ /^(enp|eth)/ {print $2; exit}')
    if [ -n "$interface" ]; then
        # Get the IPv4 address of the found interface
        local_ip_address=$(ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1)
    fi
}

# Set the proxy environment with systemctl.
# This step is required for kubeadm, even when the proxies are already set in
# the systemd config files.
set_systemctl_proxy() {
    # Prefer the upper-case proxy variables, falling back to the lower-case
    # ones. The ${VAR:-...} expansions keep `set -o nounset` from aborting
    # when a variable is unset.
    local HTTPS_PROXY="${HTTPS_PROXY:-$https_proxy}"
    local HTTP_PROXY="${HTTP_PROXY:-$http_proxy}"
    local NO_PROXY="${NO_PROXY:-$no_proxy}"

    # Exempt the node's own IP address from proxying
    find_local_ip_addr
    if [ -n "$local_ip_address" ]; then
        NO_PROXY="$NO_PROXY,${local_ip_address}"
    fi

    # Exempt the pod and service networks from proxying
    NO_PROXY="$NO_PROXY,${pod_network_cidr},${service_cidr}"
    export NO_PROXY

    if [[ -n $HTTP_PROXY ]] || [[ -n $HTTPS_PROXY ]] || [[ -n $NO_PROXY ]]; then
        sudo systemctl set-environment HTTP_PROXY="$HTTP_PROXY"
        sudo systemctl set-environment HTTPS_PROXY="$HTTPS_PROXY"
        sudo systemctl set-environment NO_PROXY="$NO_PROXY"
        sudo systemctl restart containerd.service
    fi
}

main() {
    set_systemctl_proxy
    init_cluster
    install_cni
}

main "$@"