Add rollout upgrade integration test #29

Merged · 32 commits · Sep 4, 2024
Changes from 27 commits

Commits (32)
9ac14ae
rollout tests
bschimke95 Jul 17, 2024
bb0663d
reduce to 3
bschimke95 Jul 17, 2024
9b03bd6
Update docker build process and CI
bschimke95 Jul 24, 2024
ef2c035
Pin upgrade version to 1.30
bschimke95 Jul 24, 2024
c3dcbdf
fix e2e workflow
bschimke95 Jul 24, 2024
53b36e0
build args flag
bschimke95 Jul 24, 2024
8fcfba2
fix naming
bschimke95 Jul 24, 2024
1ff682d
login to Github
bschimke95 Jul 24, 2024
fb7a237
add tmate debug
bschimke95 Jul 25, 2024
7b5101d
do not fail fast
bschimke95 Jul 25, 2024
f7e7e41
use main branch
bschimke95 Jul 25, 2024
481b788
use bigger runner
bschimke95 Jul 31, 2024
b534742
remove logging function
bschimke95 Aug 1, 2024
f9b4a99
update DockerMachineTemplate instead of replacing it
bschimke95 Aug 1, 2024
37353c3
rename docker tags to old/new
bschimke95 Aug 1, 2024
46fc91a
fix docker build script
bschimke95 Aug 2, 2024
c9a90c5
fix build error
bschimke95 Aug 2, 2024
f06be10
docker fix
bschimke95 Aug 2, 2024
4c74334
update image tags
bschimke95 Aug 7, 2024
8394c4b
build script
bschimke95 Aug 8, 2024
b5edec7
add wrapper scripts
bschimke95 Aug 8, 2024
383a574
fix remediation
bschimke95 Aug 29, 2024
a1d4fe4
debug
bschimke95 Sep 1, 2024
060a216
make linter happy
bschimke95 Sep 2, 2024
98bfbf9
Add key to ck8sconfig_controller logs
HomayoonAlimohammadi Sep 3, 2024
63bb371
improve logs
HomayoonAlimohammadi Sep 3, 2024
3cefdb3
cleanup tests
bschimke95 Sep 3, 2024
15b2fe7
address comments
bschimke95 Sep 3, 2024
acb7383
readd tmate for debugging
bschimke95 Sep 4, 2024
4a16863
fix worker image
bschimke95 Sep 4, 2024
32dfa35
only validate major minor version
bschimke95 Sep 4, 2024
1728168
pin kubernetes version for new image
bschimke95 Sep 4, 2024
44 changes: 35 additions & 9 deletions .github/workflows/e2e.yaml
@@ -11,6 +11,14 @@ jobs:
name: Build & Run E2E Images
runs-on: [self-hosted, linux, X64, jammy, large]
steps:
-
name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
# We run into rate limiting issues if we don't authenticate
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Check out repo
uses: actions/checkout@v4
- name: Install requirements
@@ -22,29 +30,32 @@ jobs:
sudo snap install kubectl --classic --channel=1.30/stable
- name: Build provider images
run: sudo make docker-build-e2e
- name: Build k8s-snap image
- name: Build k8s-snap images
working-directory: hack/
run: |
cd templates/docker
sudo docker build . -t k8s-snap:dev
./build-e2e-images.sh
- name: Save provider image
run: |
sudo docker save -o provider-images.tar ghcr.io/canonical/cluster-api-k8s/controlplane-controller:dev ghcr.io/canonical/cluster-api-k8s/bootstrap-controller:dev
sudo chmod 775 provider-images.tar
- name: Save k8s-snap image
run: |
sudo docker save -o k8s-snap-image.tar k8s-snap:dev
sudo chmod 775 k8s-snap-image.tar
sudo docker save -o k8s-snap-image-old.tar k8s-snap:dev-old
sudo docker save -o k8s-snap-image-new.tar k8s-snap:dev-new
sudo chmod 775 k8s-snap-image-old.tar
sudo chmod 775 k8s-snap-image-new.tar
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: e2e-images
path: |
provider-images.tar
k8s-snap-image.tar
k8s-snap-image-old.tar
k8s-snap-image-new.tar

run-e2e-tests:
name: Run E2E Tests
runs-on: [self-hosted, linux, X64, jammy, large]
runs-on: [self-hosted, linux, X64, jammy, xlarge]
needs: build-e2e-images
strategy:
matrix:
@@ -54,7 +65,17 @@ jobs:
- "Workload cluster creation"
- "Workload cluster scaling"
- "Workload cluster upgrade"
# TODO(ben): Remove once all tests are running stable.
fail-fast: false
steps:
-
name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
# We run into rate limiting issues if we don't authenticate
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Check out repo
uses: actions/checkout@v4
- name: Install requirements
@@ -71,8 +92,13 @@ jobs:
path: .
- name: Load provider image
run: sudo docker load -i provider-images.tar
- name: Load k8s-snap image
run: sudo docker load -i k8s-snap-image.tar
- name: Load k8s-snap old image
if: matrix.ginkgo_focus == 'Workload cluster upgrade'
run: |
sudo docker load -i k8s-snap-image-old.tar
- name: Load k8s-snap new image
run: |
sudo docker load -i k8s-snap-image-new.tar
- name: Create docker network
run: |
sudo docker network create kind --driver=bridge -o com.docker.network.bridge.enable_ip_masquerade=true
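For local debugging, the CI steps above can be reproduced outside the workflow. A rough sketch of the equivalent commands, assuming Docker and the repository's make targets are available:

    # Build the provider controller images and the two k8s-snap images used by the e2e suite.
    sudo make docker-build-e2e
    (cd hack && ./build-e2e-images.sh)   # produces k8s-snap:dev-old and k8s-snap:dev-new

    # The e2e suite expects a pre-created "kind" network with IP masquerading enabled.
    sudo docker network create kind --driver=bridge \
      -o com.docker.network.bridge.enable_ip_masquerade=true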
14 changes: 14 additions & 0 deletions hack/build-e2e-images.sh
@@ -0,0 +1,14 @@
#!/bin/bash

# Description:
# Build k8s-snap docker images required for e2e tests.
#
# Usage:
# ./build-e2e-images.sh

DIR="$(realpath "$(dirname "${0}")")"

cd "${DIR}/../templates/docker"
sudo docker build . -t k8s-snap:dev-old --build-arg BRANCH=main --build-arg KUBERNETES_VERSION=v1.29.6
sudo docker build . -t k8s-snap:dev-new --build-arg BRANCH=main
cd -
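The script produces two images from the same Dockerfile: k8s-snap:dev-old with Kubernetes pinned to v1.29.6, and k8s-snap:dev-new tracking whatever version the k8s-snap main branch currently builds. A quick sanity check after running it (a sketch; both tags should appear in the listing):

    # Confirm the upgrade-test images exist locally.
    sudo docker image ls k8s-snap   # expect the dev-old and dev-new tags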
2 changes: 1 addition & 1 deletion pkg/ck8s/workload_cluster.go
@@ -290,7 +290,7 @@ func (w *Workload) doK8sdRequest(ctx context.Context, method, endpoint string, r
return fmt.Errorf("k8sd request failed: %s", responseBody.Error)
}
if responseBody.Metadata == nil || response == nil {
// Nothing to decode
// No response expected.
return nil
}
if err := json.Unmarshal(responseBody.Metadata, response); err != nil {
13 changes: 12 additions & 1 deletion templates/docker/Dockerfile
@@ -36,6 +36,8 @@ FROM $BUILD_BASE AS builder
ARG REPO=https://github.com/canonical/k8s-snap
ARG BRANCH=main

ARG KUBERNETES_VERSION=""

## NOTE(neoaggelos): install dependencies needed to build the tools
## !!!IMPORTANT!!! Keep up to date with "snapcraft.yaml:parts.build-deps.build-packages"
RUN apt-get update \
@@ -86,7 +88,12 @@ RUN /src/k8s-snap/build-scripts/build-component.sh helm

## kubernetes build
FROM builder AS build-kubernetes
RUN /src/k8s-snap/build-scripts/build-component.sh kubernetes
ENV KUBERNETES_VERSION=${KUBERNETES_VERSION}
RUN if [ -n "$KUBERNETES_VERSION" ]; then \
echo "Overwriting Kubernetes version with $KUBERNETES_VERSION"; \
echo "$KUBERNETES_VERSION" > /src/k8s-snap/build-scripts/components/kubernetes/version; \
fi
RUN /src/k8s-snap/build-scripts/build-component.sh kubernetes

## runc build
FROM builder AS build-runc
@@ -156,3 +163,7 @@ ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/k8s/

## NOTE(neoaggelos): Required for containerd to properly set up overlayfs for pods
VOLUME ["/var/snap/k8s/common/var/lib/containerd"]

## NOTE(ben): Remove existing kind image kubectl and kubelet binaries
# to avoid version confusion.
RUN rm -f /usr/bin/kubectl /usr/bin/kubelet
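The new KUBERNETES_VERSION build argument is what lets build-e2e-images.sh produce the old and new snap images from a single Dockerfile; when it is empty, the version already pinned in the k8s-snap build scripts is left untouched. A minimal sketch, mirroring the script above (run from templates/docker):

    # Pin the Kubernetes release baked into the image.
    sudo docker build . -t k8s-snap:dev-old \
      --build-arg BRANCH=main \
      --build-arg KUBERNETES_VERSION=v1.29.6

    # Omit the build-arg to build whatever version the k8s-snap main branch targets.
    sudo docker build . -t k8s-snap:dev-new --build-arg BRANCH=main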
4 changes: 3 additions & 1 deletion test/e2e/cluster_upgrade.go
@@ -139,7 +139,7 @@ func ClusterUpgradeSpec(ctx context.Context, inputGetter func() ClusterUpgradeSp
ClusterctlConfigPath: input.ClusterctlConfigPath,
KubeconfigPath: input.BootstrapClusterProxy.GetKubeconfigPath(),
InfrastructureProvider: *input.InfrastructureProvider,
Flavor: ptr.Deref(input.Flavor, ""),
Flavor: ptr.Deref(input.Flavor, "upgrades"),
Namespace: namespace.Name,
ClusterName: clusterName,
KubernetesVersion: input.E2EConfig.GetVariable(KubernetesVersion),
@@ -158,6 +158,7 @@ func ClusterUpgradeSpec(ctx context.Context, inputGetter func() ClusterUpgradeSp
Cluster: result.Cluster,
ControlPlane: result.ControlPlane,
KubernetesUpgradeVersion: input.E2EConfig.GetVariable(KubernetesVersionUpgradeTo),
UpgradeMachineTemplate: ptr.To(fmt.Sprintf("%s-control-plane-old", clusterName)),
WaitForMachinesToBeUpgraded: input.E2EConfig.GetIntervals(specName, "wait-machine-upgrade"),
})

@@ -167,6 +168,7 @@ func ClusterUpgradeSpec(ctx context.Context, inputGetter func() ClusterUpgradeSp
Cluster: result.Cluster,
UpgradeVersion: input.E2EConfig.GetVariable(KubernetesVersionUpgradeTo),
MachineDeployments: result.MachineDeployments,
UpgradeMachineTemplate: ptr.To(fmt.Sprintf("%s-md-1.30-0", clusterName)),
WaitForMachinesToBeUpgraded: input.E2EConfig.GetIntervals(specName, "wait-worker-nodes"),
})

15 changes: 6 additions & 9 deletions test/e2e/cluster_upgrade_test.go
@@ -25,13 +25,10 @@ import (
)

var _ = Describe("Workload cluster upgrade [CK8s-Upgrade]", func() {
BeforeEach(func() {
// TODO(bschimke): Remove once we find a way to run e2e tests with other infrastructure providers that support snap.
Skip("Skipping the upgrade tests as snap does not work on CAPD.")
})

Context("Upgrading a cluster with 1 control plane", func() {
ClusterUpgradeSpec(ctx, func() ClusterUpgradeSpecInput {
// Skipping this test as in-place upgrades are not supported yet.
// TODO(ben): Remove this skip when in-place upgrades are supported.
//Context("Upgrading a cluster with 1 control plane", func() {
/* ClusterUpgradeSpec(ctx, func() ClusterUpgradeSpecInput {
return ClusterUpgradeSpecInput{
E2EConfig: e2eConfig,
ClusterctlConfigPath: clusterctlConfigPath,
Expand All @@ -42,8 +39,8 @@ var _ = Describe("Workload cluster upgrade [CK8s-Upgrade]", func() {
ControlPlaneMachineCount: ptr.To[int64](1),
WorkerMachineCount: ptr.To[int64](2),
}
})
})
}) */
//})

Context("Upgrading a cluster with HA control plane", func() {
ClusterUpgradeSpec(ctx, func() ClusterUpgradeSpecInput {
7 changes: 4 additions & 3 deletions test/e2e/config/ck8s-docker.yaml
@@ -51,9 +51,10 @@ providers:
- old: "imagePullPolicy: Always"
new: "imagePullPolicy: IfNotPresent"
files:
- sourcePath: "../data/infrastructure-docker/cluster-template.yaml"
- sourcePath: "../data/infrastructure-docker/cluster-template-kcp-remediation.yaml"
- sourcePath: "../data/infrastructure-docker/cluster-template-md-remediation.yaml"
- sourcePath: "../data/infrastructure-docker/cluster-template-upgrades.yaml"
- sourcePath: "../data/infrastructure-docker/cluster-template.yaml"
- name: ck8s
type: BootstrapProvider
versions:
@@ -84,8 +85,8 @@ providers:

variables:
KUBERNETES_VERSION_MANAGEMENT: "v1.28.0"
KUBERNETES_VERSION: "v1.30.0"
KUBERNETES_VERSION_UPGRADE_TO: "v1.30.1"
KUBERNETES_VERSION: "v1.29.6"
KUBERNETES_VERSION_UPGRADE_TO: "v1.30.3"
IP_FAMILY: "IPv4"
KIND_IMAGE_VERSION: "v1.28.0"

@@ -84,7 +84,7 @@ metadata:
spec:
template:
spec:
customImage: k8s-snap:dev
customImage: k8s-snap:dev-new
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
@@ -127,7 +127,7 @@ metadata:
spec:
template:
spec:
customImage: k8s-snap:dev
customImage: k8s-snap:dev-new
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: CK8sConfigTemplate
@@ -57,7 +57,7 @@ metadata:
spec:
template:
spec:
customImage: k8s-snap:dev
customImage: k8s-snap:dev-old
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
@@ -101,7 +101,7 @@ metadata:
spec:
template:
spec:
customImage: k8s-snap:dev
customImage: k8s-snap:dev-old
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: CK8sConfigTemplate
135 changes: 135 additions & 0 deletions test/e2e/data/infrastructure-docker/cluster-template-upgrades.yaml
@@ -0,0 +1,135 @@
# TODO: copied and modified from https://github.com/k3s-io/cluster-api-k3s/pull/93/files#diff-c4a336ec56832a2ff7aed26c94d0d67ae3a0e6139d30701cc53c0f0962fe8cca
# should be the same as samples/docker/quickstart.yaml in the future
# for testing the quickstart scenario
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
name: ${CLUSTER_NAME}
namespace: ${NAMESPACE}
spec:
clusterNetwork:
pods:
cidrBlocks:
- 10.1.0.0/16
services:
cidrBlocks:
- 10.152.0.0/16
serviceDomain: cluster.local
controlPlaneRef:
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: CK8sControlPlane
name: ${CLUSTER_NAME}-control-plane
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerCluster
name: ${CLUSTER_NAME}
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerCluster
metadata:
name: ${CLUSTER_NAME}
spec: {}
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta2
kind: CK8sControlPlane
metadata:
name: ${CLUSTER_NAME}-control-plane
namespace: ${NAMESPACE}
spec:
machineTemplate:
infrastructureTemplate:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
name: ${CLUSTER_NAME}-control-plane-old
spec:
airGapped: true
controlPlane:
extraKubeAPIServerArgs:
--anonymous-auth: "true"
replicas: ${CONTROL_PLANE_MACHINE_COUNT}
version: ${KUBERNETES_VERSION}
# Initial template for the machine deployment
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
name: ${CLUSTER_NAME}-control-plane-old
namespace: ${NAMESPACE}
spec:
template:
spec:
customImage: k8s-snap:dev-old
# After upgrade template for the machine deployment
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
name: ${CLUSTER_NAME}-control-plane-new
namespace: ${NAMESPACE}
spec:
template:
spec:
customImage: k8s-snap:dev-new
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
name: worker-md-0
namespace: ${NAMESPACE}
spec:
clusterName: ${CLUSTER_NAME}
replicas: ${WORKER_MACHINE_COUNT}
selector:
matchLabels:
cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}

# This label will be needed for upgrade test
# it will be used as a selector for only selecting
# machines belonging to this machine deployment
cluster.x-k8s.io/deployment-name: worker-md-0
template:
metadata:
labels:
cluster.x-k8s.io/deployment-name: worker-md-0
spec:
version: ${KUBERNETES_VERSION}
clusterName: ${CLUSTER_NAME}
bootstrap:
configRef:
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: CK8sConfigTemplate
name: ${CLUSTER_NAME}-md-0
infrastructureRef:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
name: ${CLUSTER_NAME}-md-old-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
name: ${CLUSTER_NAME}-md-old-0
namespace: ${NAMESPACE}
spec:
template:
spec:
customImage: k8s-snap:dev-old
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: DockerMachineTemplate
metadata:
name: ${CLUSTER_NAME}-md-new-0
namespace: ${NAMESPACE}
spec:
template:
spec:
customImage: k8s-snap:dev-new
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta2
kind: CK8sConfigTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
namespace: ${NAMESPACE}
spec:
template:
spec:
airGapped: true
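The e2e framework renders this template through the "upgrades" flavor that ClusterUpgradeSpec now uses by default, substituting the ${...} variables at apply time. For illustration only, a hypothetical manual render with clusterctl, assuming the providers from ck8s-docker.yaml are configured and the remaining template variables are exported:

    # Hypothetical: render the upgrades flavor by hand instead of through the test framework.
    export NAMESPACE=default CONTROL_PLANE_MACHINE_COUNT=3 WORKER_MACHINE_COUNT=2
    clusterctl generate cluster upgrade-test \
      --flavor upgrades \
      --infrastructure docker \
      --kubernetes-version v1.29.6 > upgrade-cluster.yaml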