fix: mass pod deletion #15

Open · wants to merge 18 commits into base: main
48 changes: 48 additions & 0 deletions .github/workflows/ci-multiubuntu-push.yml
@@ -0,0 +1,48 @@
name: ci-latest-multiubuntu-release

on:
push:
branches: [main]
paths: ["examples/multiubuntu/build/**"]

permissions: read-all

env:
PLATFORM: linux/amd64,linux/arm64/v8

jobs:
multiubuntu-release:
name: Build & Push multiubuntu Image
if: github.repository == 'kubearmor/kubearmor'
runs-on: ubuntu-20.04
permissions:
id-token: write
timeout-minutes: 90
steps:
- uses: actions/checkout@v3

- name: Set up QEMU
uses: docker/setup-qemu-action@v2

- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
platforms: linux/amd64,linux/arm64/v8

- name: Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_AUTHTOK }}

- name: Build and push multi-architecture image
uses: docker/build-push-action@v6
with:
context: examples/multiubuntu/build
file: examples/multiubuntu/build/Dockerfile
push: true
tags: kubearmor/ubuntu-w-utils:latest
platforms: linux/amd64,linux/arm64/v8

- name: Logout from Docker Hub
run: docker logout
3 changes: 2 additions & 1 deletion .github/workflows/ci-test-controllers.yml
@@ -95,7 +95,8 @@ jobs:
kubectl rollout status --timeout=5m daemonset -l kubearmor-app=kubearmor -n kubearmor
kubectl rollout status --timeout=5m deployment -n kubearmor -l kubearmor-app=kubearmor-controller -n kubearmor
kubectl get pods -A
done
docker system prune -a -f

- name: Test KubeArmor using Ginkgo
run: |
23 changes: 22 additions & 1 deletion .github/workflows/ci-test-ginkgo.yml
@@ -10,6 +10,7 @@ on:
- ".github/workflows/ci-test-ginkgo.yml"
- "pkg/KubeArmorOperator/**"
- "deployments/helm/**"
- "examples/multiubuntu/build/**"
pull_request:
branches: [main]
paths:
@@ -51,6 +52,8 @@ jobs:
filters: |
controller:
- 'pkg/KubeArmorController/**'
multiubuntu:
- 'examples/multiubuntu/build/**'

- name: Install the latest LLVM toolchain
run: ./.github/workflows/install-llvm.sh
@@ -76,12 +79,23 @@ jobs:
if: steps.filter.outputs.controller == 'true'
run: make -C pkg/KubeArmorController/ docker-build TAG=latest

- name: Build multiubuntu docker image
working-directory: examples/multiubuntu/build
if: steps.filter.outputs.multiubuntu == 'true'
run: docker build -t kubearmor/ubuntu-w-utils:latest .

- name: Deploy pre-existing pod
run: |
kubectl apply -f ./tests/k8s_env/ksp/pre-run-pod.yaml
sleep 60
kubectl get pods -A

- name: make changes in multiubuntu-deployment
working-directory: tests/k8s_env
if: steps.filter.outputs.multiubuntu == 'true'
run: |
grep -rl "kubearmor/ubuntu-w-utils:latest" ./ | while read -r file; do sed -i 's/imagePullPolicy: Always/imagePullPolicy: Never/g' "$file"; done

- name: Run KubeArmor
timeout-minutes: 7
run: |
@@ -94,6 +108,9 @@ jobs:
if [[ ${{ steps.filter.outputs.controller }} == 'true' ]]; then
docker save kubearmor/kubearmor-controller:latest | sudo k3s ctr images import -
fi
if [[ ${{ steps.filter.outputs.multiubuntu }} == 'true' ]]; then
docker save kubearmor/ubuntu-w-utils:latest | sudo k3s ctr images import -
fi
else
if [ ${{ matrix.runtime }} == "crio" ]; then
docker save kubearmor/kubearmor-test-init:latest | sudo podman load
@@ -108,11 +125,15 @@ jobs:
docker save kubearmor/kubearmor-controller:latest | sudo podman load
sudo podman tag localhost/latest:latest docker.io/kubearmor/kubearmor-controller:latest
fi
if [ ${{ steps.filter.outputs.multiubuntu }} == 'true' ]; then
docker save kubearmor/ubuntu-w-utils:latest | sudo podman load
sudo podman tag localhost/latest:latest docker.io/kubearmor/ubuntu-w-utils:latest
fi
fi
fi
docker system prune -a -f
docker buildx prune -a -f
helm upgrade --install kubearmor-operator ./deployments/helm/KubeArmorOperator -n kubearmor --create-namespace --set kubearmorOperator.image.tag=latest
helm upgrade --install kubearmor-operator ./deployments/helm/KubeArmorOperator -n kubearmor --create-namespace --set kubearmorOperator.image.tag=latest --set kubearmorOperator.annotateExisting=true
kubectl wait --for=condition=ready --timeout=5m -n kubearmor pod -l kubearmor-app=kubearmor-operator
kubectl get pods -A
if [[ ${{ steps.filter.outputs.controller }} == 'true' ]]; then
18 changes: 17 additions & 1 deletion .github/workflows/ci-test-ubi-image.yml
@@ -54,6 +54,8 @@ jobs:
filters: |
controller:
- 'pkg/KubeArmorController/**'
multiubuntu:
- 'examples/multiubuntu/build/**'

- name: Install the latest LLVM toolchain
run: ./.github/workflows/install-llvm.sh
@@ -72,7 +74,12 @@ jobs:
working-directory: pkg/KubeArmorOperator
run: |
make docker-build


- name: Build multiubuntu docker image
working-directory: examples/multiubuntu/build
if: steps.filter.outputs.multiubuntu == 'true'
run: docker build -t kubearmor/ubuntu-w-utils:latest .

- name: Build KubeArmorController
if: steps.filter.outputs.controller == 'true'
run: make -C pkg/KubeArmorController/ docker-build TAG=latest
@@ -87,6 +94,9 @@ jobs:

if [ ${{ steps.filter.outputs.controller }} == 'true' ]; then
docker save kubearmor/kubearmor-controller:latest | sudo podman load
fi
if [[ ${{ steps.filter.outputs.multiubuntu }} == 'true' ]]; then
docker save kubearmor/ubuntu-w-utils:latest | sudo podman load
fi

helm upgrade --install kubearmor-operator ./deployments/helm/KubeArmorOperator -n kubearmor --create-namespace --set kubearmorOperator.image.tag=latest
@@ -113,6 +123,12 @@ jobs:
- name: Operator may take up to 10 sec to enable TLS, sleep for 15 sec
run: |
sleep 15

- name: make changes in multiubuntu-deployment
working-directory: ./tests/k8s_env
if: steps.filter.outputs.multiubuntu == 'true'
run: |
grep -rl "kubearmor/ubuntu-w-utils:latest" ./ | while read -r file; do sed -i 's/imagePullPolicy: Always/imagePullPolicy: Never/g' "$file"; done

- name: Test KubeArmor using Ginkgo
run: |
6 changes: 4 additions & 2 deletions KubeArmor/BPF/enforcer.bpf.c
@@ -58,6 +58,7 @@ int BPF_PROG(enforce_proc, struct linux_binprm *bprm, int ret) {
struct data_t *dirval;
bool recursivebuthint = false;
bool fromSourceCheck = true;
bool goToDecision = false;

// Extract full path of the source binary from the parent task structure
struct task_struct *parent_task = BPF_CORE_READ(t, parent);
@@ -114,7 +115,8 @@
RULE_HINT)) { // true directory match and not a hint suggests
// there are no possibility of child dir
val = dirval;
goto decision;
goToDecision = true; // defer the jump: a direct goto out of the loop trips the eBPF verifier
break;
} else if (dirval->processmask &
RULE_RECURSIVE) { // It's a directory match but also a
// hint, it's possible that a
@@ -134,7 +136,7 @@ int BPF_PROG(enforce_proc, struct linux_binprm *bprm, int ret) {
}
}

if (recursivebuthint) {
if (recursivebuthint || goToDecision) {
match = true;
goto decision;
}
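The `goToDecision` flag above replaces a `goto decision` jump from inside the loop, which the eBPF verifier can reject. A minimal plain-C sketch of the same control-flow transformation (illustrative names, not the actual KubeArmor code):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: instead of `goto decision` from inside the loop, set a flag,
 * break, and branch on the flag after the loop. The behavior is
 * identical; only the jump structure is simplified for the verifier. */
static bool exact_match_then_decide(const int *masks, int n, int want) {
    bool go_to_decision = false;

    for (int i = 0; i < n; i++) {
        if (masks[i] == want) {
            go_to_decision = true; /* was: goto decision; */
            break;
        }
    }

    if (go_to_decision) {
        /* decision: path taken on an exact match */
        return true;
    }
    return false;
}
```

The observable result is unchanged; the loop simply exits through a single, verifier-friendly edge.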
37 changes: 26 additions & 11 deletions KubeArmor/BPF/shared.h
@@ -644,7 +644,13 @@ static inline int match_and_enforce_path_hooks(struct path *f_path, u32 id,
if (id == dpath) { // Path Hooks
if (match) {
if (val && (val->filemask & RULE_OWNER)) {
if (!is_owner_path(f_path->dentry)) {
struct dentry *dent;
if (eventID == _FILE_MKNOD || eventID == _FILE_MKDIR) {
dent = BPF_CORE_READ(f_path, dentry, d_parent);
} else {
dent = f_path->dentry;
}
if (!is_owner_path(dent)) {
retval = -EPERM;
} else {
return 0;
@@ -655,13 +661,13 @@ static inline int match_and_enforce_path_hooks(struct path *f_path, u32 id,
}
}

if (retval == -EPERM) {
goto ringbuf;
}

bpf_map_update_elem(&bufk, &two, z, BPF_ANY);
pk->path[0] = dfile;
struct data_t *allow = bpf_map_lookup_elem(inner, pk);

if (retval == -EPERM && !(allow && !fromSourceCheck)) {
goto ringbuf;
}

if (allow) {
if (!match) {
@@ -677,6 +683,7 @@ static inline int match_and_enforce_path_hooks(struct path *f_path, u32 id,
if (val && (val->filemask & RULE_OWNER)) {
if (!is_owner_path(f_path->dentry)) {
retval = -EPERM;
goto ringbuf;
} else {
return 0;
}
@@ -690,14 +697,16 @@ static inline int match_and_enforce_path_hooks(struct path *f_path, u32 id,
}
}

if (retval == -EPERM) {
goto ringbuf;
}


bpf_map_update_elem(&bufk, &two, z, BPF_ANY);
pk->path[0] = dfile;
struct data_t *allow = bpf_map_lookup_elem(inner, pk);

if (retval == -EPERM && !(allow && !fromSourceCheck)) {
goto ringbuf;
}

if (allow) {
if (!match) {
if (allow->processmask == BLOCK_POSTURE) {
@@ -708,9 +717,15 @@ static inline int match_and_enforce_path_hooks(struct path *f_path, u32 id,
}
} else if (id == dfilewrite) { // file write
if (match) {
if (val && (val->filemask & RULE_DENY)) {
retval = -EPERM;
goto ringbuf;
if (val && (val->filemask & RULE_OWNER)) {
if (!is_owner_path(f_path->dentry)) {
retval = -EPERM;
goto ringbuf;
}
}
if (val && (val->filemask & RULE_READ) && !(val->filemask & RULE_WRITE)) {
retval = -EPERM;
goto ringbuf;
}
}

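The `eventID` branch in the shared.h hunk checks ownership on the parent directory for `mknod`/`mkdir`, because at hook time the object being created does not exist yet. A user-space sketch of that dentry-selection logic (struct layouts and names are simplified stand-ins, not the kernel's):

```c
#include <assert.h>

/* Simplified stand-ins for the kernel structures. */
struct dentry {
    struct dentry *d_parent;
};

struct path {
    struct dentry *dentry;
};

enum event_id { FILE_OPEN, FILE_MKNOD, FILE_MKDIR };

/* For mknod/mkdir the new node has no inode yet, so the ownership
 * check must run against the parent directory's dentry; for existing
 * objects the path's own dentry is used. */
static struct dentry *dentry_for_owner_check(struct path *p, enum event_id id) {
    if (id == FILE_MKNOD || id == FILE_MKDIR)
        return p->dentry->d_parent;
    return p->dentry;
}
```

This mirrors the patched `is_owner_path(dent)` call site, where `dent` is chosen the same way before the ownership test.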
2 changes: 1 addition & 1 deletion KubeArmor/BPF/system_monitor.c
@@ -640,7 +640,7 @@ static __always_inline int save_str_to_buffer(bufs_t *bufs_p, void *ptr) {
}

u32 str_pos = size_pos + sizeof(int);
if (str_pos >= MAX_BUFFER_SIZE || str_pos + MAX_STRING_SIZE > MAX_BUFFER_SIZE) {
if (str_pos >= MAX_BUFFER_SIZE - 1 || str_pos + MAX_STRING_SIZE > MAX_BUFFER_SIZE - 1) {
return 0;
}

Expand Down
Binary file modified KubeArmor/enforcer/bpflsm/enforcer_bpfeb.o
Binary file not shown.
Binary file modified KubeArmor/enforcer/bpflsm/enforcer_bpfel.o
Binary file not shown.
Binary file modified KubeArmor/enforcer/bpflsm/enforcer_path_bpfeb.o
Binary file not shown.
Binary file modified KubeArmor/enforcer/bpflsm/enforcer_path_bpfel.o
Binary file not shown.
2 changes: 1 addition & 1 deletion KubeArmor/go.mod
@@ -1,6 +1,6 @@
module github.com/kubearmor/KubeArmor/KubeArmor

go 1.23.5
go 1.23.6

replace (
github.com/kubearmor/KubeArmor => ../../
Binary file modified KubeArmor/presets/anonmapexec/anonmapexec_bpfeb.o
Binary file not shown.
Binary file modified KubeArmor/presets/anonmapexec/anonmapexec_bpfel.o
Binary file not shown.
Binary file modified KubeArmor/presets/filelessexec/filelessexec_bpfeb.o
Binary file not shown.
Binary file modified KubeArmor/presets/filelessexec/filelessexec_bpfel.o
Binary file not shown.
Binary file modified KubeArmor/presets/protectEnv/protectenv_bpfeb.o
Binary file not shown.
Binary file modified KubeArmor/presets/protectEnv/protectenv_bpfel.o
Binary file not shown.
Binary file modified KubeArmor/utils/bpflsmprobe/probe_bpfeb.o
Binary file not shown.
Binary file modified KubeArmor/utils/bpflsmprobe/probe_bpfel.o
Binary file not shown.
7 changes: 4 additions & 3 deletions contribution/k3s/install_k3s.sh
@@ -3,9 +3,10 @@
# Copyright 2021 Authors of KubeArmor

if [ "$RUNTIME" == "" ]; then
if [ -S /var/run/docker.sock ]; then
RUNTIME="docker"
elif [ -S /var/run/crio/crio.sock ]; then
# TODO: Enable the support for docker here. Currently docker runtime in k3s env is leading to the issue:
# https://github.com/kubearmor/KubeArmor/issues/1971
# Once this issue is fixed, we can support docker again in k3s
if [ -S /var/run/crio/crio.sock ]; then
RUNTIME="crio"
else # default
RUNTIME="containerd"
2 changes: 1 addition & 1 deletion deployments/controller/ka-updater-kured.yaml
@@ -98,7 +98,7 @@ spec:
- |
grep "bpf" /rootfs/sys/kernel/security/lsm >/dev/null
[[ $? -eq 0 ]] && echo "sysfs already has BPF enabled" && rm /sentinel/reboot-required && sleep infinity
grep "GRUB_CMDLINE_LINUX.*bpf" /rootfs/etc/default/grub >/dev/null
grep 'GRUB_CMDLINE_LINUX="[^"]*lsm=[^"]*\bbpf\b[^"]*"' /rootfs/etc/default/grub >/dev/null
[[ $? -eq 0 ]] && echo "grub already has BPF enabled" && sleep infinity
cat <<EOF >/rootfs/updater.sh
#!/bin/bash
@@ -47,6 +47,7 @@ webhooks:
- UPDATE
resources:
- pods
- pods/binding
sideEffects: NoneOnDryRun
objectSelector:
matchExpressions:
3 changes: 2 additions & 1 deletion deployments/get/objects.go
@@ -526,6 +526,7 @@ func GetKubeArmorControllerDeployment(namespace string) *appsv1.Deployment {
Args: []string{
"--leader-elect",
"--health-probe-bind-address=:8081",
"--annotateExisting=false",
},
Command: []string{"/manager"},
Ports: []corev1.ContainerPort{
@@ -769,7 +770,7 @@ func GetKubeArmorControllerMutationAdmissionConfiguration(namespace string, caCe
Rule: admissionregistrationv1.Rule{
APIGroups: []string{""},
APIVersions: []string{"v1"},
Resources: []string{"pods"},
Resources: []string{"pods", "pods/binding"},
},
Operations: []admissionregistrationv1.OperationType{
admissionregistrationv1.Create,
2 changes: 1 addition & 1 deletion deployments/go.mod
@@ -1,6 +1,6 @@
module github.com/kubearmor/KubeArmor/deployments

go 1.23.5
go 1.23.6

replace (
github.com/kubearmor/KubeArmor => ../
12 changes: 11 additions & 1 deletion deployments/helm/KubeArmor/templates/RBAC/roles.yaml
@@ -97,7 +97,17 @@ rules:
verbs:
- get
- list
- watch
- apiGroups:
- "apps"
resources:
- deployments
- statefulsets
- daemonsets
- replicasets
verbs:
- get
- update
- apiGroups:
- security.kubearmor.com
resources:
1 change: 1 addition & 0 deletions deployments/helm/KubeArmor/templates/deployment.yaml
@@ -82,6 +82,7 @@ spec:
- args:
- --health-probe-bind-address=:8081
- --leader-elect
- --annotateExisting=false
command:
- /manager
image: {{printf "%s:%s" .Values.kubearmorController.image.repository .Values.kubearmorController.image.tag}}
3 changes: 2 additions & 1 deletion deployments/helm/KubeArmor/templates/secrets.yaml
@@ -43,6 +43,7 @@ webhooks:
- CREATE
- UPDATE
resources:
- pods
- pods/binding
scope: '*'
sideEffects: NoneOnDryRun
9 changes: 9 additions & 0 deletions deployments/helm/KubeArmorOperator/templates/NOTES.txt
@@ -0,0 +1,9 @@
{{- if not .Values.kubearmorOperator.annotateExisting }}
⚠️ WARNING: Existing pods will not be annotated. Policy enforcement for already existing pods on AppArmor nodes will not work.
• To check enforcer present on nodes use:
➤ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name} {.metadata.labels.kubearmor\.io/enforcer}{"\n"}{end}'
• To annotate existing pods use:
➤ helm upgrade --install {{ .Values.kubearmorOperator.name }} kubearmor/kubearmor-operator -n kubearmor --create-namespace --set kubearmorOperator.annotateExisting=true
{{- end }}
ℹ️ Your release is named {{ .Release.Name }}.
💙 Thank you for installing KubeArmor.