The goal of the Kalm benchmark is to provide a proper baseline for comparing Kubernetes (workload) compliance/misconfiguration scanners.
This benchmark consists of two parts:
- a set of manifests containing misconfigurations that scanners should detect
- a web UI to help with the comparison and analysis of the scanner results on this benchmark
To facilitate a fair comparison, a set of manifests is generated. By design, each generated manifest follows best practices except for the single misconfiguration it is meant to expose, so a single manifest is supposed to trigger a single check of the scanner. The ID and further information for the check are provided as labels and annotations on the generated resources.
This way, it is easy to compare the expected results with the actual results of a scanner. An overview of all the checks in the benchmark can be found in Benchmark Checks.
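To illustrate how such a comparison could work, here is a minimal Python sketch. The label key `check_id` and the result categories are hypothetical and not the benchmark's actual schema:

```python
# Hypothetical sketch: compare the expected check (taken from a label on the
# generated resource) against the findings a scanner reported for that resource.
# The label key "check_id" and the data shapes are assumptions for illustration.

def compare_results(manifest_labels: dict, scanner_findings: list[str]) -> str:
    """Classify a scanner's result for a single benchmark manifest."""
    expected_check = manifest_labels.get("check_id")
    if expected_check in scanner_findings:
        return "detected"          # scanner raised the expected check
    if scanner_findings:
        return "other_findings"    # scanner flagged something, but not this check
    return "missed"                # scanner reported nothing for this manifest

# Example: a manifest labelled with the check it should trigger
labels = {"check_id": "CKV_K8S_1"}
print(compare_results(labels, ["CKV_K8S_1", "CKV_K8S_8"]))  # detected
```

Because the expected check travels with the manifest as metadata, the comparison reduces to a simple lookup per resource.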
The web application consists of two pages:
- an overview of various scanners checked with this benchmark
- an analysis page to inspect the results of a specific scanner in more detail
For Security and Ops teams:
- find a suitable scanner for your needs
- aid the development of your custom checks/policies/queries/rules for a tool
For scanner developers:
Requirements:
- Python >= 3.9
- The manifests are generated using cdk8s, which depends on Node.js; please ensure Node.js is installed on your system
- Any scanner for which a scan should be triggered must be installed manually
- Poetry is used to manage the project itself
To install the benchmark along with its dependencies listed in the `pyproject.toml`, execute:

```sh
poetry install
```
To use this package, run the CLI with the appropriate command:

```sh
poetry run cli <command>
```
For detailed information on the available commands or general help, run:

```sh
poetry run cli --help
```
To generate the manifests, use the `generate` command:

```sh
poetry run cli generate [--out <target directory>]
```

These manifests form the basis for the benchmark and will be placed in the directory specified with the `--out` argument. The location defaults to the `manifests` folder in the working directory.
Besides the CLI commands, the tool also provides a web user interface to manage the scan(s) and analyse evaluation results. It can be started with the command:

```sh
poetry run cli serve
```
To scan either a cluster or manifest files with the specified tool, use the `scan` command.
Use either the `-c` flag to specify the target cluster or the `-f` flag to specify the target file/folder.

```sh
poetry run cli scan <tool> [-c | -f <target file or folder>]
```
❗️ Important: executing a scan requires the respective tool to be installed on the system!
E.g., to scan the manifests located in the `manifests` folder with the tool `dummy`, execute:

```sh
poetry run cli scan dummy -f manifests
```
To save the results, add the `-o` flag with the name of the destination folder:

```sh
poetry run cli scan <tool> [-c | -f <target file or folder>] -o <output-folder>
```
For the evaluation of a scanner, the tool must be run beforehand and the results must be stored as `<tool>_v<version>_<yyyy-mm-dd>.json`.
The stored results can then be loaded by the tool for the evaluation:

```sh
poetry run cli evaluate <tool>
```
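As a small illustration of the expected naming scheme, the file name can be assembled like this (a hedged sketch; `result_filename` is a hypothetical helper, not part of the benchmark's CLI):

```python
from datetime import date

def result_filename(tool: str, version: str, scan_date: date) -> str:
    """Build the expected result file name: <tool>_v<version>_<yyyy-mm-dd>.json"""
    # date.isoformat() yields the required yyyy-mm-dd form
    return f"{tool}_v{version}_{scan_date.isoformat()}.json"

# Example: results of tool "dummy" version 1.2.3, scanned on 2023-05-17
print(result_filename("dummy", "1.2.3", date(2023, 5, 17)))
# dummy_v1.2.3_2023-05-17.json
```

If the results file does not follow this pattern, the evaluation step will not be able to associate it with the scanner and version.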
Some scanners only scan resources deployed in a Kubernetes cluster. You can find instructions on how to deploy the benchmark in a cluster here.
As its description states, it focuses on infrastructure security rather than workload security:

> Checks whether Kubernetes is deployed according to security best practices as defined in the CIS Kubernetes Benchmark.

Thus, the tool is listed in the benchmark just for completeness' sake and is expected to perform poorly, because it focuses on a different domain than the one covered by this benchmark!
KubiScan is not formally installed on a system. Instead, the project can be cloned and executed as a Python script.
For a consistent interface, the benchmark assumes that KubiScan can be called directly using the alias `kubiscan`.
E.g., to use a dedicated virtual environment (called `venv` inside the KubiScan directory), you can create the alias on Linux with this command:

```sh
alias kubiscan="<PATH-TO-DIR>/venv/bin/python <PATH-TO-DIR>/KubiScan.py"
```
- Ensure the `-t` flag is not used in the command. If it is, `stdout` and `stderr` are joined into just `stdout`. This means errors can't be handled properly and it corrupts the results in `stdout`.
Want to contribute? Awesome! We welcome contributions of all kinds: new scanners, fixes to the existing implementations, bug reports, and feedback of any kind.
- See the contributing guide here.
- Guidelines on how to onboard a new scanner can be found in the Scanner Onboarding Guide.
- Check out the Development Guide for more information.
- By contributing you agree to abide by the Code of Conduct.