Contents:
- Introduction to the auto-scaler evaluation platform
- Repository contents
- Platform setup and installation
- Running experiments
This platform for evaluating auto-scalers was created during the development of the KubeScale auto-scaler. The platform is a convenient testing ground for auto-scalers since it provides:
- workload generation based on templates executed by Gatling
- a web application with minimal start-up overhead based on the Flask framework
- load balancing and related HTTP traffic metrics (used by most cloud auto-scalers) such as RPS and latency, using Envoy
- advanced monitoring using Prometheus and kube-eagle
- easy visualisation of relevant performance metrics using a Grafana template
A high-level architecture of the platform is the following:
Any auto-scaler can thus easily integrate by:
- changing the Gatling workloads to those that test the auto-scaler
- using the existing web-application for CPU intensive work
- using the Prometheus monitoring system for storing and reading metrics
- using the Envoy load balancer for easier scaling, web traffic metrics and advanced load balancing
- ...
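As an illustration of the last two points, Envoy's HTTP traffic metrics can be read back from Prometheus over its HTTP query API. The sketch below is an assumption about a typical deployment, not taken from the repository: the Prometheus host placeholder and the label filters depend on your setup, while `envoy_cluster_upstream_rq_total` is Envoy's standard upstream request counter.

```shell
# Ask Prometheus for the per-second request rate (RPS) that Envoy has seen
# over the last minute. Replace <prometheus-host> with your Prometheus address.
curl -G "http://<prometheus-host>:9090/api/v1/query" \
  --data-urlencode 'query=rate(envoy_cluster_upstream_rq_total[1m])'
```

An auto-scaler can poll a query like this to drive its scaling decisions on RPS instead of raw CPU utilisation.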
An example of auto-scalers in action, shown through the metrics they produce:
A JSON-templated Grafana dashboard with metrics captured and presented using the framework:
This repository contains the following folders:
-
- contains helm charts to install and configure the monitoring platform in Kubernetes:
  - prometheus - a monitoring setup based on the Prometheus operator that ingests internal Kubernetes metrics, including a Kube metrics server and the Custom Metrics APIs
  - grafana - for user-friendly exploration and visualisation of metrics; auto-wired to connect to Prometheus and comes with predefined dashboards
  - kube-eagle - installation for precise utilisation metrics
  - prometheus custom metrics adapter
  - prometheus push gateway
  - ...
-
- definitions of the services, deployments, config maps and secrets for the components running on the platform
- components
- load generator - gatling (cron job, config map)
- load balancer - envoy (deployment)
- web application that needs auto-scaling (service, deployment)
- auto-scaler (deployment, config map)
- storage (local storage provisioning)
- ...
-
- a guide to installing a Kubernetes cluster using Kubespray (specific instructions are limited to Ubuntu machines, but KubeScale should work in various environments)
-
- Gatling load generator for sending requests to the web application based on pre-defined patterns
- contains the configs to get the Gatling load generation tool running in a Docker container
-
- a stateless Python Flask web application with simple REST API endpoints that perform CPU-intensive tasks; this is the application to be auto-scaled
and the following scripts:
- `script.sh` - instructions on how to deploy all components of the platform
- `experiment_runner.sh` - script for running experiments that test auto-scalers using workloads, applications and monitoring metrics on the evaluation platform
- Set up a Kubernetes cluster (the repository contains the instructions on how to do this using Kubespray inside `kubespray-setup`).
- Install kubectl and create a namespace in your Kubernetes cluster. The Kubernetes cluster creation document here contains the instructions to set up kubectl after the Kubespray cluster installation.
- Set up a config map with the kubectl config (required for the auto-scaler to work).
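A minimal sketch of this step, assuming the config map is created from the local kubeconfig; the config map name `kubectl-config` and the `<namespace>` placeholder are illustrative, so use whatever names the auto-scaler deployment expects:

```shell
# Store the local kubeconfig in a config map so the auto-scaler pod can read it.
# <namespace> is the namespace created in the previous step.
kubectl -n <namespace> create configmap kubectl-config \
  --from-file=config="$HOME/.kube/config"
```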
- Set up a secret with an email username/password combination (optional; used by the KubeScale auto-scaler to send email notifications).
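For example (the secret name and key names below are illustrative assumptions, not taken from the repository):

```shell
# Create a secret holding the email credentials used for notifications.
kubectl -n <namespace> create secret generic email-credentials \
  --from-literal=username='<email-address>' \
  --from-literal=password='<email-password>'
```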
- Set up authentication to a Docker registry.
  - Resources on using private repositories (GCR):
    - https://cloud.google.com/container-registry/docs/pushing-and-pulling
    - https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  - To create the relevant secret in Kubernetes, run a command like:

    ```shell
    kubectl create secret docker-registry gcr-json-key \
      --docker-server=gcr.io \
      --docker-username=_json_key \
      --docker-password="$(cat /Users/dbg/code/IdeaProjects/act_project/kubernetes/secrets/our-rock-280920-009255fa8bb1.json)" \
      [email protected]
    ```
- Make sure the nodes are properly labeled as exemplified in the `kubespray-setup/labels.txt` file.
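Labels can be applied with `kubectl label`; the key/value pair below is a placeholder, the actual labels to use are the ones listed in `kubespray-setup/labels.txt`:

```shell
# Inspect the current labels, then attach the expected label to a node.
kubectl get nodes --show-labels
kubectl label nodes <node-name> <label-key>=<label-value>
```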
- Install Helm (we remained on version 2).
  - On MacOS:

    ```shell
    brew install helm@2
    echo 'export PATH="/usr/local/opt/helm@2/bin:$PATH"' >> ~/.bash_profile
    ```

  - Helm assumes it's using the same config file as kubectl (`$HOME/.kube/config`) by default.
  - Update Helm (to make sure the client and the Tiller server are the same version):

    ```shell
    helm init --history-max 200
    helm init --upgrade
    ```
First read through and then run the setup.sh script with the relevant Kubernetes namespace name:

```shell
./setup.sh
```
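Once the script finishes, it is worth checking that all components came up; this is a generic sanity check, not a command from the repository:

```shell
# All platform pods (Gatling cron job, Envoy, the web application, the
# monitoring stack) should eventually report Running or Completed.
kubectl -n <namespace> get pods
kubectl -n <namespace> get svc
```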
The experiments depend on several parameters:
- scaling metric
- workload
- initial number of instances
Run the experiments using:

```shell
experiment_runner.sh <web application name> <scaling metric> <workload type> <num instances>
```
In the experiments, workload is generated based on Gatling patterns every 30 minutes and is repeated 7 times (lasting 3.5 hours in total).
Each experiment ran the chosen workload against:
1. a static number of instances
2. the KubeScale auto-scaler
3. just the reactive component of the KubeScale auto-scaler
4. the Kubernetes horizontal pod autoscaler