We need a container platform to run our Docker images, so we can serve our reference architecture applications to TELUS customers.
Kubernetes is the insanely popular container orchestration platform originally built by Google. It forms the backbone of our current Red Hat OpenShift platform, which builds on it, adding many features (build bootstrapping, an image registry, etc.).
Kubernetes allows us to orchestrate our Docker containers and use shared worker hosts (historically called "minions") as generic infrastructure for our autoscaling applications. Rather than building custom infrastructure for each application, we combine Docker and Kubernetes, which makes it much easier to deploy and scale our applications (see: "pets vs cattle"). Every time an application is deployed, it runs as if on an entirely new application "server", built from scratch.
To configure your cluster, Kubernetes uses "declarative configuration", which means that you simply define the state that you want your application to be in, e.g. "deploy these containers with these environment variables, and expose them on this port". You never tell Kubernetes HOW; it calculates the best way to perform the rollout, depending on your configuration. So to deploy an application, you simply apply a YAML configuration to the Kubernetes cluster.
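As a minimal sketch, a declarative configuration matching the example above might look like the following (the "hello-world" name, image tag, port, and environment variable are hypothetical placeholders). Applying it with kubectl apply -f hello-world.yaml hands the desired state to Kubernetes:

```yaml
# A minimal declarative configuration: "run this container with these
# environment variables, and expose it on this port".
apiVersion: v1
kind: Pod
metadata:
  name: hello-world            # placeholder application name
spec:
  containers:
    - name: hello-world
      image: hello-world:1.0.0 # placeholder image tag
      ports:
        - containerPort: 3000  # "expose them on this port"
      env:
        - name: NODE_ENV       # "with these environment variables"
          value: production
```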
In the Kubernetes YAML configuration, there are several types of objects that we use:
A Kubernetes namespace is a virtual boundary. Within the namespace, services get their own namespace-scoped DNS names, so containers in this namespace can easily communicate with each other.
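The namespace itself is one of the simplest objects to declare. A sketch, assuming a hypothetical "my-team" namespace that the examples below reuse:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-team   # placeholder team/application namespace
```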
A Kubernetes "deployment" is a declarative YAML configuration that allows you to deploy an application of a given name to a given namespace.
The deployment file defines the declarative state of an application in your Kubernetes cluster, and how that state is composed. It defines how many replicas to deploy, and anything that we would define in the docker-compose.yml file is defined here as well (environment variables, volumes, etc.). Your deployment must also define the resources that your application requires (CPU/RAM), as well as health checks (liveness/readiness probes) that ensure your app is running. Kubernetes then automates the work of deploying and scaling your application. It also saves the previous deployment in its history, in order to facilitate rollbacks.
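Putting those pieces together, a sketch of a deployment (the name, image tag, resource figures, and probe path are all placeholders, not our actual values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: my-team
spec:
  replicas: 2                      # how many copies of the pod to run
  selector:
    matchLabels:
      app: hello-world
  template:                        # the pod that each replica runs
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hello-world:1.0.0     # placeholder image tag
          ports:
            - containerPort: 3000
          env:                         # docker-compose-style config
            - name: NODE_ENV
              value: production
          resources:                   # CPU/RAM the app requires
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          livenessProbe:               # restart the container if this fails
            httpGet:
              path: /healthz           # placeholder health endpoint
              port: 3000
          readinessProbe:              # only send traffic once this passes
            httpGet:
              path: /healthz
              port: 3000
```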
The deployment defines our Kubernetes "pods", which are, more or less, containers of containers (you can deploy one or more Docker images in a single pod, so that they are always deployed on the same host). Containers in the same pod are all scheduled onto the same worker ("minion") server, and can talk to each other on localhost.
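For instance, a sketch of a two-container pod, where a hypothetical metrics sidecar scrapes the main app over localhost (both image names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
  namespace: my-team
spec:
  containers:
    - name: app
      image: hello-world:1.0.0       # placeholder app image
      ports:
        - containerPort: 3000
    - name: metrics-sidecar
      image: metrics-sidecar:1.0.0   # placeholder sidecar image
      env:
        - name: TARGET_URL
          # same pod, so the sidecar reaches the app on localhost
          value: http://localhost:3000/metrics
```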
A Kubernetes "service" load balances between pods, exposes specific ports, and gives other pods within the namespace a stable name to communicate with.
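A sketch of a service in front of the deployment above, matching pods by their app label (names and port numbers are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  namespace: my-team
spec:
  selector:
    app: hello-world    # route to pods carrying this label
  ports:
    - port: 80          # port other pods in the namespace connect to
      targetPort: 3000  # port the container actually listens on
```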
Despite the complex name, the "horizontal pod autoscaler" is a simple Kubernetes feature that will automatically deploy new pods, and add them to the service, when the existing pods reach a specific CPU threshold (by default we use 80%).
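A sketch using the autoscaling/v1 API against the hypothetical deployment above, with the 80% CPU target mentioned here (the replica bounds are placeholders):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-world
  namespace: my-team
spec:
  scaleTargetRef:                      # the deployment to scale
    apiVersion: apps/v1
    kind: Deployment
    name: hello-world
  minReplicas: 2                       # placeholder bounds
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # our default threshold
```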
Since we do not put our secrets, such as SSH keys or passwords, in git, nor do we bake them into our Docker images, we need some way of sharing them with the running application. Kubernetes uses "secrets", which can be mounted into the Docker containers either as environment variables or as volumes.
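The shape of a secret, and of a container consuming it, looks roughly like this (the key name and base64-encoded value are placeholders; real values never live in git):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: hello-world-secrets
  namespace: my-team
type: Opaque
data:
  DB_PASSWORD: cGxhY2Vob2xkZXI=   # base64("placeholder")
# A container in the deployment can then consume it as an
# environment variable:
#   env:
#     - name: DB_PASSWORD
#       valueFrom:
#         secretKeyRef:
#           name: hello-world-secrets
#           key: DB_PASSWORD
```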
To configure secrets on Kubernetes, you'll need to use Vault to store them. See secrets.md for more information.
@delivery