# stackfeed/k8s-ops cloud tools Docker image
Docker image which bundles cloud automation software used for operating Kubernetes. The container also bundles lots of useful tools to give you a ready-to-go container workstation, with no need to install anything on your host machine.
Note: This image requires the uid/gid in the container to match the user running the container. This means that macOS and Windows Docker daemons will likely have problems, since they run in hidden VMs and so the uid/gid of the user comes from the VM, not macOS or Windows. Stick to Linux to avoid these troubles, or install the tools with this limitation in mind: either natively, or by modifying this container or creating your own.
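For a quick sanity check of the uid/gid matching, a sketch (assuming the image's entrypoint passes arbitrary commands through, and using the :aws flavour as an example):

# host side: note your uid and gid
id -u && id -g
# container side: with a matching -u the reported ids should be identical
docker run --rm -u $(id -u):$(id -g) stackfeed/k8s-ops:aws id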
List of software bundled into this container (a quick version check follows the list):
- Terraform - infrastructure management that works with almost any cloud provider
- Terragrunt - a thin Terraform wrapper meant to smooth the experience of working with multiple Terraform stages and environments
- KOPS - the easiest way to get a production-grade Kubernetes cluster up and running
- kubectl - the Kubernetes CLI tool
- Helm - the package manager for Kubernetes
- Helmfile - a declarative spec for deploying Helm charts
- AWS CLI - the AWS command-line tool (available in the AWS container flavour)
- Heptio Ark - a utility for managing disaster recovery, specifically for your Kubernetes cluster resources and persistent volumes (available in the AWS container flavour)
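To see exactly which versions a given tag ships, you can run the tools' own version commands in a throwaway container. A sketch, assuming the entrypoint passes commands through (helm version --client is the Helm 2 form, which matches the HELM_FORCE_TLS option below):

docker run --rm stackfeed/k8s-ops:aws terraform version
docker run --rm stackfeed/k8s-ops:aws kubectl version --client
docker run --rm stackfeed/k8s-ops:aws helm version --client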
The image is configured through the following environment variables (see the example after the table):

Name | Description | Default |
---|---|---|
ZSH_THEME | Zsh theme to use. | cloud |
ZSH_PLUGINS | Zsh plugins enabled. | aws helm kops kubectl terraform |
HELM_FORCE_TLS | Specify y/yes/enabled/enable/true to force the Helm --tls option. | no |
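These can be passed at container creation time with docker run -e. For example (agnoster is just an illustrative oh-my-zsh theme name, not something this image requires):

docker run -ti --name myproject \
  -u $(id -u):$(id -g) \
  -e ZSH_THEME=agnoster \
  -e HELM_FORCE_TLS=yes \
  stackfeed/k8s-ops:aws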
This container uses fixuid, a Go binary that changes the Docker container user/group and file permissions at runtime. That's why it's recommended to run as an unprivileged user matching your host UID and GID.
Typical workstation container initialization looks like:
docker run -ti --name myproject -u $(id -u):$(id -g) -v /some/path:/code -w /code stackfeed/k8s-ops:aws
Above we create a container named myproject, provide the UID/GID of the host user, and pass any volumes which might be required.
The container can then be stopped and started as needed:

docker stop myproject
docker start myproject
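To start the container and reattach to its primary console in one step, docker start can attach interactively:

docker start -ai myproject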
To get any number of consoles running, simply exec into the running container:
docker exec -ti myproject zsh
Also note that if you want fancy colors and proper terminal width and height, you have to enhance the docker exec invocation with additional options:
docker exec -ti --env COLUMNS=`tput cols` --env LINES=`tput lines` myproject zsh
Better yet, make yourself an alias (note the single quotes, which defer the tput expansion until the alias is actually used):

alias deti='docker exec -ti --env COLUMNS=`tput cols` --env LINES=`tput lines`'
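With the alias defined, attaching a properly sized shell shortens to:

deti myproject zsh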
Here are a few best practices. When working on the workstation, almost any application makes use of the home directory. The tools bundled into this container are no exception: for example, kubectl uses the ~/.kube directory and helm uses ~/.helm.
If you recreate the container to update the tools, all this configuration will be lost!
That's why the first rule is to always pre-create a volume for the container user's home directory. The second rule is not to forget to pass volumes with the code.
# create volume to store home directory files
docker volume create myproject-home
# we assume the code we work with is in ~/code, so we mount it when we create the container
docker run -ti --name myproject --hostname myproject \
-u $(id -u):$(id -g) \
-v myproject-home:/home/fixuid \
-v ~/code:/home/fixuid/code \
stackfeed/k8s-ops:aws
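Because the home directory lives in the myproject-home volume, you can later recreate the container against a newer image without losing ~/.kube or ~/.helm. A sketch:

# remove the old container; the named volume survives
docker rm -f myproject
# pull the updated image
docker pull stackfeed/k8s-ops:aws
# recreate the workstation with the same volumes
docker run -ti --name myproject --hostname myproject \
  -u $(id -u):$(id -g) \
  -v myproject-home:/home/fixuid \
  -v ~/code:/home/fixuid/code \
  stackfeed/k8s-ops:aws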
# create the container with the CAP_NET_ADMIN capability to enable iptables usage
docker run -ti --name myproject --hostname myproject \
--cap-add=NET_ADMIN \
-u $(id -u):$(id -g) \
-v myproject-home:/home/fixuid \
-v ~/code:/home/fixuid/code \
stackfeed/k8s-ops:aws
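To confirm the capability was actually granted to the running container:

docker inspect --format '{{.HostConfig.CapAdd}}' myproject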