This is the set of terraform, helm, and docker configurations required to manage, operate, and deploy to a no-nonsense version of Kubernetes we call Coopernetes. This project is still in very early alpha development, and is currently only being used by Colab Coop (https://colab.coop) and itme (https://itme.company). If you are interested in hosting containers and applications on a managed Kubernetes cluster using Coopernetes, or you are interested in deploying the infrastructure yourself, please reach out to [email protected].
`master` is our primary working branch. It is intended to be generic, and can be cloned and used by anyone to launch a cluster from scratch.

`itme` and `colab` correspond to the configurations of the two organizations currently using Coopernetes. We each have slightly different needs and architectures, so we're using branches to track the individual changes until we can merge them back to `master`.
All commands can be installed with `brew install`, except for helm plugins, which use `helm plugin install`.
To manage the AWS infrastructure:

- `terraform`
- `awscli`
- `wget`

To manage and deploy applications on Kubernetes:

- `kubectl`
- `helm`
- `helmfile`
  - the helm-diff plugin: `helm plugin install https://github.com/databus23/helm-diff`
  - the helm-secrets plugin: `helm plugin install https://github.com/zendesk/helm-secrets`
- `gnu-getopt`: used by helm-secrets
- `velero`: used for backup and restore
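Assuming Homebrew, the toolchain above can be installed in one pass (formula names are the standard Homebrew ones):

```shell
# CLI tools, installed via Homebrew
brew install terraform awscli wget kubectl helm helmfile velero gnu-getopt

# Helm plugins are installed through helm itself, not brew
helm plugin install https://github.com/databus23/helm-diff
helm plugin install https://github.com/zendesk/helm-secrets
```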
Based on the example at https://github.com/terraform-aws-modules/terraform-aws-eks/blob/7de18cd9cd882f6ad105ca375b13729537df9e68/examples/managed_node_groups/main.tf
- Create a `terraform.tfvars` file based on the sample in the terraform repo to configure your cluster.
- From inside the `terraform/eks` folder, run `terraform apply`.
- Terraform generates a couple of config files needed by helmfile / helm. They are put in `terraform/eks/generated`, and the other programs that rely on them link to the files there.
- The first of these files is `helmfile.yaml`, which is a set of non-secret values generated or configured in terraform that are needed by helmfile. This file is imported as the default environment in helmfile, granting charts and configuration access to terraform values. Anything non-secret you want to pass from terraform to helmfile should live here.
- Secrets are encrypted with helm-secrets, which is configured using a `.sops.yaml` file in the root folder. In this repo, that file is symlinked to `terraform/eks/generated/sops.yaml`, since the KMS key is generated by terraform.
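Put together, the provisioning steps above look like this (assumes your `terraform.tfvars` is already in place; `terraform init` is the standard first-run step):

```shell
cd terraform/eks

# Download providers and modules on first run
terraform init

# Review the plan, then create the cluster and the generated config files
terraform apply
```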
- Configure kubectl with the generated kubeconfig: `aws eks --region us-west-2 --profile=<AWS_PROFILE> update-kubeconfig --name <CLUSTER_NAME>`
- Run `helmfile apply` in the root folder.
- Once you deploy your first application with an Ingress, run `kubectl get ingress --all-namespaces` to list the address associated with the ingress. That is the load balancer for all inbound requests on the cluster. You should create a DNS entry pointing to this load balancer for every service you want to expose.
- Port forward into Kibana by running the command from below, then go to the Discover menu item, configure the index pattern to `kubernetes_cluster*`, choose `@timestamp`, and Kibana is ready.
- Once the velero client is installed, you need to run a couple of commands to configure and set up backups:
  - Run `velero client config set namespace=system-backup`. This tells velero what namespace we installed it in.
  - Run `velero backup create test-backup` to test the backup functionality.
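To confirm the test backup actually completed, you can inspect it with velero's standard subcommands:

```shell
# List all backups and their phase (Completed, PartiallyFailed, Failed, ...)
velero backup get

# Show details for the test backup created above
velero backup describe test-backup
```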
- Once prometheus-operator is installed, you should add the following dashboard to grafana: https://grafana.com/grafana/dashboards/8670.
- You can reach the Grafana dashboard by finding the Grafana pod in the system-monitoring namespace, and then running: `kubectl port-forward <GRAFANA_POD> -n system-monitoring 3000:3000`
- You can log in with the user `admin` and the password `prom-operator`. Since you need access to the cluster to port forward, these account credentials can be shared freely.
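The pod lookup and port forward can be combined into one snippet (a sketch; the `app.kubernetes.io/name=grafana` label selector is an assumption and may differ in your install):

```shell
# Find the grafana pod in system-monitoring (label selector is an assumption)
GRAFANA_POD=$(kubectl get pods -n system-monitoring \
  -l app.kubernetes.io/name=grafana \
  -o jsonpath='{.items[0].metadata.name}')

# Forward local port 3000 to grafana
kubectl port-forward "$GRAFANA_POD" -n system-monitoring 3000:3000
```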
All deployment-related files, including the chart, helmfile, and Dockerfile, should live in a folder called `.deploy` in the root of the repository.

To deploy, simply launch the `coopernetes-deploy` container in CircleCI and use `coopctl` to deploy. This:
- Builds a docker image using the Dockerfile at `.deploy/Dockerfile` and the project root as the context.
- Calls `helmfile -f .deploy/helmfile.yaml apply`.
- Runs the following velero commands:
```shell
# backup hourly and retain (ttl) for 72 hours
velero schedule create $NAMESPACE-hourly-72 --schedule="0 * * * *" --ttl 72h0m0s --include-namespaces $NAMESPACE

# backup daily and retain (ttl) for 30 days
velero schedule create $NAMESPACE-daily-30 --schedule="0 0 * * *" --ttl 730h0m0s --include-namespaces $NAMESPACE

# backup monthly and retain (ttl) for 12 months
velero schedule create $NAMESPACE-monthly-12 --schedule="0 8 1 * *" --ttl 8760h0m0s --include-namespaces $NAMESPACE

# backup yearly and retain (ttl) for 7 years
velero schedule create $NAMESPACE-yearly-7 --schedule="0 16 1 5 *" --ttl 61320h0m0s --include-namespaces $NAMESPACE
```
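Outside of CircleCI, the image-build and helmfile steps above can be sketched as follows (`$IMAGE` is a placeholder tag, and the real `coopctl` implementation may differ):

```shell
# Build the image using .deploy/Dockerfile with the project root as context
docker build -f .deploy/Dockerfile -t "$IMAGE" .

# Apply the releases defined in the project's helmfile
helmfile -f .deploy/helmfile.yaml apply
```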
- Keep in mind, nodes have a maximum number of pods they can support, as indicated on the following list: https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt
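To check the per-node pod limit on a running cluster, the allocatable pod count is exposed on each node object:

```shell
kubectl get nodes -o custom-columns=NAME:.metadata.name,MAX_PODS:.status.allocatable.pods
```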
If you are using a custom chart for the project, we recommend putting it at `.deploy/chart/`.
- https://cert-manager.io/docs/tutorials/acme/ingress/
- https://cert-manager.io/docs/installation/kubernetes/
- logs: `kubectl port-forward deployment/efk-kibana 5601 -n system-logging`
- kubecost: `kubectl port-forward deployment/kubecost-cost-analyzer 9090 -n kubecost`