EKS cluster bootstrap with batteries included
- Supports creation of multiple nodegroups of different types, with communication enabled between them
- Taint and label your nodegroups
- Authorize IAM users for cluster access
- Manage IAM policies that will be attached to your nodes
- Easily configure Docker registry secrets to allow pulling private images
- Manage Route53 DNS records to point at your Kubernetes services
- Export nodegroups to Spotinst Elastigroups
- Auto-resolves AMIs by region and instance type (including GPU-enabled AMIs)
- Supports both Kubernetes 1.12 and 1.13
- Configuration is saved to S3 for easy collaboration
$ gem install eks_cli
$ eks create --kubernetes-version 1.13 --cluster-name my-eks-cluster --s3-bucket my-eks-config-bucket
$ eks create-nodegroup --cluster-name my-eks-cluster --group-name nodes --ssh-key-name <my-ssh-key> --s3-bucket my-eks-config-bucket --yes
$ eks delete-cluster --cluster-name my-eks-cluster --s3-bucket my-eks-config-bucket
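Once `eks create` returns, a quick sanity check might look like this (a sketch; it assumes your kubeconfig already points at the new cluster):
$ eks wait-for-cluster --cluster-name my-eks-cluster --s3-bucket my-eks-config-bucket # blocks until the control plane answers HTTP requests
$ kubectl get nodes # nodegroup instances should register and become Ready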
You can type `eks` in your shell to get the full synopsis of available commands:
Commands:
eks add-iam-user IAM_ARN # adds an IAM user as an authorized member on the EKS cluster
eks create # creates a new EKS cluster
eks create-default-storage-class # creates default storage class on a new k8s cluster
eks create-dns-autoscaler # creates kube dns autoscaler
eks create-nodegroup # creates all nodegroups on environment
eks delete-cluster # deletes a cluster, including nodegroups
eks delete-nodegroup # deletes cloudformation stack for nodegroup
eks enable-gpu # installs nvidia plugin as a daemonset on the cluster
eks export-nodegroup # exports nodegroup auto scaling group to spotinst
eks help [COMMAND] # Describe available commands or one specific command
eks scale-nodegroup # scales a nodegroup
eks set-docker-registry-credentials USERNAME PASSWORD EMAIL # sets docker registry credentials
eks set-iam-policies --policies=one two three # sets IAM policies to be attached to created nodegroups
eks set-inter-vpc-networking TO_VPC_ID TO_SG_ID # creates a vpc peering connection, sets route tables and allows network access on SG
eks show-config # print cluster configuration
eks update-auth # update aws auth configmap to allow all nodegroups to connect to control plane
eks update-cluster-cni # updates cni with warm ip target
eks update-dns HOSTNAME K8S_SERVICE_NAME # alters route53 CNAME records to point to k8s service ELBs
eks version # prints eks_cli version
eks wait-for-cluster # waits until cluster responds to HTTP requests
Options:
c, [--cluster-name=CLUSTER_NAME] # eks cluster name (env: EKS_CLI_CLUSTER_NAME)
s3, [--s3-bucket=S3_BUCKET] # s3 bucket name to save configuration and state (env: EKS_CLI_S3_BUCKET)
Prerequisites:
- Ruby
- kubectl version >= 1.10 on your PATH
- aws-iam-authenticator on your PATH
- aws-cli version >= 1.16.18 on your PATH
- An S3 bucket with read/write permissions to store configuration
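A quick way to verify the tooling prerequisites are on your PATH (output will vary by machine):
$ ruby --version
$ kubectl version --client
$ aws-iam-authenticator version
$ aws --version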
You are encouraged to export both `EKS_CLI_CLUSTER_NAME` and `EKS_CLI_S3_BUCKET` as environment variables instead of passing the corresponding flags on each command. It makes commands clearer and reduces the chance of typos.
The following selected commands assume you have exported both environment variables:
export EKS_CLI_S3_BUCKET=my-eks-config-bucket
export EKS_CLI_CLUSTER_NAME=my-eks-cluster
`EKS_CLI_S3_BUCKET` can safely be put in your `~/.bash_profile`, while `EKS_CLI_CLUSTER_NAME` may be exported on a per-cluster basis.
Nodegroups are created separately from the cluster.
You can run `eks create-nodegroup` multiple times to create several nodegroups with different instance types and numbers of workers.
Nodes in different nodegroups may communicate freely thanks to a shared Security Group.
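For example, a second nodegroup alongside the first (a sketch: the `--instance-type` flag is an assumption here, verify the exact flags with `eks help create-nodegroup`):
$ eks create-nodegroup --group-name nodes --ssh-key-name my-key --yes
$ eks create-nodegroup --group-name big-workers --instance-type m5.2xlarge --ssh-key-name my-key --yes # --instance-type is assumed, not confirmed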
Scale nodegroups up and down using:
$ eks scale-nodegroup --group-name nodes --min 1 --max 10
$ eks add-iam-user arn:aws:iam::XXXXXXXX:user/XXXXXXXX --yes
Edits the `aws-auth` ConfigMap and updates it on EKS to allow an IAM user to access the cluster via kubectl.
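To see the effect, you can inspect the ConfigMap with plain kubectl (nothing eks_cli-specific):
$ kubectl -n kube-system get configmap aws-auth -o yaml # the added IAM ARN should appear under mapUsers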
$ eks set-iam-policies --policies=AmazonS3FullAccess AmazonDynamoDBFullAccess
Sets IAM policies to be attached to nodegroups once created.
This setting does not apply retroactively; it only affects future `eks create-nodegroup` commands.
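A typical flow, assuming EKS_CLI_CLUSTER_NAME and EKS_CLI_S3_BUCKET are exported as above (`s3-workers` and `my-key` are illustrative names):
$ eks set-iam-policies --policies=AmazonS3FullAccess AmazonDynamoDBFullAccess
$ eks create-nodegroup --group-name s3-workers --ssh-key-name my-key --yes # nodes in this group come up with the policies attached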
$ eks update-dns my-cool-service.my-company.com cool-service --route53-hosted-zone-id=XXXXX --elb-hosted-zone-id=XXXXXX
Takes the ELB endpoint from the `cool-service` Kubernetes service and sets it as an alias record for my-cool-service.my-company.com on Route53.
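One way to verify the record once it propagates (plain dig, not part of eks_cli):
$ dig +short my-cool-service.my-company.com # should resolve to the ELB addresses backing cool-service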
$ eks enable-gpu
Installs the NVIDIA device plugin required to expose your GPUs to the cluster
Assumptions:
- You have a nodegroup using the EKS GPU AMI
- This nodegroup uses GPU instance types (p2.* / p3.* etc.)
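A quick check that GPUs are actually exposed once the plugin is running (standard kubectl; output depends on your nodegroup):
$ kubectl -n kube-system get daemonsets # the nvidia device plugin daemonset should be listed
$ kubectl describe nodes | grep nvidia.com/gpu # capacity/allocatable should be non-zero on GPU nodes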
$ eks set-docker-registry-credentials <dockerhub-user> <dockerhub-password> <dockerhub-email>
Adds your Docker Hub credentials as a Kubernetes secret and attaches it to the default ServiceAccount's imagePullSecrets, allowing pods to pull private images.
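To confirm the secret is wired up (standard kubectl):
$ kubectl get serviceaccount default -o yaml # imagePullSecrets should reference the new registry secret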
$ eks create-default-storage-class
Creates a default StorageClass named gp2, backed by standard gp2 EBS volumes.
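Verification (standard kubectl):
$ kubectl get storageclass # gp2 should be listed and marked as (default)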
$ eks create-dns-autoscaler
Creates a CoreDNS autoscaler with production defaults.
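To confirm it is running (standard kubectl; the exact deployment name may vary between versions):
$ kubectl -n kube-system get deployments | grep -i autoscaler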
$ eks set-inter-vpc-networking VPC_ID SG_ID
Assuming you have shared resources in another VPC (an RDS instance, for example), this command opens communication between your new EKS cluster and the old VPC by:
- Creating and accepting a VPC peering connection from your EKS cluster VPC to the old VPC
- Setting route tables on both directions to allow communication
- Adding an ingress rule to SG_ID to accept all communication from your new cluster nodes.
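To confirm the peering on the AWS side (plain aws-cli):
$ aws ec2 describe-vpc-peering-connections --filters Name=status-code,Values=active # the new EKS-to-VPC peering should appear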
$ eks export-nodegroup --group-name=other-nodes
Exports the corresponding Auto Scaling Group to a Spotinst Elastigroup
Requires the following environment variables to be set:
- SPOTINST_ACCOUNT_ID
- SPOTINST_API_TOKEN
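For example (both values are placeholders for your own Spotinst credentials):
export SPOTINST_ACCOUNT_ID=act-12345678
export SPOTINST_API_TOKEN=<your-spotinst-api-token>
$ eks export-nodegroup --group-name=other-nodes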
Contributions are more than welcome! ;)