How to Run a Kubernetes Cluster

This page provides details on creating and running a Kubernetes cluster, both locally for development using Minikube and remotely on AWS using kops.

Recommended Prerequisite Versions

Procedures described in this document require some prerequisites. Where required, the following prerequisite versions are recommended:

Prerequisite | Version
------------ | -------
Docker       | 17.09.1
Kubernetes   | 1.8.4
Kubectl      | 1.8.4
Helm         | 2.8.2
Kops         | 1.8.1
Minikube     | 0.25.0

Any discrepancy between the installed and recommended prerequisite versions may cause your deployments to fail.
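You can compare your installed versions against the table above with each tool's standard version command:

    docker version
    kubectl version --client
    helm version --client
    kops version
    minikube version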

In AWS via Kops

Download Tools for AWS Deployment

  1. Install prerequisites for kops.

    a. Kubectl

    b. Helm Client

    c. AWS CLI (Note: install awscli, and not aws-shell).

  2. Install kops.
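If you're on macOS with Homebrew, one way to install all of the above is sketched below; this is only illustrative, and each project's own install docs cover other platforms and pinning to the recommended versions.

    # Illustrative install commands (macOS / Homebrew assumed).
    brew install kubectl
    brew install kubernetes-helm   # Helm 2.x client
    brew install awscli            # awscli, not aws-shell
    brew install kops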

Set Up and Start Kops Cluster

  1. Create an SSH key (the ps-cluster file name matches the public key referenced when creating the cluster in step 3).

    ssh-keygen -t rsa -b 4096 -C "anaxes_bastion" -f ps-cluster
  2. Set up the resources required for your cluster.

    Note: Using a gossip-based cluster is much simpler than creating a DNS-based cluster.
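    For example, a minimal sketch of creating a versioned S3 bucket for the kops state store with the AWS CLI (the bucket name is a placeholder, and eu-west-1 matches the zones used in step 3). If you use a gossip-based cluster, the cluster name you choose later must end in .k8s.local.

    aws s3api create-bucket \
      --bucket <my s3 bucket name> \
      --region eu-west-1 \
      --create-bucket-configuration LocationConstraint=eu-west-1
    aws s3api put-bucket-versioning \
      --bucket <my s3 bucket name> \
      --versioning-configuration Status=Enabled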

  3. Create the cluster using the SSH key created in step 1 and the AWS S3 bucket created in step 2.

    Note: this will take a few minutes to create the EC2 instances, set up Kubernetes, and make the ELB available.

    export KOPS_NAME="<my kops name>"
    export KOPS_STATE_STORE="s3://<my s3 bucket name>"
    
    kops create cluster \
      --ssh-public-key ps-cluster.pub \
      --name $KOPS_NAME \
      --state $KOPS_STATE_STORE \
      --node-count 2 \
      --zones eu-west-1a,eu-west-1b \
      --master-zones eu-west-1a,eu-west-1b,eu-west-1c \
      --cloud aws \
      --node-size m4.xlarge \
      --master-size t2.medium \
      -v 10 \
      --kubernetes-version "1.8.4" \
      --bastion \
      --topology private \
      --networking weave \
      --yes
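    Once kops has finished, a quick way to confirm the cluster is up (a sketch using standard kops and kubectl commands):

    kops validate cluster \
      --name $KOPS_NAME \
      --state $KOPS_STATE_STORE
    kubectl get nodes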
  4. Install (Helm) Tiller.
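    A minimal sketch for installing Tiller with Helm 2 (the tiller service account and cluster-admin binding below are common choices, not mandated by this guide):

    kubectl --namespace kube-system create serviceaccount tiller
    kubectl create clusterrolebinding tiller \
      --clusterrole=cluster-admin \
      --serviceaccount=kube-system:tiller
    helm init --service-account tiller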

  5. Install the dashboard.

    If you're setting up a production environment, use the recommended approach, which is more secure.

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

    To access the dashboard, view these instructions.

    If you're setting up a development environment, use the alternative approach, which makes access easier.

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/alternative/kubernetes-dashboard.yaml

    To access the dashboard, view these instructions.
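    If the linked instructions are unavailable, a common way to reach the dashboard is through kubectl proxy (the URL below assumes the dashboard is deployed in kube-system, as in the recommended manifest above):

    kubectl proxy
    # Then open:
    # http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/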

Provided all the steps were successful, the deployed cluster topology should be similar to that of the kops demo topology.

Stop and Delete AWS Resources

  1. Delete the cluster.

    This deletes the EC2 instances and ELB.
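    A sketch of the delete command, reusing the variables exported earlier:

    kops delete cluster \
      --name $KOPS_NAME \
      --state $KOPS_STATE_STORE \
      --yes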

  2. Delete the S3 Bucket.

    If you followed the advice to create a versioned bucket, you will need to delete all the versioned objects before deleting the bucket.
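    One way to do this from the AWS CLI (a sketch; the bucket name is a placeholder, and any delete markers left behind may need to be removed the same way before the bucket can be dropped):

    aws s3api delete-objects \
      --bucket <my s3 bucket name> \
      --delete "$(aws s3api list-object-versions \
        --bucket <my s3 bucket name> \
        --query '{Objects: Versions[].{Key: Key, VersionId: VersionId}}' \
        --output json)"
    aws s3 rb s3://<my s3 bucket name>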

Locally via Minikube

Download Tools for Local Minikube Deployment

  1. Install Prerequisites for Minikube:

    a. Hypervisor

    b. Kubectl

    c. Helm Client

    d. Docker

  2. Install Minikube.
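One option on macOS at the recommended Minikube version is Homebrew's cask (a sketch; the Minikube releases page covers Linux and Windows):

    brew cask install minikube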

Start Minikube

  1. Start Kubernetes Cluster:

    minikube start

    Note: When starting Minikube, it is recommended to give it plenty of memory for hosting containers. You can do this by adding the parameter --memory=6144 to the minikube start command, as shown below.
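    For example (6144 MB is the recommendation above; adjust it to what your machine can spare):

    minikube start --memory=6144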

  2. Install (Helm) Tiller.

  3. If you want to access the dashboard:

    minikube dashboard

Stop and Delete Minikube Resources

  1. Stop Kubernetes Cluster:

    minikube stop

  2. Delete Kubernetes Cluster:

    minikube delete

Notes:

  • Features that require a cloud provider (e.g. LoadBalancer services) will not work in Minikube.
  • Minikube runs a single-node Kubernetes cluster inside a VM on your laptop.

Useful resource: Running Kubernetes Locally via Minikube.