# Kubernixos


This is the second attempt to create a simple wrapper around the declarative parts of `kubectl`.

## The Goal

We want to consider our declarative configuration in Nix (committed to git) as the single (and only) source of truth.

This means that whenever we ask for the state in our Kubernetes clusters to be applied, any cluster object in any namespace that is managed by kubernixos should:

  1. (if not in cluster) be created.

  2. (if needed) be updated to match the state of our Nix closure.

  3. (if in cluster, but not in Nix closure) be terminated.

`kubectl apply` takes good care of points 1 and 2, but `kubectl apply --prune` is unfortunately not mature enough to serve our needs. Thus, the prune part of the apply command has been re-implemented in this project using the Kubernetes client APIs for Go.

## How it works

Kubernetes objects are not managed by kubernixos unless they carry the `kubernixos` label.

The value of this label is a sha256 checksum of the entire deployment JSON-blob, which makes it possible to identify objects for pruning: if the checksum of the current deployment does not match an object's checksum, that object is considered "unwanted" and ready for pruning.
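For illustration, a managed object might look roughly like this once labelled; the checksum value below is a made-up placeholder, not a real deployment hash:

```nix
# Illustration only: a Kubernetes object carrying the "kubernixos" label.
# The label value is the sha256 checksum of the deployment JSON-blob;
# the value shown here is a hypothetical placeholder.
{
  kind = "ConfigMap";
  apiVersion = "v1";
  metadata = {
    name = "example";
    namespace = "default";
    labels = {
      kubernixos = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855";
    };
  };
  data = { greeting = "hello"; };
}
```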

These are the stages involved in a kubernixos rollout to a cluster:

  1. Invoke the NixOS module system through `nix eval` to get the value of the module attribute `config.kubernixos` (see the sketch after this list).

  2. Create a Kubernetes auth config from the local machine's kubeconfig (`~/.kube/config` or `KUBECONFIG` in the environment).

  3. Query the full list of available API resources on the Kubernetes API server.

  4. Apply the value of `config.kubernixos.manifests` through `kubectl apply`.

  5. Get all Kubernetes objects of all types in all namespaces that have the `kubernixos` label.

  6. Compare the `kubernixos` label value with that of `config.kubernixos.checksum`, and delete (prune) objects that have the label but with a different value.
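As a rough sketch of step 1, evaluating the module system by hand could look like the following, using `lib.evalModules` from nixpkgs; the manifest module path is hypothetical, and kubernixos' actual `nix eval` entry point may be wired up differently:

```nix
# Sketch only: evaluating the kubernixos NixOS module by hand.
# ./my-manifests.nix is a hypothetical module that sets kubernixos.manifests;
# the project's real evaluation entry point may differ.
let
  pkgs = import <nixpkgs> { };
  eval = pkgs.lib.evalModules {
    modules = [
      ./lib/module.nix     # the module shipped with kubernixos
      ./my-manifests.nix   # hypothetical module defining kubernixos.manifests
    ];
  };
in
  eval.config.kubernixos
```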

## Usage

`kubernixos`: Everything is evaluated and checked, but nothing is output or mutated.

`kubernixos dump`: Dumps the JSON-blob generated by kubernixos, ready for `kubectl apply`. This mutates nothing.

`kubernixos apply`: Like `kubectl apply`. This applies the kubernixos state to the cluster. Unwanted objects will not be pruned unless `prune` is added to the command explicitly.

`kubernixos prune`: Prunes unwanted objects from the cluster, i.e. objects that are not part of the kubernixos Nix closure.

If `--show-trace` is passed to kubernixos, the flag is passed on to `nix eval`.

Anything else passed to kubernixos is forwarded to `kubectl apply`. This can be used to get more verbose kubectl output or to perform a dry-run, e.g. `kubernixos apply --dry-run=true -v=6`.

Consecutive keywords can be passed to kubernixos, like `kubernixos dump apply prune`.

## The module

Kubernixos comes with a NixOS module (`/lib/module.nix`). `kubernixos.manifests` can hold a list of Kubernetes manifests (as attrsets), which will get serialized into JSON.

Example:

```nix
{
  kubernixos.manifests = [{
    kind = "Namespace";
    apiVersion = "v1";
    metadata = {
      name = "development";
      labels = {
        name = "development";
      };
    };
  }];
}
```
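Conceptually, the serialization of such a manifest boils down to `builtins.toJSON`; the snippet below only illustrates that step, it is not the module's exact implementation:

```nix
# Illustration of the JSON serialization step (not the module's exact code):
# builtins.toJSON turns the manifest attrset into a JSON string, with keys
# sorted alphabetically as Nix attribute sets always are.
builtins.toJSON {
  kind = "Namespace";
  apiVersion = "v1";
  metadata = {
    name = "development";
    labels.name = "development";
  };
}
# => {"apiVersion":"v1","kind":"Namespace","metadata":{"labels":{"name":"development"},"name":"development"}}
```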

## Including Kubernixos in your NixOS config

The kubernixos script accepts two variables that can be set at runtime:

  1. `PACKAGES`
  2. `MODULES`

If kubernixos is built using the derivation defined in `/default.nix`, `PACKAGES` defaults to whichever nixpkgs set is used to build the derivation. It can, however, still be overridden at runtime.

As for `MODULES`, it is important to be able to set modules at runtime, since this makes it possible to apply different options for different environments.

Example `shell.nix` implementation for a morph-managed NixOS config repo:

```nix
{ pkgs ? import ../../common/nixpkgs.nix { version = "18.09"; } }:

with pkgs.lib;
pkgs.stdenv.mkDerivation {
  name = "kubernixos";

  buildInputs =
  let
    # Pin the upstream kubernixos derivation from GitHub.
    upstream = with builtins; with pkgs;
      callPackage (fetchFromGitHub {
        owner = "DBCDK";
        repo = "kubernixos";
        rev = "f7d930ef5474255f44af5489eed49648537a386b";
        sha256 = "1wansvk8wqz2c8s4mhvnmyqiyz5mdisris3hvs81bq4zkd2smhbi";
      }) {};

    # Wrap the upstream script: the first argument selects a network file,
    # which provides the PACKAGES and MODULES expressions at runtime.
    kubernixos = pkgs.writeShellScriptBin "kubernixos" ''
      if [ ! -f "$1" ]; then
        echo "Hostgroup $1 does not exist!" >&2
        exit 1
      fi

      export PACKAGES="(import $1).network.pkgs"
      export MODULES="(import $1).network.modules"
      shift

      ${upstream}/bin/kubernixos "$@"
    '';
  in
    singleton kubernixos;
}
```

The above example wraps the kubernixos script such that its first argument becomes a path to a network file (similar to the hostgroup files used by Morph or NixOps).
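For reference, such a network file might look roughly like this; the layout is a hypothetical sketch that merely matches the `(import $1).network.pkgs` and `(import $1).network.modules` accesses in the wrapper above:

```nix
# Hypothetical network file (e.g. k8s-staging.nix) matching the wrapper above.
# The exact shape is up to your repo; the wrapper only expects the attributes
# network.pkgs and network.modules to exist.
{
  network = {
    pkgs = import ../../common/nixpkgs.nix { version = "18.09"; };
    modules = [
      ./manifests/namespaces.nix   # hypothetical module setting kubernixos.manifests
      ./manifests/deployments.nix
    ];
  };
}
```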