terraform-provider-kops


terraform-provider-kops brings kOps into terraform in a fully managed way, enabling idempotency through direct integration with the kOps API:

  • No local_exec
  • No yaml templating
  • No CLI invocations

... just pure go code.

Currently using kOps v1.26.4 and compatible with terraform 0.15 and higher.

NOTES

  • For now, provisioning the network is not supported. The network must be created separately and passed to the provider through the cluster's network_id attribute and each subnet's provider_id attribute (see the sketch after these notes).
  • The provider has only been tested with AWS and with Calico and Cilium networking. If you use it with another cloud or networking provider, please let us know so that we can help troubleshoot if necessary and update the docs.
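
A minimal sketch of wiring an externally managed network into the cluster, assuming the network is created with the AWS provider (the aws_vpc/aws_subnet resources, CIDR blocks and zone below are illustrative, not part of this provider):

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private_0" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.0.0/19"
  availability_zone = "eu-west-1a"
}

resource "kops_cluster" "cluster" {
  name       = "cluster.example.com"
  # the externally created VPC
  network_id = aws_vpc.main.id

  subnet {
    name        = "private-0"
    type        = "Private"
    # the externally created subnet and its zone
    provider_id = aws_subnet.private_0.id
    zone        = aws_subnet.private_0.availability_zone
  }

  # ...
}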

Why use it

kOps is an amazing tool but it can be challenging to integrate into an IaC (infrastructure as code) stack.

Typical solutions usually involve running the kOps CLI in shell scripts or generating kOps templates manually and force-syncing them with the kOps store.

In most cases, getting something idempotent is difficult because you need to keep track of the cluster state yourself and are responsible, for example, for deleting obsolete instance groups.

This is where terraform shines: state management. This provider takes care of creating, updating and deleting instance groups as they evolve over time.

Even though kOps provides kops update cluster --target terraform to generate the terraform configuration for a kOps cluster, it is still necessary to run kops rolling-update cluster to recycle instance groups when something changes in the cluster. With this provider, all of this is taken care of and you should never need to invoke kOps manually.

How does it work

The provider exposes resources to declare the desired state of the cluster: kops_cluster and kops_instance_group.

The provider also exposes data sources to fetch the state of the cluster and use it in your terraform code.
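
For instance, a minimal sketch reading an existing cluster back through a data source (the kops_cluster data source is assumed here to mirror the attributes of the kops_cluster resource shown below; check the docs folder for the exact schema):

data "kops_cluster" "cluster" {
  name = "cluster.example.com"
}

output "kubernetes_version" {
  # expose the version the cluster was created with
  value = data.kops_cluster.cluster.kubernetes_version
}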

Finally, a special resource, kops_cluster_updater, takes care of the cluster lifecycle.

The provider configuration holds cloud provider authentication settings; currently only AWS is supported.

Docs

The full documentation is available in the docs folder or on the terraform registry provider page.

Installing the provider

To install the provider, add it to the terraform required_providers block.

terraform {
  required_providers {
    kops = {
      source  = "eddycharly/kops"
    }
  }
}

Building the provider

To build the provider, clone this repository and run the following command:

make all

If you want to install the provider after building it, run the following command instead (works on Linux and macOS):

make install

Using the provider

To use the provider you will need to register it in your terraform code:

terraform {
  required_providers {
    kops = {
      source  = "github/eddycharly/kops"
      version = "0.0.1"
    }
  }
}

provider "kops" {
  state_store = "s3://cluster.example.com"
  // optionally set up your cloud provider access config
  aws {
    profile = "example_profile"
  }
}

Example usage

locals {
  masterType  = "t3.medium"
  nodeType    = "t3.medium"
  clusterName = "cluster.example.com"
  dnsZone     = "example.com"
  vpcId       = "vpc-id"
  privateSubnets = [
    { subnetId = "private-subnet-0", zone = "zone-0" },
    { subnetId = "private-subnet-1", zone = "zone-1" },
    { subnetId = "private-subnet-2", zone = "zone-2" }
  ]
  utilitySubnets = [
    { subnetId = "utility-subnet-0", zone = "zone-0" },
    { subnetId = "utility-subnet-1", zone = "zone-1" },
    { subnetId = "utility-subnet-2", zone = "zone-2" }
  ]
}

resource "kops_cluster" "cluster" {
  name               = local.clusterName
  admin_ssh_key      = file("${path.module}/../dummy_ssh.pub")
  kubernetes_version = "stable"
  dns_zone           = local.dnsZone
  network_id         = local.vpcId

  cloud_provider {
    aws {}
  }

  iam {
    allow_container_registry = true
  }

  networking {
    calico {}
  }

  topology {
    masters = "private"
    nodes   = "private"
    dns {
      type = "Private"
    }
  }

  # private subnets
  subnet {
    name        = "private-0"
    type        = "Private"
    provider_id = local.privateSubnets[0].subnetId
    zone        = local.privateSubnets[0].zone
  }
  subnet {
    name        = "private-1"
    type        = "Private"
    provider_id = local.privateSubnets[1].subnetId
    zone        = local.privateSubnets[1].zone
  }
  subnet {
    name        = "private-2"
    type        = "Private"
    provider_id = local.privateSubnets[2].subnetId
    zone        = local.privateSubnets[2].zone
  }
  subnet {
    name        = "utility-0"
    type        = "Utility"
    provider_id = local.utilitySubnets[0].subnetId
    zone        = local.utilitySubnets[0].zone
  }
  subnet {
    name        = "utility-1"
    type        = "Utility"
    provider_id = local.utilitySubnets[1].subnetId
    zone        = local.utilitySubnets[1].zone
  }
  subnet {
    name        = "utility-2"
    type        = "Utility"
    provider_id = local.utilitySubnets[2].subnetId
    zone        = local.utilitySubnets[2].zone
  }

  # etcd clusters
  etcd_cluster {
    name = "main"
    member {
      name           = "master-0"
      instance_group = "master-0"
    }
    member {
      name           = "master-1"
      instance_group = "master-1"
    }
    member {
      name           = "master-2"
      instance_group = "master-2"
    }
  }
  etcd_cluster {
    name = "events"
    member {
      name           = "master-0"
      instance_group = "master-0"
    }
    member {
      name           = "master-1"
      instance_group = "master-1"
    }
    member {
      name           = "master-2"
      instance_group = "master-2"
    }
  }
}

resource "kops_instance_group" "master-0" {
  cluster_name = kops_cluster.cluster.id
  name         = "master-0"
  role         = "Master"
  min_size     = 1
  max_size     = 1
  machine_type = local.masterType
  subnets      = ["private-0"]
}

resource "kops_instance_group" "master-1" {
  cluster_name = kops_cluster.cluster.id
  name         = "master-1"
  role         = "Master"
  min_size     = 1
  max_size     = 1
  machine_type = local.masterType
  subnets      = ["private-1"]
}

resource "kops_instance_group" "master-2" {
  cluster_name = kops_cluster.cluster.id
  name         = "master-2"
  role         = "Master"
  min_size     = 1
  max_size     = 1
  machine_type = local.masterType
  subnets      = ["private-2"]
}

resource "kops_instance_group" "node-0" {
  cluster_name = kops_cluster.cluster.id
  name         = "node-0"
  role         = "Node"
  min_size     = 1
  max_size     = 2
  machine_type = local.nodeType
  subnets      = ["private-0"]
}

resource "kops_instance_group" "node-1" {
  cluster_name = kops_cluster.cluster.id
  name         = "node-1"
  role         = "Node"
  min_size     = 1
  max_size     = 2
  machine_type = local.nodeType
  subnets      = ["private-1"]
}

resource "kops_instance_group" "node-2" {
  cluster_name = kops_cluster.cluster.id
  name         = "node-2"
  role         = "Node"
  min_size     = 1
  max_size     = 2
  machine_type = local.nodeType
  subnets      = ["private-2"]
}

resource "kops_cluster_updater" "updater" {
  cluster_name = kops_cluster.cluster.id

  keepers = {
    cluster  = kops_cluster.cluster.revision
    master-0 = kops_instance_group.master-0.revision
    master-1 = kops_instance_group.master-1.revision
    master-2 = kops_instance_group.master-2.revision
    node-0   = kops_instance_group.node-0.revision
    node-1   = kops_instance_group.node-1.revision
    node-2   = kops_instance_group.node-2.revision
  }
}
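
The kops_cluster_updater resource is what actually applies pending changes and rolls the cluster: its keepers map references the revision attribute of the cluster and of every instance group, so a change to any of them retriggers the updater on the next terraform apply.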

More examples are available in the /examples folder.

Importing an existing cluster

You can import an existing cluster by creating a kops_cluster configuration and running the terraform import command:

  1. Create a terraform configuration:

    provider "kops" {
      state_store = "s3://cluster.example.com"
    }
    
    resource "kops_cluster" "cluster" {
      name        = "cluster.example.com"
      
      // ....
    }
  2. Run terraform import:

    terraform import kops_cluster.cluster cluster.example.com

Importing an existing instance group

You can import an existing instance group by creating a kops_instance_group configuration and running the terraform import command:

  1. Create a terraform configuration:

    provider "kops" {
      state_store = "s3://cluster.example.com"
    }
    
    resource "kops_instance_group" "ig-0" {
      cluster_name = "cluster.example.com"
      name         = "ig-0"
      
      // ....
    }
  2. Run terraform import:

    terraform import kops_instance_group.ig-0 cluster.example.com/ig-0

NOTE: the ID of the instance group to be imported must be given in the "cluster name/instance group name" format.

Getting kubeconfig file

To retrieve the kubeconfig file for the cluster, run the following command:

kops export kubecfg --admin --name cluster.example.com --state s3://cluster.example.com
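
If you prefer to consume the cluster credentials from terraform instead, the provider also exposes kubeconfig data through a data source; a minimal sketch, assuming a kops_kube_config data source with a server attribute (names are assumptions, check the docs folder for the exact schema):

data "kops_kube_config" "kube_config" {
  cluster_name = "cluster.example.com"
}

output "api_server" {
  # assumed attribute exposing the Kubernetes API endpoint
  value = data.kops_kube_config.kube_config.server
}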
