This is the CloudCasa provider for Terraform.
It allows you to install the CloudCasa agent and manage backups in your Kubernetes cluster using Terraform.
- Terraform v1.x
- A CloudCasa API key - Visit CloudCasa to sign up and create an API key under Configuration -> API Keys
- Go v1.18.x (to build the provider plugin)
- kubectl (for CloudCasa agent installation)
Below is a small example of how to initialize the provider, install the CloudCasa agent, and take a snapshot of a cluster. For more details and examples, check the docs directory.
In your Terraform manifest, create and configure the provider:
terraform {
  required_providers {
    cloudcasa = {
      version = "1.0.0"
      source  = "cloudcasa.io/cloudcasa/cloudcasa"
    }
  }
}
provider "cloudcasa" {
apikey = "API_KEY_HERE"
}
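To avoid hardcoding the key, you can pass it in through an ordinary Terraform input variable instead. A minimal sketch (the variable name is arbitrary):

variable "cloudcasa_apikey" {
  type      = string
  sensitive = true
}

provider "cloudcasa" {
  apikey = var.cloudcasa_apikey
}

Supply the value at run time, for example via the TF_VAR_cloudcasa_apikey environment variable or a -var argument.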
A cloudcasa_kubecluster resource represents a Kubernetes cluster. You can import an existing CloudCasa cluster using terraform import or define a new cluster.

To automatically install the CloudCasa agent on a cluster, set auto_install to true. The provider will apply the agent spec using the KUBECONFIG environment variable to find the cluster context.
resource "cloudcasa_kubecluster" "testcluster" {
name = "test_terraform_cluster"
auto_install = true
}
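For example, assuming your kubeconfig lives at the default path:

export KUBECONFIG=~/.kube/config
terraform apply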
The cloudcasa_kubebackup resource covers both snapshots and copy backups. The ID of a valid CloudCasa kubecluster is required.

If run_after_create is true, the backup is considered Adhoc and does not require a policy ID. With this setting the backup will run every time you run terraform apply, even if it has already been created, and Terraform will wait up to 5 minutes for the job to complete.
You can set most options that are available in the CloudCasa UI.
For example, here is a simple Adhoc snapshot job:
resource "cloudcasa_kubebackup" "adhoc_snapshot_example" {
name = "cloudcasa_adhoc_snapshot_example"
kubecluster_id = resource.cloudcasa_kubecluster.example.id
all_namespaces = true
snapshot_persistent_volumes = true
copy_persistent_volumes = false
run_after_create = true
}
For more examples, see the docs directory.
Policies are required for backups that do not have run_after_create set. A policy is created by defining a Cron schedule for the job; for example, the schedule "30 0 * * MON,FRI" below runs at 00:30 on Mondays and Fridays:
resource "cloudcasa_policy" "testpolicy" {
name = "test_terraform_policy"
timezone = "America/New_York"
schedules = [
{
retention = 30,
cron_spec = "30 0 * * MON,FRI",
locked = false,
}
]
}
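A scheduled backup can then reference the policy instead of setting run_after_create. The sketch below assumes the kubebackup attribute that attaches a policy is named policy_id; check the docs directory for the exact schema:

resource "cloudcasa_kubebackup" "scheduled_snapshot_example" {
  name                        = "cloudcasa_scheduled_snapshot_example"
  kubecluster_id              = resource.cloudcasa_kubecluster.testcluster.id
  policy_id                   = resource.cloudcasa_policy.testpolicy.id # assumed attribute name
  all_namespaces              = true
  snapshot_persistent_volumes = true
}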
You can import existing CloudCasa resources to manage them in Terraform using terraform import. For example, assume we have created a policy named "test_manual_policy" in the CloudCasa UI. First, create an empty resource for this policy:
resource "cloudcasa_policy" "importtest" {
name = "test_manual_policy" # Name of the policy resource in CloudCasa
}
Get the ID from the CloudCasa UI (or casa) and run terraform import <resource_state_path> <CC ID>:
terraform import cloudcasa_policy.importtest 64948e5160a55cbabb5625f5
After importing, Terraform will try to apply any local changes to the CloudCasa resource the next time you apply. Check the imported resource using terraform state show <resource_state_path>, update the configuration values to match, and make any desired changes. Changes in Terraform will always supersede changes in CloudCasa!
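For example, to inspect the imported policy:

terraform state show cloudcasa_policy.importtest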
For Terraform v1.5+, you can add an import block directly to the Terraform configuration to avoid running terraform import manually each time:
import {
  to = cloudcasa_policy.importtest
  id = "64948e5160a55cbabb5625f5"
}

resource "cloudcasa_policy" "importtest" {
  name = "test_manual_policy"
}
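With the import block in place, the import is previewed and performed like any other change:

terraform plan
terraform apply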