diff --git a/v0.9.0/search/search_index.json b/v0.9.0/search/search_index.json index 864c54bfe..f74d2acc8 100644 --- a/v0.9.0/search/search_index.json +++ b/v0.9.0/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"What is Claudie","text":"

Claudie is a platform for managing multi-cloud and hybrid-cloud Kubernetes clusters. These Kubernetes clusters can mix and match nodepools from various cloud providers, e.g. a single cluster can have a nodepool in AWS, another in GCP and another one on-premises. This is our opinionated way to build multi-cloud and hybrid-cloud Kubernetes infrastructure. On top of that Claudie supports Cluster Autoscaler on the managed clusters.

"},{"location":"#vision","title":"Vision","text":"

The purpose of Claudie is to become the final Kubernetes engine you'll ever need. It aims to build clusters that leverage features and costs across multiple cloud vendors and on-prem datacenters. A Kubernetes that you won't ever need to migrate away from.

"},{"location":"#use-cases","title":"Use cases","text":"

Claudie has been built as an answer to the following Kubernetes challenges:

You can read more here.

"},{"location":"#features","title":"Features","text":"

Claudie covers you with the following features and functionalities:

See more in the How Claudie works section.

"},{"location":"#what-to-do-next","title":"What to do next","text":"

If you are not sure where to go next, you can simply start with our Getting Started Guide or read our documentation sitemap.

If you need help or want to have a chat with us, feel free to join our channel in the Kubernetes Slack workspace (get an invite here).

"},{"location":"CHANGELOG/changelog-0.1.x/","title":"Claudie v0.1","text":"

The first official release of Claudie

"},{"location":"CHANGELOG/changelog-0.1.x/#deployment","title":"Deployment","text":"

To deploy the Claudie v0.1.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

    If valid, the output is one of the following, depending on the archive downloaded:

    claudie.tar.gz: OK\n

    or

    claudie.zip: OK\n

    or both.

  3. Lastly, unpack the archive and deploy using kubectl

    We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

    kubectl apply -k .\n
"},{"location":"CHANGELOG/changelog-0.1.x/#v013","title":"v0.1.3","text":""},{"location":"CHANGELOG/changelog-0.1.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.1.x/#bugfixes","title":"Bugfixes","text":"

No bugfixes since the last release.

"},{"location":"CHANGELOG/changelog-0.1.x/#known-issues","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.1.x/#v012","title":"v0.1.2","text":""},{"location":"CHANGELOG/changelog-0.1.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.1.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.1.x/#known-issues_1","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.1.x/#v011","title":"v0.1.1","text":""},{"location":"CHANGELOG/changelog-0.1.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.1.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.1.x/#known-issues_2","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.1.x/#v010","title":"v0.1.0","text":""},{"location":"CHANGELOG/changelog-0.1.x/#features_3","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.1.x/#bugfixes_3","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.1.x/#known-issues_3","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.2.x/","title":"Claudie v0.2","text":"

Due to a breaking change in the input manifest schema, v0.2.x will not be backwards compatible with v0.1.x.

"},{"location":"CHANGELOG/changelog-0.2.x/#deployment","title":"Deployment","text":"

To deploy the Claudie v0.2.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

    If valid, the output is one of the following, depending on the archive downloaded:

    claudie.tar.gz: OK\n

    or

    claudie.zip: OK\n

    or both.

  3. Lastly, unpack the archive and deploy using kubectl

    We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

    kubectl apply -k .\n
"},{"location":"CHANGELOG/changelog-0.2.x/#v020","title":"v0.2.0","text":""},{"location":"CHANGELOG/changelog-0.2.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.2.x/#bugfixes","title":"Bugfixes","text":"

No bugfixes since the last release.

"},{"location":"CHANGELOG/changelog-0.2.x/#known-issues","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.2.x/#v021","title":"v0.2.1","text":""},{"location":"CHANGELOG/changelog-0.2.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.2.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.2.x/#known-issues_1","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.2.x/#v022","title":"v0.2.2","text":""},{"location":"CHANGELOG/changelog-0.2.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.2.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.3.x/","title":"Claudie v0.3","text":"

Due to a breaking change in the input manifest schema, v0.3.x will not be backwards compatible with v0.2.x.

"},{"location":"CHANGELOG/changelog-0.3.x/#deployment","title":"Deployment","text":"

To deploy the Claudie v0.3.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

    If valid, the output is one of the following, depending on the archive downloaded:

    claudie.tar.gz: OK\n

    or

    claudie.zip: OK\n

    or both.

  3. Lastly, unpack the archive and deploy using kubectl

    We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

    kubectl apply -k .\n
"},{"location":"CHANGELOG/changelog-0.3.x/#v030","title":"v0.3.0","text":""},{"location":"CHANGELOG/changelog-0.3.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.3.x/#bugfixes","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.3.x/#known-issues","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.3.x/#v031","title":"v0.3.1","text":""},{"location":"CHANGELOG/changelog-0.3.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.3.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.3.x/#known-issues_1","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.3.x/#v032","title":"v0.3.2","text":""},{"location":"CHANGELOG/changelog-0.3.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.3.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.3.x/#known-issues_2","title":"Known issues","text":"

No known issues since the last release.

"},{"location":"CHANGELOG/changelog-0.4.x/","title":"Claudie v0.4","text":"

Due to a breaking change in the input manifest schema, v0.4.x will not be backwards compatible with v0.3.x.

"},{"location":"CHANGELOG/changelog-0.4.x/#deployment","title":"Deployment","text":"

To deploy the Claudie v0.4.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

    If valid, the output is one of the following, depending on the archive downloaded:

    claudie.tar.gz: OK\n

    or

    claudie.zip: OK\n

    or both.

  3. Lastly, unpack the archive and deploy using kubectl

    We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

    kubectl apply -k .\n
"},{"location":"CHANGELOG/changelog-0.4.x/#v040","title":"v0.4.0","text":""},{"location":"CHANGELOG/changelog-0.4.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.4.x/#bugfixes","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.4.x/#known-issues","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.4.x/#v041","title":"v0.4.1","text":""},{"location":"CHANGELOG/changelog-0.4.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.4.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.4.x/#known-issues_1","title":"Known issues","text":"

No known issues since the last release

"},{"location":"CHANGELOG/changelog-0.4.x/#v042","title":"v0.4.2","text":""},{"location":"CHANGELOG/changelog-0.4.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.4.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.4.x/#knownissues","title":"KnownIssues","text":"

No new known issues since the last release

"},{"location":"CHANGELOG/changelog-0.5.x/","title":"Claudie v0.5","text":"

Due to a breaking change caused by swapping the CNI used in the Kubernetes cluster, v0.5.x will not be backwards compatible with v0.4.x.

"},{"location":"CHANGELOG/changelog-0.5.x/#deployment","title":"Deployment","text":"

To deploy Claudie v0.5.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

    If valid, the output is one of the following, depending on the archive downloaded:

    claudie.tar.gz: OK\n

    or

    claudie.zip: OK\n

    or both.

  3. Lastly, unpack the archive and deploy using kubectl

    We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

    kubectl apply -k .\n
"},{"location":"CHANGELOG/changelog-0.5.x/#v050","title":"v0.5.0","text":""},{"location":"CHANGELOG/changelog-0.5.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.5.x/#known-issues","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.5.x/#v051","title":"v0.5.1","text":""},{"location":"CHANGELOG/changelog-0.5.x/#bug-fixes","title":"Bug fixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/","title":"Claudie v0.6","text":"

Due to a breaking change in the Terraform files, v0.6.x will not be backwards compatible with v0.5.x.

"},{"location":"CHANGELOG/changelog-0.6.x/#deployment","title":"Deployment","text":"

To deploy Claudie v0.6.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

If valid, the output is one of the following, depending on the archive downloaded:

```sh\nclaudie.tar.gz: OK\n```\n

or

```sh\nclaudie.zip: OK\n```\n

or both.

  1. Lastly, unpack the archive and deploy using kubectl

We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

```sh\nkubectl apply -k .\n```\n
"},{"location":"CHANGELOG/changelog-0.6.x/#v060","title":"v0.6.0","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#other","title":"Other","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v061","title":"v0.6.1","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v062","title":"v0.6.2","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v063","title":"v0.6.3","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_3","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v064","title":"v0.6.4","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features_3","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_4","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v065","title":"v0.6.5","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features_4","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_5","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v066","title":"v0.6.6","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features_5","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_6","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.7.x/","title":"Claudie v0.7","text":"

Due to using the latest version of Longhorn, v0.7.x will not be backwards compatible with v0.6.x.

"},{"location":"CHANGELOG/changelog-0.7.x/#deployment","title":"Deployment","text":"

To deploy Claudie v0.7.X, please:

  1. Download claudie.yaml from the release page

  2. Verify the checksum with sha256 (optional)

    We provide checksums in claudie_checksum.txt; you can verify the downloaded YAML files against the provided checksums.

  3. Install claudie using kubectl

We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it.

kubectl apply -f https://github.com/berops/claudie/releases/latest/download/claudie.yaml\n

To further harden claudie, you may want to deploy our pre-defined network policies:

# for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
# other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n

"},{"location":"CHANGELOG/changelog-0.7.x/#v070","title":"v0.7.0","text":"

Upgrade procedure: Before upgrading Claudie, upgrade Longhorn to 1.6.x as per this guide. In most cases this will boil down to running the following command: kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml.

"},{"location":"CHANGELOG/changelog-0.7.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.7.x/#bugfixes","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.7.x/#v071","title":"v0.7.1","text":"

Migrate from the legacy package repositories apt.kubernetes.io, yum.kubernetes.io to the Kubernetes community-hosted repositories pkgs.k8s.io. A detailed how-to can be found at https://kubernetes.io/blog/2023/08/31/legacy-package-repository-deprecation/

Kubernetes version 1.24 is no longer supported; 1.25.x, 1.26.x and 1.27.x are the currently supported versions.

"},{"location":"CHANGELOG/changelog-0.7.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.7.x/#v072","title":"v0.7.2","text":""},{"location":"CHANGELOG/changelog-0.7.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.7.x/#v073","title":"v0.7.3","text":""},{"location":"CHANGELOG/changelog-0.7.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.7.x/#v074","title":"v0.7.4","text":""},{"location":"CHANGELOG/changelog-0.7.x/#bugfixes_3","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.7.x/#v075","title":"v0.7.5","text":""},{"location":"CHANGELOG/changelog-0.7.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.7.x/#bugifxes","title":"Bugifxes","text":""},{"location":"CHANGELOG/changelog-0.8.x/","title":"Claudie v0.8","text":"

Due to updated Terraform files, clusters built with Claudie version v0.7.x will be forced to be recreated in v0.8.x.

Nodepool and cluster names that do not meet the new length requirements (14 characters for nodepool names and 28 characters for cluster names) must be adjusted, otherwise the new length validation will fail. You can achieve a rolling update by adding new nodepools with the new names and then removing the old nodepools before updating to version 0.8.

Before updating, make backups of your data.

"},{"location":"CHANGELOG/changelog-0.8.x/#deployment","title":"Deployment","text":"

To deploy Claudie v0.8.X, please:

  1. Download claudie.yaml from the release page

  2. Verify the checksum with sha256 (optional)

We provide checksums in claudie_checksum.txt; you can verify the downloaded YAML files against the provided checksums.

  1. Install claudie using kubectl

We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it.

kubectl apply -f https://github.com/berops/claudie/releases/latest/download/claudie.yaml\n

To further harden claudie, you may want to deploy our pre-defined network policies:

# for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
# other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n

"},{"location":"CHANGELOG/changelog-0.8.x/#v080","title":"v0.8.0","text":""},{"location":"CHANGELOG/changelog-0.8.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.8.x/#v081","title":"v0.8.1","text":"

Nodepools with the Genesis Cloud provider will trigger a recreation of the cluster due to the change in Terraform files. Make a backup of your data if your cluster contains Genesis Cloud nodepools.

"},{"location":"CHANGELOG/changelog-0.8.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.8.x/#bugfixes","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.9.x/","title":"Claudie v0.9","text":"

Due to changes to the core of how Claudie works with Terraform files and to the representation of the data in persistent storage, the v0.9.x version will not be backwards compatible with clusters built using previous Claudie versions.

"},{"location":"CHANGELOG/changelog-0.9.x/#most-notable-changes-tldr","title":"Most notable changes (TL;DR)","text":""},{"location":"CHANGELOG/changelog-0.9.x/#experimental","title":"Experimental","text":"

Currently the HTTP proxy is experimental. It is made available by modifying the HTTP_PROXY_MODE value in the Claudie config map in the claudie namespace. The possible values are (on|off|default). Default means that if a Kubernetes cluster uses Hetzner nodepools, it will automatically switch to using the proxy, as we have encountered the most bad-IP issues with Hetzner. By default, the proxy is turned off.
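
A minimal sketch of switching the proxy mode, assuming the value lives in the ConfigMap shipped with the Claudie release manifest (the exact ConfigMap name is an assumption; check your deployment):

# List the ConfigMaps in the claudie namespace to find the one holding HTTP_PROXY_MODE\nkubectl get configmaps -n claudie\n# Edit the value in place; allowed values are on, off and default (ConfigMap name is illustrative)\nkubectl edit configmap claudie-config -n claudie\n# Restart the Claudie deployments so they pick up the new value (may not be required in every setup)\nkubectl rollout restart deployment -n claudie\n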

It should be noted that the proxy is still in an experimental phase, where the API for interacting with the proxy may change in the future. Therefore, clusters using this feature in this release run the risk of being backwards incompatible with future 0.9.x releases, which will further stabilise the proxy API.

"},{"location":"CHANGELOG/changelog-0.9.x/#deployment","title":"Deployment","text":"

To deploy Claudie v0.9.X, please:

  1. Download claudie.yaml from the release page

  2. Verify the checksum with sha256 (optional)

We provide checksums in claudie_checksum.txt; you can verify the downloaded YAML files against the provided checksums.

  1. Install Claudie using kubectl

We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it.

kubectl apply -f https://github.com/berops/claudie/releases/latest/download/claudie.yaml\n

To further harden Claudie, you may want to deploy our pre-defined network policies:

# for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
# other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n

"},{"location":"CHANGELOG/changelog-0.9.x/#v090","title":"v0.9.0","text":""},{"location":"CHANGELOG/changelog-0.9.x/#whats-changed","title":"What's changed","text":""},{"location":"CHANGELOG/changelog-0.9.x/#experimental_1","title":"Experimental","text":""},{"location":"CHANGELOG/changelog-0.9.x/#bug-fixes","title":"Bug fixes","text":""},{"location":"autoscaling/autoscaling/","title":"Autoscaling in Claudie","text":"

Claudie supports autoscaling by installing the Cluster Autoscaler for Claudie-made clusters, together with a custom implementation of the external gRPC cloud provider, called autoscaler-adapter in the Claudie context. This, together with the Cluster Autoscaler, is automatically managed by Claudie for any cluster which has at least one node pool defined with the autoscaler field. What's more, you can change the node pool specification freely from an autoscaler configuration to a static count or vice versa. Claudie will seamlessly configure the Cluster Autoscaler, or even remove it when it is no longer needed.
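
As a sketch only (the layout follows the input manifest examples elsewhere in these docs, but the concrete provider, region and sizing values are assumptions), a dynamic nodepool opts into autoscaling by replacing the static count field with an autoscaler block:

nodePools:\n  dynamic:\n    - name: autoscaled-compute\n      providerSpec:\n        name: hetzner-1        # provider name is an assumption\n        region: nbg1\n        zone: nbg1-dc3\n      # the autoscaler block replaces the static count field\n      autoscaler:\n        min: 1\n        max: 5\n      serverType: cpx11\n      image: ubuntu-22.04\n      storageDiskSize: 50\n

Reintroducing count and removing the autoscaler block switches the nodepool back to a static size, and Claudie reconfigures or removes the Cluster Autoscaler accordingly.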

"},{"location":"autoscaling/autoscaling/#what-triggers-a-scale-up","title":"What triggers a scale up","text":"

The scale up is triggered if there are pods in the cluster, which are unschedulable and

However, if the pods' resource requests are larger than any new node would offer, the scale up will not be triggered. The cluster is scanned every 10 seconds for these pods, to ensure a quick response to the cluster's needs. For more information, please have a look at the official Cluster Autoscaler documentation.

"},{"location":"autoscaling/autoscaling/#what-triggers-a-scale-down","title":"What triggers a scale down","text":"

The scale down is triggered if all of the following conditions are met

For more information, please have a look at official Cluster Autoscaler documentation.

"},{"location":"autoscaling/autoscaling/#architecture","title":"Architecture","text":"

As stated earlier, Claudie deploys Cluster Autoscaler and Autoscaler Adapter for every Claudie-made cluster which enables it. These components are deployed within the same cluster as Claudie.

"},{"location":"autoscaling/autoscaling/#considerations","title":"Considerations","text":"

As Claudie just extends the Cluster Autoscaler, it is important that you follow its best practices. Furthermore, as the number of nodes in autoscaled node pools can be volatile, you should carefully plan out how you will use the storage on such node pools. Longhorn's support for the Cluster Autoscaler is still in an experimental phase (see the Longhorn documentation).

"},{"location":"claudie-workflow/claudie-workflow/","title":"Claudie","text":""},{"location":"claudie-workflow/claudie-workflow/#a-single-platform-for-multiple-clouds","title":"A single platform for multiple clouds","text":""},{"location":"claudie-workflow/claudie-workflow/#microservices","title":"Microservices","text":""},{"location":"claudie-workflow/claudie-workflow/#data-stores","title":"Data stores","text":""},{"location":"claudie-workflow/claudie-workflow/#tools-used","title":"Tools used","text":""},{"location":"claudie-workflow/claudie-workflow/#manager","title":"Manager","text":"

The manager is the brain and main entry point for Claudie. To build clusters, users and services submit their configs to the manager service. The manager creates the desired state and schedules a number of jobs to be executed in order to achieve the desired state based on the current state. The jobs are then picked up by the builder service.

For the API see the GRPC definitions.

"},{"location":"claudie-workflow/claudie-workflow/#flow","title":"Flow","text":"

Each newly created manifest starts in the Pending state. Pending manifests are periodically checked and, based on the specification provided in the applied configs, the desired state for each cluster is created along with the tasks to be performed to achieve it, after which the manifest is moved to the Scheduled state. Tasks from Scheduled manifests are picked up by builder services, which gradually build the desired state. From this state, the manifest can end up in the Done or Error state. Any changes to the input manifest while it is in the Scheduled state will only be reflected after it has moved to the Done state, after which the cycle repeats.

Each cluster has a current state and a desired state, based on which tasks are created. The desired state is created only once, when changes to the configuration are detected. Several tasks can be created that will gradually converge the current state to the desired state. Each time a task is picked up by the builder service, the relevant parts of the current state are transferred to the task so that each task has up-to-date information about the current infrastructure, and it is up to the builder service to build/modify/delete the missing pieces in the picked-up task.

Once a task is done building, either successfully or in error, the current state should be updated by the builder service so that the manager has up-to-date information about the current state of the infrastructure. When the manager receives a request to update the current state, it transfers the relevant information to the desired state that was created at the beginning, before the tasks were scheduled. This is the only point where the desired state is updated, and only information from the current state is transferred (such as newly built nodes, IPs, etc.). After all tasks have finished successfully, the current and desired state should match.

"},{"location":"claudie-workflow/claudie-workflow/#rolling-updates","title":"Rolling updates","text":"

Unless otherwise specified, the default is to use the external templates located at https://github.com/berops/claudie-config to build the infrastructure for the dynamic nodepools. The templates provide reasonable defaults that anyone can use to build multi-provider clusters.

As we understand that someone may need more specific scenarios, we allow these external templates to be overridden by the user; see https://docs.claudie.io/latest/input-manifest/external-templates/ for more information. With the ability to specify the templates used when building the infrastructure of the InputManifest, there is one common scenario that we decided should be handled by the manager service: rolling updates.

Rolling updates of nodepools are performed when a change to a provider's external templates is registered. The manager then checks that the external repository of the new templates exists and uses them to perform a rolling update of the already built infrastructure. The rolling update is performed in the following steps

If a failure occurs during the rolling update of a single nodepool, the state is rolled back to the last possible working state. Rolling updates have a retry strategy that keeps retrying the rolling update until it succeeds.

If the rollback to the last working state fails, it will also be retried indefinitely, in which case it is up to the claudie user to repair the cluster so that the rolling update can continue.

The individual states of the Input Manifest and how they are processed within manager are further visually described in the following sections.

"},{"location":"claudie-workflow/claudie-workflow/#pending-state","title":"Pending State","text":""},{"location":"claudie-workflow/claudie-workflow/#scheduled-state","title":"Scheduled State","text":""},{"location":"claudie-workflow/claudie-workflow/#doneerror-state","title":"Done/Error State","text":""},{"location":"claudie-workflow/claudie-workflow/#builder","title":"Builder","text":"

The builder processes tasks scheduled by the manager, gradually building the desired state of the infrastructure. It communicates with the terraformer, ansibler, kube-eleven and kuber services in order to manage the infrastructure.

"},{"location":"claudie-workflow/claudie-workflow/#flow_1","title":"Flow","text":""},{"location":"claudie-workflow/claudie-workflow/#terraformer","title":"Terraformer","text":"

Terraformer creates or destroys infrastructure via Terraform calls.

For the API see the GRPC definitions.

"},{"location":"claudie-workflow/claudie-workflow/#ansibler","title":"Ansibler","text":"

Ansibler uses Ansible to:

For the API see the GRPC definitions.

"},{"location":"claudie-workflow/claudie-workflow/#kube-eleven","title":"Kube-eleven","text":"

Kube-eleven uses KubeOne to spin up Kubernetes clusters out of the spawned and pre-configured infrastructure.

For the API see the GRPC definitions.

"},{"location":"claudie-workflow/claudie-workflow/#kuber","title":"Kuber","text":"

Kuber manipulates the cluster resources using kubectl.

For the API see the GRPC definitions.

"},{"location":"claudie-workflow/claudie-workflow/#claudie-operator","title":"Claudie-operator","text":"

Claudie-operator is a layer between the user and Claudie. It is an InputManifest Custom Resource Definition controller that communicates with the manager service to relay changes made to the config by the user.

"},{"location":"claudie-workflow/claudie-workflow/#flow_2","title":"Flow","text":""},{"location":"commands/commands/","title":"Command Cheat Sheet","text":"

In this section, we'll describe kubectl commands to interact with Claudie.

"},{"location":"commands/commands/#monitoring-the-cluster-state","title":"Monitoring the cluster state","text":"

Watch the cluster state in the InputManifest that is provisioned.

watch -n 2 'kubectl get inputmanifests.claudie.io manifest-name -ojsonpath='{.status}' | jq .'\n{\n  \"clusters\": {\n    \"my-super-cluster\": {\n      \"phase\": \"NONE\",\n      \"state\": \"DONE\"\n    }\n  },\n  \"state\": \"DONE\"\n}   \n

"},{"location":"commands/commands/#viewing-the-cluster-metadata","title":"Viewing the cluster metadata","text":"

Each secret created by Claudie has the following labels:

Key Value claudie.io/project Name of the project. claudie.io/cluster Name of the cluster. claudie.io/cluster-id ID of the cluster. claudie.io/output Output type, either kubeconfig or metadata.

Claudie creates a kubeconfig secret in the claudie namespace:

kubectl get secrets -n claudie -l claudie.io/output=kubeconfig\n
NAME                                  TYPE     DATA   AGE\nmy-super-cluster-6ktx6rb-kubeconfig   Opaque   1      134m\n

You can recover kubeconfig for your cluster with the following command:

kubectl get secrets -n claudie -l claudie.io/output=kubeconfig,claudie.io/cluster=$YOUR-CLUSTER-NAME -o jsonpath='{.items[0].data.kubeconfig}' | base64 -d > my-super-cluster-kubeconfig.yaml\n

If you want to connect to your dynamic k8s nodes via SSH, you can recover private SSH key for each nodepool:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_nodepools | map_values(.nodepool_private_key)'\n

To recover public IP of your dynamic k8s nodes to connect to via SSH:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_nodepools | map_values(.node_ips)'\n

You can display the metadata of all dynamic load balancer nodes with:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r .dynamic_load_balancer_nodepools\n

In case you want to connect to your dynamic load balancer nodes via SSH, you can recover private SSH key:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_load_balancer_nodepools | .[]'\n

To recover public IP of your dynamic load balancer nodes to connect to via SSH:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_load_balancer_nodepools | .[] | map_values(.node_ips)'\n

You can display the metadata of all static load balancer nodes with:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r .static_load_balancer_nodepools\n

To display the public IPs and private SSH keys of your static load balancer nodes:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r '.static_load_balancer_nodepools | .[] | map_values(.node_info)'\n

To connect to one of your static load balancer nodes via SSH, you can recover private SSH key:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r '.static_load_balancer_nodepools | .[]'\n
"},{"location":"contributing/contributing/","title":"Contributing","text":""},{"location":"contributing/contributing/#bug-reports","title":"Bug reports","text":"

When you encounter a bug, please create a new issue and use our bug template. Before you submit, please check:

be careful not to include your cloud credentials

"},{"location":"contributing/local-testing/","title":"Local testing of Claudie","text":"

In order to speed up the development, Claudie can be run locally for initial testing purposes. However, it's important to note that running Claudie locally has limitations compared to running it in a Kubernetes cluster.

"},{"location":"contributing/local-testing/#limitations-of-claudie-when-running-locally","title":"Limitations of Claudie when running locally","text":""},{"location":"contributing/local-testing/#claudie-operatorcrd-testing","title":"Claudie Operator/CRD testing","text":"

The Operator component, as well as the CRDs, relies heavily on a Kubernetes cluster. However, with a little hacking, you can test them by creating a local cluster (minikube/kind/...) and exporting the KUBECONFIG environment variable pointing to the local cluster's kubeconfig. Once you start the Claudie Operator, it should pick up the kubeconfig and you can use the local cluster to deploy and test the CRDs.
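
A short sketch of that workflow, assuming kind is used for the local cluster and that the operator is started via its Makefile rule (the CRD path and the make target name are assumptions):

# Create a local cluster (kind is used here as an example)\nkind create cluster --name claudie-dev\n# Export a kubeconfig for the local cluster and point the operator at it\nkind get kubeconfig --name claudie-dev > /tmp/claudie-dev.kubeconfig\nexport KUBECONFIG=/tmp/claudie-dev.kubeconfig\n# Install the CRDs and start the operator (path and make target are assumptions)\nkubectl apply -f ./manifests/crd\nmake operator\n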

"},{"location":"contributing/local-testing/#autoscaling-testing","title":"Autoscaling testing","text":"

Testing or simulating the Claudie autoscaling is not feasible when running Claudie locally because it dynamically deploys Cluster Autoscaler and Autoscaler Adapter in the management cluster.

"},{"location":"contributing/local-testing/#claudie-outputs","title":"Claudie outputs","text":"

Since Claudie generates two types of output per cluster (node metadata and kubeconfig), testing these outputs is not possible because they are created as Kubernetes Secrets.

"},{"location":"contributing/local-testing/#requirements-to-run-claudie-locally","title":"Requirements to run Claudie locally","text":"

As Claudie uses a number of external tools to build and manage clusters, it is important that these tools are installed on your local system.

"},{"location":"contributing/local-testing/#how-to-run-claudie-locally","title":"How to run Claudie locally","text":"

To simplify the deployment of Claudie onto a local system, we recommend using the rules defined in the Makefile.

To start all the datastores, simply run make datastoreStart, which will create containers for each required datastore with preconfigured port-forwarding.

To start all services, run make <service name> in separate shells. If you make changes to the code, apply them by killing the process and starting it again with make <service name>.

"},{"location":"contributing/local-testing/#how-to-test-claudie-locally","title":"How to test Claudie locally","text":"

Once Claudie is up and running, there are three main ways to test it locally.

"},{"location":"contributing/local-testing/#test-claudie-using-testing-framework","title":"Test Claudie using Testing-framework","text":"

You can test Claudie deployed locally via a custom-made testing framework. It was designed to support local testing, so the code itself does not require any changes. However, in order to supply a testing input manifest, you have to create a directory called test-sets in ./testing-framework, which will contain the input manifests. Bear in mind that these manifests are not CRDs; rather, they are raw YAML files as described in /internal/manifest/manifest.go.

This way of testing brings benefits like automatic verification of Longhorn deployment or automatic clean up of the infrastructure upon failure.

To run the testing framework locally, use the make test rule, which will start the testing. If you wish to disable the automatic clean-up, set the environment variable AUTO_CLEAN_UP to FALSE.
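
For example, the two invocations described above might look like this:

# Run the testing framework with automatic clean-up (default behaviour)\nmake test\n# Keep the built infrastructure around after a failure, for debugging\nAUTO_CLEAN_UP=FALSE make test\n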

Example of directory structure:

services/testing-framework/\n\u251c\u2500\u2500 ...\n\u2514\u2500\u2500 test-sets\n    \u2514\u2500\u2500 test-set-dev\n        \u251c\u2500\u2500 1.yaml\n        \u251c\u2500\u2500 2.yaml\n        \u2514\u2500\u2500 3.yaml\n

Example of raw YAML input manifest:

name: TestSetDev\n\nproviders:\n  hetzner:\n    - name: hetzner-1\n      credentials: \"api token\"\n  gcp:\n    - name: gcp-1\n      credentials: |\n        service account key as JSON\n      gcpProject: \"project id\"\n  oci:\n    - name: oci-1\n      privateKey: |\n        -----BEGIN RSA PRIVATE KEY-----\n        ..... put the private key here ....\n        -----END RSA PRIVATE KEY-----\n      keyFingerprint: \"key fingerprint\"\n      tenancyOcid: \"tenancy ocid\"\n      userOcid: \"user ocid\"\n      compartmentOcid: \"compartment ocid\"\n  aws:\n    - name: aws-1\n      accessKey: \"access key\"\n      secretKey: \"secret key\"\n  azure:\n    - name: azure-1\n      subscriptionId: \"subscription id\"\n      tenantId: \"tenant id\"\n      clientId: \"client id\"\n      clientSecret: \"client secret\"\n  hetznerdns:\n    - name: hetznerdns-1\n      apiToken: \"api token\"\n  cloudflare:\n    - name: cloudflare-1\n      apiToken: \"api token\"\n\nnodePools:\n  dynamic:\n    - name: htz-compute\n      providerSpec:\n        name: hetzner-1\n        region: nbg1\n        zone: nbg1-dc3\n      count: 1\n      serverType: cpx11\n      image: ubuntu-22.04\n      storageDiskSize: 50\n\n    - name: hetzner-lb\n      providerSpec:\n        name: hetzner-1\n        region: nbg1\n        zone: nbg1-dc3\n      count: 1\n      serverType: cpx11\n      image: ubuntu-22.04\n\n  static:\n    - name: static-pool\n      nodes:\n        - endpoint: \"192.168.52.1\"\n          username: root\n          privateKey: |\n            -----BEGIN RSA PRIVATE KEY-----\n            ...... put the private key here .....\n            -----END RSA PRIVATE KEY-----\n        - endpoint: \"192.168.52.2\"\n          username: root\n          privateKey: |\n            -----BEGIN RSA PRIVATE KEY-----\n            ...... put the private key here .....\n            -----END RSA PRIVATE KEY-----\n\nkubernetes:\n  clusters:\n    - name: dev-test\n      version: v1.27.0\n      network: 192.168.2.0/24\n      pools:\n        control:\n          - static-pool\n        compute:\n          - htz-compute\n\nloadBalancers:\n  roles:\n    - name: apiserver-lb\n      protocol: tcp\n      port: 6443\n      targetPort: 6443\n      targetPools: \n        - static-pool\n  clusters:\n    - name: miro-lb\n      roles:\n        - apiserver-lb\n      dns:\n        dnsZone: zone.com\n        provider: cloudflare-1\n      targetedK8s: dev-test\n      pools:\n        - hetzner-lb\n
"},{"location":"contributing/local-testing/#test-claudie-using-manual-manifest-injection","title":"Test Claudie using manual manifest injection","text":"

To test Claudie in a more \"manual\" way, you can use the specified GRPC API to inject/delete/modify an input manifest.

When using this technique, you most likely will omit the initial step of the InputManifest being passed through the operator. If this is the case, you will need to add templates to the providers listed in the InputManifest otherwise the workflow will panic at an early stage due to unset templates.

To specify templates you add them to the provider definition as shown in the snippet below:

  hetzner:\n    - name: hetzner-1\n      credentials: \"api token\"\n      templates:\n        repository: \"https://github.com/berops/claudie-config\"\n        path: \"templates/terraformer/hetzner\"\n

We provide ready-to-use Terraform templates that can be used by Claudie at https://github.com/berops/claudie-config. If you would like to use your own, you can fork the repo or write your own templates, and modify the provider definition in the InputManifest to point to them.

"},{"location":"contributing/local-testing/#deploy-claudie-in-the-local-cluster-for-testing","title":"Deploy Claudie in the local cluster for testing","text":"

Claudie can be also tested on a local cluster by following these steps.

  1. Spin up a local cluster using a tool like Kind, Minikube, or any other preferred method.

  2. Build the images for Claudie from the current source code by running the command make containerimgs. This command will build all the necessary images for Claudie and assign them a new tag: a short hash of the most recent commit.

  3. Update the new image tag in the relevant kustomization.yaml files. These files can be found in the ./manifests directory. Additionally, set the imagePullPolicy to Never (a kustomize sketch follows these steps).

  4. Import the built images into your local cluster. This step will vary depending on the specific tool you're using for the local cluster. Refer to the documentation of the cluster tool for instructions on importing custom images.

  5. Apply the Claudie manifests to the local cluster.

By following these steps, you can set up and test Claudie on a local cluster using the newly built images. Remember that these steps will need to be repeated whenever you make changes to the source code.
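
A minimal kustomize sketch for step 3, assuming a service whose Deployment is named builder and whose image is referenced as ghcr.io/berops/builder (both names are assumptions; adjust them to the actual entries under ./manifests):

# hypothetical excerpt from a kustomization.yaml under ./manifests\nimages:\n  - name: ghcr.io/berops/builder   # image name as referenced by the Deployment (assumed)\n    newTag: abc1234                # short commit hash produced by make containerimgs\npatches:\n  - target:\n      kind: Deployment\n      name: builder                # Deployment name is an assumption\n    patch: |-\n      - op: add\n        path: /spec/template/spec/containers/0/imagePullPolicy\n        value: Never\n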

"},{"location":"contributing/release/","title":"How to release a new version of Claudie","text":"

The release process of Claudie consists of a few manual steps and a few automated steps.

"},{"location":"contributing/release/#manual-steps","title":"Manual steps","text":"

Whoever is responsible for creating a new release has to:

  1. Write a new entry to a relevant Changelog document
  2. Add release notes to the Releases page
  3. Publish a release
"},{"location":"contributing/release/#automated-steps","title":"Automated steps","text":"

After a new release is published, a release pipeline and a release-docs pipeline run.

A release pipeline consists of the following steps:

  1. Build new images tagged with the release tag
  2. Push them to the container registry where anyone can pull them
  3. Add Claudie manifest files to the release assets, with image tags referencing this release

A release-docs pipeline consists of the following steps:

  1. If there is a new Changelog file:
    1. Checkout to a new feature branch
    2. Add reference to the new Changelog file in mkdocs.yml
    3. Create a PR to merge changes from new feature branch to master (PR needs to be created to update changes in master branch and align with branch protection)
  2. Deploy new version of docs on docs.claudie.io
"},{"location":"creating-claudie-backup/creating-claudie-backup/","title":"Creating Claudie Backup","text":"

In this section we'll explain where the state of Claudie lives, how to back up the necessary components, and how to restore them on a completely new cluster.

"},{"location":"creating-claudie-backup/creating-claudie-backup/#claudie-state","title":"Claudie state","text":"

Claudie stores its state in 3 different places.

These are the only services that have a PVC attached to them; the others are stateless.

"},{"location":"creating-claudie-backup/creating-claudie-backup/#backing-up-claudie","title":"Backing up Claudie","text":""},{"location":"creating-claudie-backup/creating-claudie-backup/#using-velero","title":"Using Velero","text":"

This is the primary backup and restore method.

Velero does not support HostPath volumes. If the PVCs in your management cluster are attached to such volumes (e.g. when running on Kind or MiniKube), the backup will not work. In this case, use the manual backup method described below.

All resources that are deployed or created by Claudie can be identified with the following label:

    app.kubernetes.io/part-of: claudie\n

If you want your deployed InputManifests to be included in the backup, you'll have to add the same label to them.
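
For example (the manifest name and namespace are placeholders):

# Label an applied InputManifest so that Velero includes it in the backup\nkubectl label inputmanifests.claudie.io <your-inputmanifest-name> app.kubernetes.io/part-of=claudie -n <namespace>\n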

We'll walk through the following scenario step-by-step to back up claudie and then restore it.

Claudie is already deployed on an existing Management Cluster and at least 1 Input Manifest has been applied. The state is backed up and the Management Cluster is replaced by a new one on which we restore the state.

To back up the resources we'll be using Velero version v1.11.0.

The following steps will all be executed with the existing Management Cluster in context.

  1. To create a backup, Velero needs to store the state to external storage. The list of supported providers for the external storage can be found in the link. In this guide we'll be using AWS S3 object storage for our backup.

  2. Prepare the S3 bucket by following the first two steps in this setup guide, excluding the installation step, as this will be different for our use-case.

If you do not have the aws CLI locally installed, follow the user guide to set it up.

  1. Execute the following command to install Velero on the Management Cluster.
    velero install \\\n--provider aws \\\n--plugins velero/velero-plugin-for-aws:v1.6.0 \\\n--bucket $BUCKET \\\n--secret-file ./credentials-velero \\\n--backup-location-config region=$REGION \\\n--snapshot-location-config region=$REGION \\\n--use-node-agent \\\n--default-volumes-to-fs-backup\n

Following the instructions in step 2, you should have a credentials-velero file with the access and secret keys for the aws setup. The env variables $BUCKET and $REGION should be set to the name and region for the bucket created in AWS S3.

By default, Velero will use your default config $HOME/.kube/config; if this is not the config that points to your Management Cluster, you can override it with the --kubeconfig argument.

  1. Back up Claudie by executing
    velero backup create claudie-backup --selector app.kubernetes.io/part-of=claudie\n

To track the progress of the backup execute

velero backup describe claudie-backup --details\n

From this point on, the new Management Cluster for Claudie is in context. We expect that your default kubeconfig points to the new Management Cluster; if it does not, you can override it in the following commands using --kubeconfig ./path-to-config.

  1. Repeat the step to install Velero, but now on the new Management Cluster.
  2. Install cert manager to the new Management Cluster by executing:
    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml\n
  3. To restore the state that was stored in the S3 bucket execute
    velero restore create --from-backup claudie-backup\n

Once all resources are restored, you should be able to deploy new input manifests and also modify existing infrastructure without any problems.

"},{"location":"creating-claudie-backup/creating-claudie-backup/#manual-backup","title":"Manual backup","text":"

Claudie is already deployed on an existing Management Cluster and at least 1 Input Manifest has been applied.

Create a directory where the backup of the state will be stored.

mkdir claudie-backup\n

Put your Claudie inputmanifests into the created folder, e.g. kubectl get InputManifest -A -oyaml > ./claudie-backup/all.yaml

We will now back up the state of the respective input manifests from MongoDB and MinIO.

kubectl get pods -n claudie\n\nNAME                                READY   STATUS      RESTARTS      AGE\nansibler-6f4557cf74-b4dts           1/1     Running     0             18m\nbuilder-5d68987c86-qdfd5            1/1     Running     0             18m\nclaudie-operator-6d9ddc7f8b-hv84c   1/1     Running     0             18m\nmanager-5d75bfffc6-d9qfm            1/1     Running     0             18m\ncreate-table-job-ghb9f              0/1     Completed   1             18m\ndynamodb-6d65df988-c626j            1/1     Running     0             18m\nkube-eleven-556cfdfd98-jq6hl        1/1     Running     0             18m\nkuber-7f8cd4cd89-6ds2w              1/1     Running     0             18m\nmake-bucket-job-9mjft               0/1     Completed   0             18m\nminio-0                             1/1     Running     0             18m\nminio-1                             1/1     Running     0             18m\nminio-2                             1/1     Running     0             18m\nminio-3                             1/1     Running     0             18m\nmongodb-6ccb5f5dff-ptdw2            1/1     Running     0             18m\nterraformer-66c6f67d98-pwr9t        1/1     Running     0             18m\n

To back up the state from MongoDB, execute the following command

kubectl exec -n claudie mongodb-<your-mongdb-pod> -- sh -c 'mongoexport --uri=mongodb://$MONGO_INITDB_ROOT_USERNAME:$MONGO_INITDB_ROOT_PASSWORD@localhost:27017/claudie -c inputManifests --authenticationDatabase admin' > claudie-backup/inputManifests\n

Next we need to back up the state from MinIO. Port-forward the MinIO service so that it is accessible from localhost.

kubectl port-forward -n claudie svc/minio 9000:9000\n

Set up an alias for the mc command-line tool.

mc alias set claudie-minio http://127.0.0.1:9000 <ACCESSKEY> <SECRETKEY>\n

Provide the access and secret key for MinIO. The defaults can be found in the GitHub repository in the manifests/claudie/minio/secrets folder. If you have not changed them, we strongly encourage you to do so!

Download the state into the backup folder

mc mirror claudie-minio/claudie-tf-state-files ./claudie-backup\n

You now have everything you need to restore your input manifests to a new management cluster.

These files will contain your credentials; DO NOT store them anywhere public!

To restore the state on your new management cluster, you can follow these commands. We expect that your default kubeconfig points to the new Management Cluster; if it does not, you can override it in the following commands using --kubeconfig ./path-to-config.

Copy the collection into the MongoDB pod.

kubectl cp ./claudie-backup/inputManifests mongodb-<your-mongodb-pod>:/tmp/inputManifests -n claudie\n

Import the state to MongoDB.

kubectl exec -n claudie mongodb-<your-mongodb-pod> -- sh -c 'mongoimport --uri=mongodb://$MONGO_INITDB_ROOT_USERNAME:$MONGO_INITDB_ROOT_PASSWORD@localhost:27017/claudie -c inputManifests --authenticationDatabase admin --file /tmp/inputManifests'\n

Don't forget to delete the /tmp/inputManifests file

Port-forward the MinIO service and import the backed up state.

mc cp --recursive ./claudie-backup/<your-folder-name-downloaded-from-minio> claudie-minio/claudie-tf-state-files\n

You can now apply your Claudie inputmanifests which will be immediately in the DONE stage. You can verify this with

kubectl get inputmanifests -A\n

Now you can make any new changes to your inputmanifests on the new management cluster and the state will be re-used.

The secrets for the clusters, namely kubeconfig and cluster-metadata, are re-created after the workflow with the changes has finished.

Alternatively, you may also use GUI clients for MongoDB and MinIO for a more straightforward backup of the state. All you need to back up is the claudie-tf-state-files bucket in MinIO and the inputManifests collection from MongoDB.

Once all data is restored, you should be able to deploy new input manifests and also modify existing infrastructure without any problems.

"},{"location":"docs-guides/deployment-workflow/","title":"Documentation deployment","text":"

Our documentation is hosted on GitHub Pages. Whenever a new push to the gh-pages branch happens, a new version of the docs is deployed. All commits and pushes to this branch are automated through our release-docs.yml pipeline using the mike tool.

That's also the reason why we do not recommend making any manual changes in the gh-pages branch. However, in case you have to, use the commands below.

"},{"location":"docs-guides/deployment-workflow/#generate-a-new-version-of-the-docs","title":"Generate a new version of the docs","text":"
mike deploy <version>\n
mike deploy <version> --push\n
mike set-default <version>\n
"},{"location":"docs-guides/deployment-workflow/#deploy-docs-manually-from-some-older-github-tags","title":"Deploy docs manually from some older GitHub tags","text":"
git checkout tags/<tag>\n

To find out how, follow the mkdocs documentation

python3 -m venv ./venv\n
source ./venv/bin/activate\n
pip install -r requirements.txt\n
mike deploy <version> --push\n
"},{"location":"docs-guides/deployment-workflow/#deploy-docs-for-a-new-release-manually","title":"Deploy docs for a new release manually","text":"

In case the release-docs.yml pipeline fails, you can deploy the new version manually by following these steps:

git checkout tags/<release tag>\n
python3 -m venv ./venv\n
source ./venv/bin/activate\n
pip install -r requirements.txt\n
mike deploy <release tag> latest --push -u\n

Don't forget to use the latest tag in the last command, because otherwise the new version will not be loaded as the default one when visiting docs.claudie.io.

Find more about how to work with mike.

"},{"location":"docs-guides/deployment-workflow/#automatic-update-of-the-latest-documentation-version","title":"Automatic update of the latest documentation version","text":"

The automatic-docs-update.yml pipeline will update the docs automatically in case you add the refresh-docs label or comment /refresh-docs on your PR. In order to trigger this pipeline again, you have to re-add the refresh-docs label or once again comment /refresh-docs in your PR.

[!NOTE] /refresh-docs comment triggers automatic update only when the automatic-docs-update.yml file is in the default branch.

"},{"location":"docs-guides/development/","title":"Development of the Claudie official docs","text":"

First of all, it is worth mentioning that we are using MkDocs to generate HTML documents from Markdown ones. To make our documentation prettier, we use the Material theme for MkDocs. For versioning of our docs we are using mike.

"},{"location":"docs-guides/development/#how-to-run","title":"How to run","text":"

First, install the dependencies from requirements.txt on your local machine. However, before doing that we recommend creating a virtual environment by running the command below.

python3 -m venv ./venv\n

After that, activate the newly created virtual environment by running:

source ./venv/bin/activate\n

Now, we can install the docs dependencies, which we mentioned before.

pip install -r requirements.txt\n

After successful installation, you can run the command below, which generates HTML files for the docs and hosts them on your local server.

mkdocs serve\n
"},{"location":"docs-guides/development/#how-to-test-changes","title":"How to test changes","text":"

Whenever you make changes in the docs folder or in the mkdocs.yml file, you can check whether the changes were applied as you expected by running the command below, which starts the server with the newly generated docs.

mkdocs serve\n

Using this command you will not see the docs versioning, because we are using mike tool for this.

In case you want to test the docs versioning, you will have to run:

mike serve\n

Keep in mind that mike takes the docs versions from the gh-pages branch. That means you will not be able to see your changes unless you have run the command below beforehand.

mike deploy <version>\n

Be careful, because this command creates a new version of the docs in your local gh-pages branch.

"},{"location":"faq/FAQ/","title":"Frequently Asked Question","text":"

We have prepared some of our most frequently asked questions to help you out!

"},{"location":"faq/FAQ/#does-claudie-make-sense-as-a-pure-k8s-orchestration-on-a-single-cloud-provider-iaas","title":"Does Claudie make sense as a pure K8s orchestration on a single cloud-provider IaaS?","text":"

Since Claudie specializes in multicloud, you will likely face some drawbacks, such as the need for a public IPv4 address for each node. Otherwise it works well in a single-provider mode. Using Claudie will also give you some advantages, such as scaling to multi-cloud as your needs change, or the autoscaler that Claudie provides.

"},{"location":"faq/FAQ/#which-scenarios-make-sense-for-using-claudie-and-which-dont","title":"Which scenarios make sense for using Claudie and which don't?","text":"

Claudie aims to address the following scenarios, described in more detail on the use-cases page:

Using Claudie doesn't make sense when you rely on specific features of a cloud provider and are necessarily tied to that cloud provider.

"},{"location":"faq/FAQ/#is-there-any-networking-performance-impact-due-to-the-introduction-of-the-vpn-layer","title":"Is there any networking performance impact due to the introduction of the VPN layer?","text":"

We compared the use of the VPN layer with other solutions and concluded that the impact on performance is negligible. If you are interested in the benchmarks we performed, we summarized the results in our blog post.

"},{"location":"faq/FAQ/#what-is-the-performance-impact-of-a-geographically-distributed-control-plane-in-claudie","title":"What is the performance impact of a geographically distributed control plane in Claudie?","text":"

We have performed several tests, and problems start to appear when the control nodes are roughly 600 km apart geographically. However, this is not an answer that fits all scenarios and should only be taken as a reference point.

If you are interested in the tests we have run and a more detailed answer, you can read more in our blog post.

"},{"location":"faq/FAQ/#does-the-cloud-provider-traffic-egress-bill-represent-a-significant-part-on-the-overall-running-costs","title":"Does the cloud provider traffic egress bill represent a significant part on the overall running costs?","text":"

Costs are individual and depend on the pricing of the selected cloud provider and the type of workload running on the cluster. Networking expenses can exceed 50% of your provider bill; therefore we recommend making your workload geography- and provider-aware (e.g. using taints and affinities), as sketched below.
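
To make a workload provider- and geography-aware, you can combine the default node labels Claudie applies (see the default labels section of the API reference) with standard Kubernetes scheduling constraints. A minimal sketch of a pod spec fragment pinning pods to one provider instance and region (the label values shown are illustrative):

affinity:\n  nodeAffinity:\n    requiredDuringSchedulingIgnoredDuringExecution:\n      nodeSelectorTerms:\n        - matchExpressions:\n            - key: claudie.io/provider-instance\n              operator: In\n              values:\n                - aws-1\n            - key: topology.kubernetes.io/region\n              operator: In\n              values:\n                - eu-central-1\n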

"},{"location":"faq/FAQ/#should-i-be-worried-about-giving-claudie-provider-credentials-including-ssh-keys","title":"Should I be worried about giving Claudie provider credentials, including ssh keys?","text":"

Provider credentials are created as secrets in the Management Cluster for Claudie, which you then reference in the input manifest that is passed to Claudie. Claudie only uses the credentials to create a connection to nodes in the case of static nodepools, or to provision the required infrastructure in the case of dynamic nodepools. The credentials are as secure as your secret management allows.

We are transparent and all of our code is open source; if in doubt, you can always check for yourself.

"},{"location":"faq/FAQ/#does-each-node-need-a-public-ip-address","title":"Does each node need a public IP address?","text":"

For dynamic nodepools (nodes created by Claudie at the specified cloud providers), each node needs a public IP; for static nodepools, no public IP is needed.

"},{"location":"faq/FAQ/#is-a-guicliclusterapi-providerterraform-provider-planned","title":"Is a GUI/CLI/ClusterAPI provider/Terraform provider planned?","text":"

A GUI is not actively considered at this point in time. Other possibilities are openly discussed in this github issue.

"},{"location":"faq/FAQ/#what-is-the-roadmap-for-adding-support-for-new-cloud-iaas-providers","title":"What is the roadmap for adding support for new cloud IaaS providers?","text":"

Adding support for a new cloud provider is an easy task. Let us know your needs.

"},{"location":"feedback/feedback-form/","title":"Feedback form","text":"Your message: Send"},{"location":"getting-started/detailed-guide/","title":"Detailed guide","text":"

This detailed guide for Claudie serves as a resource for providing an overview of Claudie's features, installation instructions, customization options, and its role in provisioning and managing clusters. We'll start by guiding you through the process of setting up a management cluster, where Claudie will be installed, enabling you to effortlessly monitor and control clusters across multiple hyperscalers.

Tip!

Claudie offers extensive customization options for your Kubernetes cluster across multiple hyperscalers. This detailed guide assumes you have AWS and Hetzner accounts. You can customize your deployment across different supported providers. If you wish to use different providers, we recommend following this guide anyway and creating your own input manifest file based on the provided example. Refer to the supported provider table for the input manifest configuration of each provider.

"},{"location":"getting-started/detailed-guide/#supported-providers","title":"Supported providers","text":"Supported Provider Node Pools DNS AWS Azure GCP OCI Hetzner Cloudflare N/A GenesisCloud N/A

For adding support for other cloud providers, open an issue or propose a PR.

"},{"location":"getting-started/detailed-guide/#prerequisites","title":"Prerequisites","text":"
  1. Install Kind by following the Kind documentation.
  2. Install the kubectl tool to communicate with your management cluster by following the Kubernetes documentation.
  3. Install Kustomize by following the Kustomize documentation.
  4. Install Docker by following the Docker documentation.
"},{"location":"getting-started/detailed-guide/#claudie-deployment","title":"Claudie deployment","text":"
  1. Create a Kind cluster where you will deploy Claudie, also referred to as the Management Cluster.

    kind create cluster --name=claudie\n

    Management cluster consideration.

    We recommend using a non-ephemeral management cluster! Deleting the management cluster prevents autoscaling of Claudie node pools and results in a loss of state! We recommend using a managed Kubernetes offering to ensure management cluster resiliency. A Kind cluster is sufficient for this guide.

  2. Check if you have the correct current Kubernetes context. The context should be kind-claudie.

    kubectl config current-context\n
  3. If context is not kind-claudie, switch to it:

    kubectl config use-context kind-claudie\n
  4. One of the prerequisites is cert-manager; deploy it with the following command:

    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml\n
  5. Download the latest Claudie release:

    wget https://github.com/berops/claudie/releases/latest/download/claudie.yaml\n

    Tip!

    For the initial attempt, it's highly recommended to enable debug logs, especially when creating a large cluster with DNS. This helps identify and resolve any permission issues that may occur across different hyperscalers. Locate the ConfigMap with the GOLANG_LOG variable in the claudie.yaml file and change GOLANG_LOG: info to GOLANG_LOG: debug to enable debug logging. For more customization, refer to this table.
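
    One way to make this change non-interactively is a simple in-place substitution in the downloaded manifest (a sketch; on macOS the sed -i syntax differs, and the exact formatting of the value in your claudie.yaml may vary):

    sed -i 's/GOLANG_LOG: info/GOLANG_LOG: debug/' claudie.yaml\n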

  6. Deploy Claudie using Kustomize plugin:

    kubectl apply -f claudie.yaml\n

    Claudie Hardening

    By default, network policies are not included in claudie.yaml; instead they're provided as standalone manifests to be deployed separately, as the Management Cluster to which Claudie is deployed may use a different CNI plugin. You can deploy our predefined network policies to further harden Claudie:

    # for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
    # other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n

    1. Claudie will be deployed into the claudie namespace; you can check whether all pods are running:

    kubectl get pods -n claudie \n
    NAME                           READY   STATUS      RESTARTS        AGE\nansibler-5c6c776b75-82c2q      1/1     Running     0               8m10s\nbuilder-59f9d44596-n2qzm       1/1     Running     0               8m10s\nmanager-5d76c89b4d-tb6h4       1/1     Running     1 (6m37s ago)   8m10s\ncreate-table-job-jvs9n         0/1     Completed   1               8m10s\ndynamodb-68777f9787-8wjhs      1/1     Running     0               8m10s\nclaudie-operator-5755b7bc69-5l84h      1/1     Running     0               8m10s\nkube-eleven-64468cd5bd-qp4d4   1/1     Running     0               8m10s\nkuber-698c4564c-dhsvg          1/1     Running     0               8m10s\nmake-bucket-job-fb5sp          0/1     Completed   0               8m10s\nminio-0                        1/1     Running     0               8m10s\nminio-1                        1/1     Running     0               8m10s\nminio-2                        1/1     Running     0               8m10s\nminio-3                        1/1     Running     0               8m10s\nmongodb-67bf769957-9ct5z       1/1     Running     0               8m10s\nterraformer-fd664b7ff-dd2h7    1/1     Running     0               8m9s\n

    Changing the namespace

    By default, Claudie will monitor all namespaces and watch for InputManifest and provider Secret resources in the cluster. If you would like to limit the namespaces to watch, overwrite the CLAUDIE_NAMESPACES environment variable in the claudie-operator deployment. Example:

    env:\n  - name: CLAUDIE_NAMESPACES\n    value: \"claudie,different-namespace\"\n

    Troubleshoot!

    If you experience problems refer to our troubleshooting guide.

  7. Let's create an AWS high-availability cluster, which we'll later expand with Hetzner bursting capacity. Let's start by creating provider secrets for the infrastructure; next we will reference them in inputmanifest-bursting.yaml.

    # AWS provider requires the secrets to have fields: accesskey and secretkey\nkubectl create secret generic aws-secret-1 --namespace=mynamespace --from-literal=accesskey='SLDUTKSHFDMSJKDIALASSD' --from-literal=secretkey='iuhbOIJN+oin/olikDSadsnoiSVSDsacoinOUSHD'\nkubectl create secret generic aws-secret-dns --namespace=mynamespace --from-literal=accesskey='ODURNGUISNFAIPUNUGFINB' --from-literal=secretkey='asduvnva+skd/ounUIBPIUjnpiuBNuNipubnPuip'    \n
    # inputmanifest-bursting.yaml\n\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: cloud-bursting\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: aws-1\n      providerType: aws\n      secretRef:\n        name: aws-secret-1\n        namespace: mynamespace\n    - name: aws-dns\n      providerType: aws\n      secretRef:\n        name: aws-secret-dns\n        namespace: mynamespace    \n  nodePools:\n    dynamic:\n      - name: aws-control\n        providerSpec:\n            name: aws-1\n            region: eu-central-1\n            zone: eu-central-1a\n        count: 3\n        serverType: t3.medium\n        image: ami-0965bd5ba4d59211c\n      - name: aws-worker\n        providerSpec:\n            name: aws-1\n            region: eu-north-1\n            zone: eu-north-1a\n        count: 3\n        serverType: t3.medium\n        image: ami-03df6dea56f8aa618\n        storageDiskSize: 200\n      - name: aws-lb\n        providerSpec:\n            name: aws-1\n            region: eu-central-2\n            zone: eu-central-2a\n        count: 2\n        serverType: t3.small\n        image: ami-0e4d1886bf4bb88d5\n  kubernetes:\n    clusters:\n      - name: my-super-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n            control:\n            - aws-control\n            compute:\n            - aws-worker\n  loadBalancers:\n    roles:\n      - name: apiserver\n        protocol: tcp\n        port: 6443\n        targetPort: 6443\n        targetPools:\n            - aws-control\n    clusters:\n      - name: loadbalance-me\n        roles:\n            - apiserver\n        dns:\n            dnsZone: domain.com # hosted zone domain name where claudie creates dns records for this cluster\n            provider: aws-dns\n            hostname: supercluster # the sub domain of the new cluster\n        targetedK8s: my-super-cluster\n        pools:\n            - aws-lb\n

    Tip!

    In this example, two AWS providers are used \u2014 one with access to compute resources and the other with access to DNS. However, it is possible to use a single AWS provider with permissions for both services.

  8. Apply the InputManifest crd with your cluster configuration file:

    kubectl apply -f ./inputmanifest-bursting.yaml\n

    Tip!

    InputManifests serve as a single source of truth for both Claudie and the user, which makes creating infrastructure via input manifests a form of infrastructure as code that can be easily integrated into a GitOps workflow.

    Errors in input manifest

    Validation webhook will reject the InputManifest at this stage if it finds errors within the manifest. Refer to our API guide for details.

  9. View the logs of the claudie-operator service to see the InputManifest reconcile process:

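    For example, you can follow the operator logs with the command below (assuming the default claudie namespace and the claudie-operator deployment name shown in the pod listing above):

    kubectl logs -n claudie deployment/claudie-operator -f\n
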
    View the InputManifest state with kubectl

    kubectl get inputmanifests.claudie.io cloud-bursting -o jsonpath={.status} | jq .\n
    Here\u2019s an example of .status fields in the InputManifest resource type:

      {\n    \"clusters\": {\n      \"my-super-cluster\": {\n        \"message\": \" installing VPN\",\n        \"phase\": \"ANSIBLER\",\n        \"state\": \"IN_PROGRESS\"\n      }\n    },\n    \"state\": \"IN_PROGRESS\"\n  }\n

    Claudie architecture

    Claudie utilizes multiple services for cluster provisioning, refer to our workflow documentation as to how it works under the hood.

    Provisioning times may vary!

    Please note that cluster creation time may vary due to provisioning capacity and machine provisioning times of selected hyperscalers.

    After finishing, the InputManifest status reflects that the cluster has been provisioned.

    kubectl get inputmanifests.claudie.io cloud-bursting -o jsonpath={.status} | jq .\n  {\n    \"clusters\": {\n      \"my-super-cluster\": {\n        \"phase\": \"NONE\",\n        \"state\": \"DONE\"\n      }\n    },\n    \"state\": \"DONE\"\n  }    \n
  10. Claudie creates a kubeconfig secret in the claudie namespace:

    kubectl get secrets -n claudie -l claudie.io/output=kubeconfig\n
    NAME                                  TYPE     DATA   AGE\nmy-super-cluster-6ktx6rb-kubeconfig   Opaque   1      134m\n

    You can recover kubeconfig for your cluster with the following command:

    kubectl get secrets -n claudie -l claudie.io/output=kubeconfig -o jsonpath='{.items[0].data.kubeconfig}' | base64 -d > my-super-cluster-kubeconfig.yaml\n

    If you want to connect to your dynamic k8s nodes via SSH, you can recover the private SSH key:

    kubectl get secrets -n claudie -l claudie.io/output=metadata -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_nodepools | map_values(.nodepool_private_key)'\n

    To recover the public IPs of your dynamic k8s nodes so you can connect to them via SSH:

    kubectl get secrets -n claudie -l claudie.io/output=metadata -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r .dynamic_nodepools.node_ips\n

    In case you want to connect to your dynamic load balancer nodes via SSH, you can recover the private SSH key:

    kubectl get secrets -n claudie -l claudie.io/output=metadata -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_load_balancer_nodepools | .[]'\n

    To recover the public IP addresses of your dynamic load balancer nodes so you can connect to them via SSH:

    kubectl get secrets -n claudie -l claudie.io/output=metadata -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r '.dynamic_load_balancer_nodepools[] | .node_ips'\n
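
    With a recovered private key saved to a file and a node's public IP, you can SSH into the node. A minimal sketch (the key file name is arbitrary and the root user is an assumption; use the username configured for your nodes):

    chmod 600 ./node-private.key\nssh -i ./node-private.key root@<node-public-ip>\n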

    Each secret created by Claudie has the following labels:

    Key Value claudie.io/project Name of the project. claudie.io/cluster Name of the cluster. claudie.io/cluster-id ID of the cluster. claudie.io/output Output type, either kubeconfig or metadata.
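
    These labels can be used to filter Claudie's output secrets. For example, to list all output secrets belonging to the cluster from this guide:

    kubectl get secrets -n claudie -l claudie.io/cluster=my-super-cluster\n
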
  11. Use your new kubeconfig to see what\u2019s in your new cluster

    kubectl get pods -A --kubeconfig=my-super-cluster-kubeconfig.yaml\n
  12. Let's add a bursting autoscaling node pool in Hetzner Cloud. In order to use other hyperscalers, we'll need to add a new provider with the appropriate credentials. First we will create a provider secret for Hetzner Cloud, then we will open the inputmanifest-bursting.yaml input manifest again and append the new Hetzner node pool configuration.

    # Hetzner provider requires the secrets to have field: credentials\nkubectl create secret generic hetzner-secret-1 --namespace=mynamespace --from-literal=credentials='kslISA878a6etYAfXYcg5iYyrFGNlCxcICo060HVEygjFs21nske76ksjKko21lp'\n

    Claudie autoscaling

    The autoscaler in Claudie is deployed in the Claudie management cluster and provisions additional resources remotely when needed. For more information, check out how Claudie autoscaling works.

    # inputmanifest-bursting.yaml\n\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: cloud-bursting\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: hetzner-1         # add under nodePools.dynamic section\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-1\n        namespace: mynamespace        \n  nodePools:\n    dynamic:\n    ...\n      - name: hetzner-worker  # add under nodePools.dynamic section\n        providerSpec:\n            name: hetzner-1   # use your new hetzner provider hetzner-1 to create these nodes\n            region: hel1\n            zone: hel1-dc2\n        serverType: cpx51\n        image: ubuntu-22.04\n        autoscaler:           # this node pool uses a claudie autoscaler instead of static count of nodes\n            min: 1\n            max: 10\n    kubernetes:\n      clusters:\n      - name: my-super-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n            control:\n            - aws-control\n            compute:\n            - aws-worker\n            - hetzner-worker  # add it to the compute list here\n...\n
  13. Update the InputManifest custom resource to incorporate the desired changes.

    Deleting existing secrets!

    Deleting or replacing existing input manifest secrets triggers cluster deletion! To add new components to your existing clusters, update the existing manifest and apply it using the following command.

    kubectl apply -f ./inputmanifest-bursting.yaml\n
  14. You can also pass through additional ports from load balancers to control plane and/or worker node pools by adding additional roles under roles.

    # inputmanifest-bursting.yaml\n\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: cloud-bursting\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  ...\n  loadBalancers:\n    roles:\n      - name: apiserver\n        protocol: tcp\n        port: 6443\n        targetPort: 6443\n        targetPools: # only loadbalances for port 6443 for the aws-control nodepool\n            - aws-control\n      - name: https\n        protocol: tcp\n        port: 443\n        targetPort: 443\n        targetPools: # only loadbalances for port 443 for the aws-worker nodepool\n            - aws-worker\n            # possible to add other nodepools, hetzner-worker, for example\n    clusters:\n      - name: loadbalance-me\n        roles:\n            - apiserver\n            - https # define it here\n        dns:\n            dnsZone: domain.com\n            provider: aws-dns\n            hostname: supercluster\n        targetedK8s: my-super-cluster\n        pools:\n            - aws-lb\n
    !!! note Load balancing Please refer to our documentation to learn how our load balancing works.

  15. Update the InputManifest again with the new configuration.

    kubectl apply -f ./inputmanifest-bursting.yaml\n

  16. To delete the cluster, simply delete the InputManifest and wait for Claudie to destroy it.

    kubectl delete -f ./inputmanifest-bursting.yaml\n

    Removing clusters

    Deleting Claudie or the management cluster does not remove the Claudie-managed clusters. Delete the InputManifest first to initiate Claudie's deletion process.

  17. After the claudie-operator has finished the deletion workflow, delete the kind cluster:

    kind delete cluster\n

"},{"location":"getting-started/detailed-guide/#general-tips","title":"General tips","text":""},{"location":"getting-started/detailed-guide/#control-plane-considerations","title":"Control plane considerations","text":""},{"location":"getting-started/detailed-guide/#egress-traffic","title":"Egress traffic","text":"

Hyperscalers charge for outbound data and multi-region infrastructure.

Example

Consider a scenario where you have a workload that processes extensive datasets from GCP storage using Claudie-managed AWS GPU instances. To minimize egress network traffic costs, it is recommended to host the datasets in an S3 bucket, limiting egress traffic from GCP and keeping the workload localised.

"},{"location":"getting-started/detailed-guide/#on-your-own-path","title":"On your own path","text":"

Once you've gained a comprehensive understanding of how Claudie operates through this guide, you can deploy it to a reliable management cluster; this could be a cluster that you already have. Tailor your input manifest file to suit your specific requirements and explore a detailed example showcasing providers, load balancing, and DNS records across various hyperscalers by visiting this comprehensive example.

"},{"location":"getting-started/detailed-guide/#claudie-customization","title":"Claudie customization","text":"

All of the customisable settings can be found in the claudie/.env file.

Variable Default Type Description GOLANG_LOG info string Log level for all services. Can be either info or debug. HTTP_PROXY_MODE default string default, on or off. default utilizes HTTP proxy only when there's at least one node in the K8s cluster from the Hetzner cloud provider. on uses HTTP proxy even when the K8s cluster doesn't have any nodes from the Hetzner. off turns off the usage of HTTP proxy. If the value isn't set or differs from on or off it always works with the default. HTTP_PROXY_URL http://proxy.claudie.io:8880 string HTTP proxy URL used in kubeone proxy configuration to build the K8s cluster. DATABASE_HOSTNAME mongodb string Database hostname used for Claudie configs. MANAGER_HOSTNAME manager string Manager service hostname. TERRAFORMER_HOSTNAME terraformer string Terraformer service hostname. ANSIBLER_HOSTNAME ansibler string Ansibler service hostname. KUBE_ELEVEN_HOSTNAME kube-eleven string Kube-eleven service hostname. KUBER_HOSTNAME kuber string Kuber service hostname. MINIO_HOSTNAME minio string MinIO hostname used for state files. DYNAMO_HOSTNAME dynamo string DynamoDB hostname used for lock files. DYNAMO_TABLE_NAME claudie string Table name for DynamoDB lock files. AWS_REGION local string Region for DynamoDB lock files. DATABASE_PORT 27017 int Port of the database service. TERRAFORMER_PORT 50052 int Port of the Terraformer service. ANSIBLER_PORT 50053 int Port of the Ansibler service. KUBE_ELEVEN_PORT 50054 int Port of the Kube-eleven service. MANAGER_PORT 50055 int Port of the MANAGER service. KUBER_PORT 50057 int Port of the Kuber service. MINIO_PORT 9000 int Port of the MinIO service. DYNAMO_PORT 8000 int Port of the DynamoDB service."},{"location":"getting-started/get-started-using-claudie/","title":"Getting started","text":""},{"location":"getting-started/get-started-using-claudie/#get-started-using-claudie","title":"Get started using Claudie","text":""},{"location":"getting-started/get-started-using-claudie/#prerequisites","title":"Prerequisites","text":"

Before you begin, please make sure you have the following prerequisites installed and set up:

  1. Claudie needs to be installed on an existing Kubernetes cluster, referred to as the Management Cluster, which it uses to manage the clusters it provisions. For testing, you can use ephemeral clusters like Minikube or Kind. However, for production environments, we recommend using a more resilient solution since Claudie maintains the state of the infrastructure it creates.

  2. Claudie requires the installation of cert-manager in your Management Cluster. To install cert-manager, use the following command:

    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml\n

"},{"location":"getting-started/get-started-using-claudie/#supported-providers","title":"Supported providers","text":"Supported Provider Node Pools DNS AWS Azure GCP OCI Hetzner Cloudflare N/A GenesisCloud N/A

For adding support for other cloud providers, open an issue or propose a PR.

"},{"location":"getting-started/get-started-using-claudie/#install-claudie","title":"Install Claudie","text":"
  1. Deploy Claudie to the Management Cluster:
    kubectl apply -f https://github.com/berops/claudie/releases/latest/download/claudie.yaml\n

To further harden claudie, you may want to deploy our pre-defined network policies:

# for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
# other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n

"},{"location":"getting-started/get-started-using-claudie/#deploy-your-cluster","title":"Deploy your cluster","text":"
  1. Create Kubernetes Secret resource for your provider configuration.

    kubectl create secret generic example-aws-secret-1 \\\n  --namespace=mynamespace \\\n  --from-literal=accesskey='myAwsAccessKey' \\\n  --from-literal=secretkey='myAwsSecretKey'\n

    Check the supported providers for input manifest examples. For an input manifest spanning all supported hyperscalers, check out this example.

  2. Deploy the InputManifest resource, which Claudie uses to create infrastructure, and include the created secret in .spec.providers as follows:

    kubectl apply -f - <<EOF\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: examplemanifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n      - name: aws-1\n      providerType: aws\n      secretRef:\n          name: example-aws-secret-1 # reference the secret name\n          namespace: mynamespace     # reference the secret namespace\n  nodePools:\n      dynamic:\n      - name: control-aws\n          providerSpec:\n            name: aws-1\n            region: eu-central-1\n            zone: eu-central-1a\n          count: 1\n          serverType: t3.medium\n          image: ami-0965bd5ba4d59211c\n      - name: compute-1-aws\n          providerSpec:\n            name: aws-1\n            region: eu-west-3\n            zone: eu-west-3a\n          count: 2\n          serverType: t3.medium\n          image: ami-029c608efaef0b395\n          storageDiskSize: 50\n  kubernetes:\n      clusters:\n      - name: aws-cluster\n          version: 1.27.0\n          network: 192.168.2.0/24\n          pools:\n            control:\n                - control-aws\n            compute:\n                - compute-1-aws        \nEOF\n

    Deleting an existing InputManifest resource deletes the provisioned infrastructure!

"},{"location":"getting-started/get-started-using-claudie/#connect-to-your-cluster","title":"Connect to your cluster","text":"

Claudie outputs a base64-encoded kubeconfig secret <cluster-name>-<cluster-hash>-kubeconfig in the namespace where it is deployed:

  1. Recover kubeconfig of your cluster by running:
    kubectl get secrets -n claudie -l claudie.io/output=kubeconfig -o jsonpath='{.items[0].data.kubeconfig}' | base64 -d > your_kubeconfig.yaml\n
  2. Use your new kubeconfig:
    kubectl get pods -A --kubeconfig=your_kubeconfig.yaml\n
"},{"location":"getting-started/get-started-using-claudie/#cleanup","title":"Cleanup","text":"
  1. To remove your cluster and its associated infrastructure, delete the cluster definition block from the InputManifest:
    kubectl apply -f - <<EOF\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: examplemanifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n      - name: aws-1\n      providerType: aws\n      secretRef:\n          name: example-aws-secret-1 # reference the secret name\n          namespace: mynamespace     # reference the secret namespace\n  nodePools:\n      dynamic:\n      - name: control-aws\n          providerSpec:\n            name: aws-1\n            region: eu-central-1\n            zone: eu-central-1a\n          count: 1\n          serverType: t3.medium\n          image: ami-0965bd5ba4d59211c\n      - name: compute-1-aws\n          providerSpec:\n            name: aws-1\n            region: eu-west-3\n            zone: eu-west-3a\n          count: 2\n          serverType: t3.medium\n          image: ami-029c608efaef0b395\n          storageDiskSize: 50\n  kubernetes:\n    clusters:\n#      - name: aws-cluster\n#          version: 1.27.0\n#          network: 192.168.2.0/24\n#          pools:\n#            control:\n#                - control-aws\n#            compute:\n#                - compute-1-aws         \nEOF\n
  2. To delete all clusters defined in the input manifest, delete the InputManifest. This triggers the deletion process, removing the infrastructure and all data associated with the manifest.

    kubectl delete inputmanifest examplemanifest\n
"},{"location":"hardening/hardening/","title":"Claudie Hardening","text":"

In this section we'll describe how to further harden the security of the default Claudie deployment.

"},{"location":"hardening/hardening/#passwords","title":"Passwords","text":"

When deploying the default manifests, Claudie uses simple passwords for MongoDB, DynamoDB and MinIO.

You can find the passwords at these paths:

manifests/claudie/mongo/secrets\nmanifests/claudie/minio/secrets\nmanifests/claudie/dynamo/secrets\n

It is highly recommended that you change these passwords to more secure ones.
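
For example, you can generate a strong random value to use as a replacement password (a sketch; which file under the secrets directories it belongs in depends on the component you are changing):

openssl rand -base64 32\n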

"},{"location":"hardening/hardening/#network-policies","title":"Network Policies","text":"

The default deployment of Claudie comes without any network policies, as depending on the CNI used on the Management Cluster, network policies may not be fully supported.

We have a set of network policies pre-defined that can be found in:

manifests/network-policies\n

Currently, we have a Cilium-specific network policy that uses CiliumNetworkPolicy and another that uses NetworkPolicy, which should be supported by most network plugins.

To install the network policies, you can simply execute one of the following commands:

# for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
# other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n
"},{"location":"http-proxy/http-proxy/","title":"Usage of HTTP proxy","text":"

In this section, we'll describe the default HTTP proxy setup and its further customization.

"},{"location":"http-proxy/http-proxy/#default-setup","title":"Default setup","text":"

By default, HTTP_PROXY_MODE is set to default (see), thus Claudie utilizes the HTTP proxy when building the K8s cluster only when there is at least one node from the Hetzner cloud provider. This means that if you have a cluster with one master node in Azure and one worker node in AWS, Claudie won't use the HTTP proxy to build the K8s cluster. However, if you add another worker node from Hetzner, the whole process of building the K8s cluster will utilize the HTTP proxy.

This approach was implemented to address the following issues:

"},{"location":"http-proxy/http-proxy/#further-customization","title":"Further customization","text":"

In case you don't want to utilize the HTTP proxy at all (even when there are nodes in the K8s cluster from the Hetzner cloud provider), you can turn off the HTTP proxy by setting HTTP_PROXY_MODE to off (see). On the other hand, if you wish to use the HTTP proxy whenever building a K8s cluster (even when there aren't any nodes in the K8s cluster from the Hetzner cloud provider), you can set HTTP_PROXY_MODE to on (see).

If you want to utilize your own HTTP proxy, you can set its URL in HTTP_PROXY_URL (see). By default, this value is set to http://proxy.claudie.io:8880. In case your HTTP proxy runs on myproxy.com and is exposed on port 3128, HTTP_PROXY_URL has to be set to http://myproxy.com:3128. This means you always have to specify the whole URL with the protocol (HTTP), domain name, and port.
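
Both settings live in the claudie/.env file described in the Claudie customization table. A sketch of the relevant entries when forcing the proxy on and pointing it at your own proxy (assuming the usual KEY=value .env format; the URL is illustrative):

HTTP_PROXY_MODE=on\nHTTP_PROXY_URL=http://myproxy.com:3128\n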

"},{"location":"input-manifest/api-reference/","title":"InputManifest API reference","text":"

InputManifest is a definition of the user's infrastructure. It contains cloud provider specification, nodepool specification, Kubernetes and loadbalancer clusters.

"},{"location":"input-manifest/api-reference/#status","title":"Status","text":"

Most recently observed status of the InputManifest

"},{"location":"input-manifest/api-reference/#spec","title":"Spec","text":"

Specification of the desired behavior of the InputManifest

Providers is a list of defined cloud provider configurations that will be used in infrastructure provisioning.

Describes nodepools used for either kubernetes clusters or loadbalancer cluster defined in this manifest.

List of Kubernetes cluster this manifest will manage.

List of loadbalancer clusters the Kubernetes clusters may use.

"},{"location":"input-manifest/api-reference/#providers","title":"Providers","text":"

Contains configurations for supported cloud providers. At least one provider needs to be defined.

The name of the provider specification. The name is limited to 15 characters. It has to be unique across all providers.

Type of a provider. The providerType defines mandatory fields that have to be included for a specific provider. A list of available providers can be found in the providers section. Allowed values are:

Value Description aws AWS provider type azure Azure provider type cloudflare Cloudflare provider type gcp GCP provider type hetzner Hetzner provider type hetznerdns Hetzner DNS provider type oci OCI provider type genesiscloud GenesisCloud provider type

Represents a Secret Reference. It has enough information to retrieve secret in any namespace.

Support for more cloud providers is in the roadmap.

For static nodepools a provider is not needed, refer to the static section for more detailed information.

"},{"location":"input-manifest/api-reference/#secretref","title":"SecretRef","text":"

SecretReference represents a Kubernetes Secret Reference. It has enough information to retrieve secret in any namespace.

Name of the secret, which holds data for the particular cloud provider instance.

Namespace of the secret which holds data for the particular cloud provider instance.

"},{"location":"input-manifest/api-reference/#cloudflare","title":"Cloudflare","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the Cloudflare provider. To find out how to configure Cloudflare follow the instructions here

Credentials for the provider (API token).

"},{"location":"input-manifest/api-reference/#hetznerdns","title":"HetznerDNS","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the HetznerDNS provider. To find out how to configure HetznerDNS follow the instructions here

Credentials for the provider (API token).

"},{"location":"input-manifest/api-reference/#gcp","title":"GCP","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the GCP provider. To find out how to configure GCP provider and service account, follow the instructions here.

Credentials for the provider. Stringified JSON service account key.

Project id of an already existing GCP project where the infrastructure is to be created.

"},{"location":"input-manifest/api-reference/#genesiscloud","title":"GenesisCloud","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the Genesis Cloud provider. To find out how to configure Genesis Cloud provider, follow the instructions here.

API token for the provider.

"},{"location":"input-manifest/api-reference/#hetzner","title":"Hetzner","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the Hetzner provider. To find out how to configure Hetzner provider and service account, follow the instructions here.

Credentials for the provider (API token).

"},{"location":"input-manifest/api-reference/#oci","title":"OCI","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the OCI provider. To find out how to configure OCI provider and service account, follow the instructions here.

Private key used to authenticate to the OCI.

Fingerprint of the user-supplied private key.

OCID of the tenancy where privateKey is added as an API key

OCID of the user in the supplied tenancy

OCID of the compartment where VMs/VCNs/... will be created

"},{"location":"input-manifest/api-reference/#aws","title":"AWS","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the AWS provider. To find out how to configure AWS provider and service account, follow the instructions here.

Access key ID for your AWS account.

Secret key for the Access key specified above.

"},{"location":"input-manifest/api-reference/#azure","title":"Azure","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the Azure provider. To find out how to configure Azure provider and service account, follow the instructions here.

Subscription ID of your subscription in Azure.

Tenant ID of your tenancy in Azure.

Client ID of your client. Claudie is designed to use a service principal with appropriate permissions.

Client secret generated for your client.

"},{"location":"input-manifest/api-reference/#nodepools","title":"Nodepools","text":"

Collection of static and dynamic nodepool specification, to be referenced in the kubernetes or loadBalancer clusters.

List of dynamically to-be-created nodepools of not yet existing machines, used for Kubernetes or loadbalancer clusters.

These are only blueprints, and will only be created per reference in kubernetes or loadBalancer clusters. E.g. if the nodepool isn't used, it won't even be created. Or if the same nodepool is used in two different clusters, it will be created twice. In OOP analogy, a dynamic nodepool would be a class that would get instantiated N >= 0 times depending on which clusters reference it.

List of static nodepools of already existing machines, not provisioned by Claudie, used for Kubernetes (see requirements) or loadbalancer clusters. These can be baremetal servers or VMs with IPs assigned. Claudie is able to join them into existing clusters, or provision clusters solely on the static nodepools. Typically we'll find these being used in on-premises scenarios, or hybrid-cloud clusters.

"},{"location":"input-manifest/api-reference/#dynamic","title":"Dynamic","text":"

Dynamic nodepools are defined for cloud provider machines that Claudie is expected to provision.

Name of the nodepool. The name is limited to 14 characters. Each nodepool will have a random hash appended to the name, so the whole name will be of format <name>-<hash>.

Collection of provider data to be used while creating the nodepool.

Number of the nodes in the nodepool. Maximum value of 255. Mutually exclusive with autoscaler.

Type of the machines in the nodepool.

Currently, only AMD64 machines are supported.

Further describes the selected server type, if available by the cloud provider.

OS image of the machine.

Currently, only Ubuntu 22.04 AMD64 images are supported.

The size of the storage disk on the nodes in the node pool is specified in GB. The OS disk is created automatically with a predefined size of 100GB for Kubernetes nodes and 50GB for LoadBalancer nodes.

This field is optional; however, if a compute node pool does not define it, the default value will be used for the creation of the storage disk. Control node pools and LoadBalancer node pools ignore this field.

The default value for this field is 50, with a minimum value also set to 50. This value is only applicable to compute nodes. If the disk size is set to 0, no storage disk will be created for any nodes in the particular node pool.

Autoscaler configuration for this nodepool. Mutually exclusive with count.

Map of user defined labels, which will be applied on every node in the node pool. This field is optional.

To see the default labels Claudie applies on each node, refer to this section.

Map of user defined annotations, which will be applied on every node in the node pool. This field is optional.

You can use Kubernetes annotations to attach arbitrary non-identifying metadata. Clients such as tools and libraries can retrieve this metadata.

Array of user defined taints, which will be applied on every node in the node pool. This field is optional.

To see the default taints Claudie applies on each node, refer to this section.

"},{"location":"input-manifest/api-reference/#provider-spec","title":"Provider Spec","text":"

Provider spec is an additional specification built on top of the data from any of the provider instance. Here are provider configuration examples for each individual provider: aws, azure, gcp, cloudflare, hetzner and oci.

Name of the provider instance specified in providers

Region of the nodepool.

Zone of the nodepool.

"},{"location":"input-manifest/api-reference/#autoscaler-configuration","title":"Autoscaler Configuration","text":"

Autoscaler configuration on per nodepool basis. Defines the number of nodes, autoscaler will scale up or down specific nodepool.

Minimum number of nodes in nodepool.

Maximum number of nodes in nodepool.

"},{"location":"input-manifest/api-reference/#static","title":"Static","text":"

Static nodepools are defined for static machines which Claudie will not manage. Used for on premise nodes.

In case you want to use your static nodes in the Kubernetes cluster, make sure they meet the requirements.

Name of the static nodepool. The name is limited to 14 characters.

List of static nodes for a particular static nodepool.

Map of user defined labels, which will be applied on every node in the node pool. This field is optional.

To see the default labels Claudie applies on each node, refer to this section.

Map of user defined annotations, which will be applied on every node in the node pool. This field is optional.

You can use Kubernetes annotations to attach arbitrary non-identifying metadata. Clients such as tools and libraries can retrieve this metadata.

Array of user defined taints, which will be applied on every node in the node pool. This field is optional.

To see the default taints Claudie applies on each node, refer to this section.

"},{"location":"input-manifest/api-reference/#static-node","title":"Static node","text":"

Static node defines single static node from a static nodepool.

Endpoint under which Claudie will access this node.

Name of a user with root privileges that will be used to SSH into this node and install dependencies. This attribute is optional; in case it isn't specified, the root username is used.

Secret from which the private key will be taken and used to SSH into the machine (as root or as the user specified in the username attribute).

The field in the secret must be privatekey, i.e.

apiVersion: v1\ntype: Opaque\nkind: Secret\nmetadata:\n  name: private-key-node-1\n  namespace: claudie-secrets\ndata:\n  privatekey: <base64 encoded private key>\n
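
Such a secret can also be created directly from an existing private key file, for example (the local key path ./id_rsa is a placeholder):

kubectl create secret generic private-key-node-1 --namespace=claudie-secrets --from-file=privatekey=./id_rsa\n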
"},{"location":"input-manifest/api-reference/#kubernetes","title":"Kubernetes","text":"

Defines Kubernetes clusters.

List of Kubernetes clusters Claudie will create.

"},{"location":"input-manifest/api-reference/#cluster-k8s","title":"Cluster-k8s","text":"

Collection of data used to define a Kubernetes cluster.

Name of the Kubernetes cluster. The name is limited to 28 characters. Each cluster will have a random hash appended to the name, so the whole name will be of format <name>-<hash>.

Kubernetes version of the cluster.

Version should be defined in format vX.Y. In terms of supported versions of Kubernetes, Claudie follows kubeone releases and their supported versions. The current kubeone version used in Claudie is 1.8. To see the list of supported versions, please refer to kubeone documentation.

Network range for the VPN of the cluster. The value should be defined in format A.B.C.D/mask.

List of nodepool names this cluster will use. Remember that nodepools defined in nodepools are only \"blueprints\". The actual nodepool will be created once referenced here.

"},{"location":"input-manifest/api-reference/#loadbalancer","title":"LoadBalancer","text":"

Defines loadbalancer clusters.

List of roles loadbalancers use to forward the traffic. Single role can be used in multiple loadbalancer clusters.

List of loadbalancer clusters used in the Kubernetes clusters defined under clusters.

"},{"location":"input-manifest/api-reference/#role","title":"Role","text":"

Role defines a concrete loadbalancer configuration. Single loadbalancer can have multiple roles.

Name of the role. Used as a reference in clusters.

Protocol of the rule. Allowed values are:

Value Description tcp Role will use TCP protocol udp Role will use UDP protocol

Port of the incoming traffic on the loadbalancer.

Port where loadbalancer forwards the traffic.

"},{"location":"input-manifest/api-reference/#cluster-lb","title":"Cluster-lb","text":"

Collection of data used to define a loadbalancer cluster.

Name of the loadbalancer. The name is limited to 28 characters.

List of roles the loadbalancer uses.

Specification of the loadbalancer's DNS record.

Name of the Kubernetes cluster targeted by this loadbalancer.

List of nodepool names this loadbalancer will use. Remember that nodepools defined in nodepools are only \"blueprints\". The actual nodepool will be created once referenced here.

"},{"location":"input-manifest/api-reference/#dns","title":"DNS","text":"

Collection of data Claudie uses to create a DNS record for the loadbalancer.

DNS zone inside which the records will be created. GCP/AWS/OCI/Azure/Cloudflare/Hetzner DNS zone is accepted.

The record created in this zone must be accessible to the public. Therefore, a public DNS zone is required.

Name of the provider to be used for creating an A record entry in the defined DNS zone.

Custom hostname for your A record. If left empty, the hostname will be a random hash.

"},{"location":"input-manifest/api-reference/#default-labels","title":"Default labels","text":"

By default, Claudie applies following labels on every node in the cluster, together with those defined by the user.

Key Value claudie.io/nodepool Name of the node pool. claudie.io/provider Cloud provider name. claudie.io/provider-instance User defined provider name. claudie.io/node-type Type of the node. Either control or compute. topology.kubernetes.io/region Region where the node resides. topology.kubernetes.io/zone Zone of the region where node resides. kubernetes.io/os Os family of the node. kubernetes.io/arch Architecture type of the CPU. v1.kubeone.io/operating-system Os type of the node."},{"location":"input-manifest/api-reference/#default-taints","title":"Default taints","text":"

By default, Claudie applies only node-role.kubernetes.io/control-plane taint for control plane nodes, with effect NoSchedule, together with those defined by the user.
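
If a workload needs to run on control plane nodes despite this default taint, a matching toleration can be added to its pod spec, for example (a sketch):

tolerations:\n  - key: node-role.kubernetes.io/control-plane\n    operator: Exists\n    effect: NoSchedule\n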

"},{"location":"input-manifest/example/","title":"Example yaml file","text":"example.yaml
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: ExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  # Providers field is used for defining the providers. \n  # It is referencing a secret resource in Kubernetes cluster.\n  # Each provider haves its own mandatory fields that are defined in the secret resource.\n  # Every supported provider has an example in this input manifest.\n  # providers:\n  #   - name: \n  #       providerType:   # Type of the provider secret [aws|azure|gcp|oci|hetzner|hetznerdns|cloudflare]. \n  #       templates:      # external templates used to build the infrastructure by that given provider. If omitted default templates will be used.\n  #         repository:   # publicly available git repository where the templates can be acquired\n  #         tag:          # optional tag. If set is used to checkout to a specific hash commit of the git repository.\n  #         path:         # path where the templates for the specific provider can be found.\n  #       secretRef:      # Secret reference specification.\n  #         name:         # Name of the secret resource.\n  #         namespace:    # Namespace of the secret resource.\n  providers:\n    # Hetzner DNS provider.\n    - name: hetznerdns-1\n      providerType: hetznerdns\n      templates:\n        repository: \"https://github.com/berops/claudie-config\"\n        path: \"templates/terraformer/hetznerdns\"\n      secretRef:\n        name: hetznerdns-secret-1\n        namespace: example-namespace\n\n    # Cloudflare DNS provider.\n    - name: cloudflare-1\n      providerType: cloudflare\n      # templates: ... using default templates\n      secretRef:\n        name: cloudflare-secret-1\n        namespace: example-namespace\n\n    # Hetzner Cloud provider.\n    - name: hetzner-1\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-1\n        namespace: example-namespace\n\n    # GCP cloud provider.\n    - name: gcp-1\n      providerType: gcp\n      secretRef:\n        name: gcp-secret-1\n        namespace: example-namespace\n\n    # OCI cloud provider.\n    - name: oci-1\n      providerType: oci\n      secretRef:\n        name: oci-secret-1\n        namespace: example-namespace\n\n    # AWS cloud provider.\n    - name: aws-1\n      providerType: aws\n      secretRef:\n        name: aws-secret-1\n        namespace: example-namespace\n\n    # Azure cloud provider.\n    - name: azure-1\n      providerType: azure\n      secretRef:\n        name: azure-secret-1\n        namespace: example-namespace\n\n\n  # Nodepools field is used for defining the nodepool specification.\n  # You can think of them as a blueprints, not actual nodepools that will be created.\n  nodePools:\n    # Dynamic nodepools are created by Claudie, in one of the cloud providers specified.\n    # Definition specification:\n    # dynamic:\n    #   - name:             # Name of the nodepool, which is used as a reference to it. 
Needs to be unique.\n    #     providerSpec:     # Provider specification for this nodepool.\n    #       name:           # Name of the provider instance, referencing one of the providers define above.\n    #       region:         # Region of the nodepool.\n    #       zone:           # Zone of the nodepool.\n    #     count:            # Static number of nodes in this nodepool.\n    #     serverType:       # Machine type of the nodes in this nodepool.\n    #     image:            # OS image of the nodes in the nodepool.\n    #     storageDiskSize:  # Disk size of the storage disk for compute nodepool. (optional)\n    #     autoscaler:       # Autoscaler configuration. Mutually exclusive with Count.\n    #       min:            # Minimum number of nodes in nodepool.\n    #       max:            # Maximum number of nodes in nodepool.\n    #     labels:           # Map of custom user defined labels for this nodepool. This field is optional and is ignored if used in Loadbalancer cluster. (optional)\n    #     annotations:      # Map of user defined annotations, which will be applied on every node in the node pool. (optional)\n    #     taints:           # Array of custom user defined taints for this nodepool. This field is optional and is ignored if used in Loadbalancer cluster. (optional)\n    #       - key:          # The taint key to be applied to a node.\n    #         value:        # The taint value corresponding to the taint key.\n    #         effect:       # The effect of the taint on pods that do not tolerate the taint.\n    #\n    # Example definitions for each provider\n    dynamic:\n      - name: control-htz\n        providerSpec:\n          name: hetzner-1\n          region: hel1\n          zone: hel1-dc2\n        count: 3\n        serverType: cpx11\n        image: ubuntu-22.04\n        labels:\n          country: finland\n          city: helsinki\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"finland\"]'\n        taints:\n          - key: country\n            value: finland\n            effect: NoSchedule\n\n      - name: compute-htz\n        providerSpec:\n          name: hetzner-1\n          region: hel1\n          zone: hel1-dc2\n        count: 2\n        serverType: cpx11\n        image: ubuntu-22.04\n        storageDiskSize: 50\n        labels:\n          country: finland\n          city: helsinki\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"finland\"]'\n\n      - name: htz-autoscaled\n        providerSpec:\n          name: hetzner-1\n          region: hel1\n          zone: hel1-dc2\n        serverType: cpx11\n        image: ubuntu-22.04\n        storageDiskSize: 50\n        autoscaler:\n          min: 1\n          max: 5\n        labels:\n          country: finland\n          city: helsinki\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"finland\"]'\n\n      - name: control-gcp\n        providerSpec:\n          name: gcp-1\n          region: europe-west1\n          zone: europe-west1-c\n        count: 3\n        serverType: e2-medium\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n        labels:\n          country: germany\n          city: frankfurt\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"germany\"]'\n\n      - name: compute-gcp\n        providerSpec:\n          name: gcp-1\n          region: europe-west1\n          zone: europe-west1-c\n        count: 2\n        serverType: e2-small\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n      
  storageDiskSize: 50\n        labels:\n          country: germany\n          city: frankfurt\n        taints:\n          - key: city\n            value: frankfurt\n            effect: NoExecute\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"germany\"]'\n\n      - name: control-oci\n        providerSpec:\n          name: oci-1\n          region: eu-milan-1\n          zone: hsVQ:EU-MILAN-1-AD-1\n        count: 3\n        serverType: VM.Standard2.1\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n\n      - name: compute-oci\n        providerSpec:\n          name: oci-1\n          region: eu-milan-1\n          zone: hsVQ:EU-MILAN-1-AD-1\n        count: 2\n        serverType: VM.Standard2.1\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n      - name: control-aws\n        providerSpec:\n          name: aws-1\n          region: eu-central-1\n          zone: eu-central-1c\n        count: 2\n        serverType: t3.medium\n        image: ami-0965bd5ba4d59211c\n\n      - name: compute-aws\n        providerSpec:\n          name: aws-1\n          region: eu-central-1\n          zone: eu-central-1c\n        count: 2\n        serverType: t3.medium\n        image: ami-0965bd5ba4d59211c\n        storageDiskSize: 50\n\n      - name: control-azure\n        providerSpec:\n          name: azure-1\n          region: West Europe\n          zone: \"1\"\n        count: 2\n        serverType: Standard_B2s\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n\n      - name: compute-azure\n        providerSpec:\n          name: azure-1\n          region: West Europe\n          zone: \"1\"\n        count: 2\n        serverType: Standard_B2s\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n        storageDiskSize: 50\n\n      - name: loadbalancer-1\n        provider:\n        providerSpec:\n          name: gcp-1\n          region: europe-west1\n          zone: europe-west1-c\n        count: 2\n        serverType: e2-small\n        image: ubuntu-os-cloud/ubuntu-2004-focal-v20220610\n\n      - name: loadbalancer-2\n        providerSpec:\n          name: hetzner-1\n          region: hel1\n          zone: hel1-dc2\n        count: 2\n        serverType: cpx11\n        image: ubuntu-20.04\n\n    # Static nodepools are created by user beforehand.\n    # In case you want to use them in the Kubernetes cluster, make sure they meet the requirements. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin\n    # Definition specification:\n    # static:\n    #   - name:             # Name of the nodepool, which is used as a reference to it. Needs to be unique.\n    #     nodes:            # List of nodes which will be access under this nodepool.\n    #       - endpoint:     # IP under which Claudie will access this node. Can be private as long as Claudie will be able to access it.\n    #         username:     # Username of a user with root privileges (optional). 
If not specified user with name \"root\" will be used\n    #         secretRef:    # Secret reference specification, holding private key which will be used to SSH into the node (as root or as a user specificed in the username attribute).\n    #           name:       # Name of the secret resource.\n    #           namespace:  # Namespace of the secret resource.\n    #     labels:           # Map of custom user defined labels for this nodepool. This field is optional and is ignored if used in Loadbalancer cluster. (optional)\n    #     annotations:      # Map of user defined annotations, which will be applied on every node in the node pool. (optional)\n    #     taints:           # Array of custom user defined taints for this nodepool. This field is optional and is ignored if used in Loadbalancer cluster. (optional)\n    #       - key:          # The taint key to be applied to a node.\n    #         value:        # The taint value corresponding to the taint key.\n    #         effect:       # The effect of the taint on pods that do not tolerate the taint.\n    #\n    # Example definitions\n    static:\n      - name: datacenter-1\n        nodes:\n          - endpoint: \"192.168.10.1\"\n            secretRef:\n              name: datacenter-1-key\n              namespace: example-namespace\n\n          - endpoint: \"192.168.10.2\"\n            secretRef:\n              name: datacenter-1-key\n              namespace: example-namespace\n\n          - endpoint: \"192.168.10.3\"\n            username: admin\n            secretRef:\n              name: datacenter-1-key\n              namespace: example-namespace\n        labels:\n          datacenter: datacenter-1\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"datacenter-1\"]'   \n        taints:\n          - key: datacenter\n            effect: NoExecute\n\n\n  # Kubernetes field is used to define the kubernetes clusters.\n  # Definition specification:\n  #\n  # clusters:\n  #   - name:           # Name of the cluster. The name will be appended to the created node name.\n  #     version:        # Kubernetes version in semver scheme, must be supported by KubeOne.\n  #     network:        # Private network IP range.\n  #     pools:          # Nodepool names which cluster will be composed of. 
User can reuse same nodepool specification on multiple clusters.\n  #       control:      # List of nodepool names, which will be used as control nodes.\n  #       compute:      # List of nodepool names, which will be used as compute nodes.\n  #\n  # Example definitions:\n  kubernetes:\n    clusters:\n      - name: dev-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz\n            - control-gcp\n          compute:\n            - compute-htz\n            - compute-gcp\n            - compute-azure\n            - htz-autoscaled\n\n      - name: prod-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz\n            - control-gcp\n            - control-oci\n            - control-aws\n            - control-azure\n          compute:\n            - compute-htz\n            - compute-gcp\n            - compute-oci\n            - compute-aws\n            - compute-azure\n\n      - name: hybrid-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - datacenter-1\n          compute:\n            - compute-htz\n            - compute-gcp\n            - compute-azure\n\n  # Loadbalancers field defines loadbalancers used for the kubernetes clusters and roles for the loadbalancers.\n  # Definition specification for role:\n  #\n  # roles:\n  #   - name:         # Name of the role, used as a reference later. Must be unique.\n  #     protocol:     # Protocol, this role will use.\n  #     port:         # Port, where traffic will be coming.\n  #     targetPort:   # Port, where loadbalancer will forward traffic to.\n  #     targetPools:  # Targeted nodes on kubernetes cluster. Specify a nodepool that is used in the targeted K8s cluster.\n  #\n  # Definition specification for loadbalancer:\n  #\n  # clusters:\n  #   - name:         # Loadbalancer cluster name\n  #     roles:        # List of role names this loadbalancer will fulfil.\n  #     dns:          # DNS specification, where DNS records will be created.\n  #       dnsZone:    # DNS zone name in your provider.\n  #       provider:   # Provider name for the DNS.\n  #       hostname:   # Hostname for the DNS record. Keep in mind the zone will be included automatically. If left empty the Claudie will create random hash as a hostname.\n  #     targetedK8s:  # Name of the targeted kubernetes cluster\n  #     pools:        # List of nodepool names used for loadbalancer\n  #\n  # Example definitions:\n  loadBalancers:\n    roles:\n      - name: apiserver\n        protocol: tcp\n        port: 6443\n        targetPort: 6443\n        targetPools:\n            - control-htz # make sure that this nodepools is acutally used by the targeted `dev-cluster` cluster.\n    clusters:\n      - name: apiserver-lb-dev\n        roles:\n          - apiserver\n        dns:\n          dnsZone: dns-zone\n          provider: hetznerdns-1\n        targetedK8s: dev-cluster\n        pools:\n          - loadbalancer-1\n      - name: apiserver-lb-prod\n        roles:\n          - apiserver\n        dns:\n          dnsZone: dns-zone\n          provider: cloudflare-1\n          hostname: my.fancy.url\n        targetedK8s: prod-cluster\n        pools:\n          - loadbalancer-2\n
"},{"location":"input-manifest/external-templates/","title":"External Templates","text":"

Claudie allows you to plug in your own templates for spawning the infrastructure. Which templates are to be used is specified at the provider level in the Input Manifest, for example:

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: genesis-example\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: genesiscloud\n      providerType: genesiscloud\n      templates:\n        repository: \"https://github.com/berops/claudie-config\"\n        tag: \"v0.9.0\" # optional\n        path: \"templates/terraformer/genesiscloud\"\n      secretRef:\n        name: genesiscloud-secret\n        namespace: secrets\n...\n

The template repository needs to follow a certain convention to work properly. For example, consider an external template repository accessible via a public git repository at:

https://github.com/berops/claudie-config\n

The repository can either contain only the necessary template files, or they can be stored in a subtree. To handle this, you need to pass a path within the public git repository, such as

templates/terraformer/gcp\n

This denotes that the necessary templates for Google Cloud Platform can be found in the subtree at:

claudie-config/templates/terraformer/gcp\n

To deal only with the necessary template files, a sparse checkout is used when downloading the external repository, so that a local mirror is present which is then used to generate the Terraform files. During generation, the subtree at the path from the example above, claudie-config/templates/terraformer/gcp, is traversed to locate the individual template files.
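
For illustration only, the sparse checkout is roughly equivalent to the following git commands (a sketch assuming git 2.25 or newer; Claudie performs this step internally, so you do not need to run it yourself):

git clone --filter=blob:none --no-checkout https://github.com/berops/claudie-config\ncd claudie-config\n# fetch only the provider's template subtree\ngit sparse-checkout set templates/terraformer/gcp\ngit checkout\n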

The complete structure of a single provider's external-template subtree, located at claudie-config/templates/terraformer/gcp, can look as follows:

\u2514\u2500\u2500 terraformer\n    |\u2500\u2500 gcp\n    \u2502   \u251c\u2500\u2500 dns\n    \u2502       \u2514\u2500\u2500 dns.tpl\n    \u2502   \u251c\u2500\u2500 networking\n    \u2502       \u2514\u2500\u2500 networking.tpl\n    \u2502   \u251c\u2500\u2500 nodepool\n    \u2502       \u251c\u2500\u2500 node.tpl\n    \u2502       \u2514\u2500\u2500 node_networking.tpl\n    \u2502   \u2514\u2500\u2500 provider\n    \u2502       \u2514\u2500\u2500 provider.tpl\n    ...\n

Examples of external templates can be found at: https://github.com/berops/claudie-config

"},{"location":"input-manifest/external-templates/#rolling-update","title":"Rolling update","text":"

To handle more specific scenarios where the default templates provided by Claudie do not fit the use case, these external templates can be changed or adapted by the user.

By providing the ability to specify which templates are used when building the InputManifest infrastructure, one common scenario has to be handled by Claudie: rolling updates.

Rolling updates of nodepools are performed when a change to a provider's external templates is registered. Claudie checks that the external repository of the new templates exists and uses them to perform a rolling update of the already built infrastructure. In the example below, when the templates of the hetzner-1 provider are changed, a rolling update of all the nodepools which reference that provider will start, updating a single nodepool at a time.

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: HetznerExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: hetzner-1\n      providerType: hetzner\n      templates:\n-       repository: \"https://github.com/berops/claudie-config\"\n-       path: \"templates/terraformer/hetzner\"\n+       repository: \"https://github.com/YouRepository/claudie-config\"\n+       path: \"templates/terraformer/hetzner\"\n      secretRef:\n        name: hetzner-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: hel1\n          # Datacenter of the nodepool.\n          zone: hel1-dc2\n        count: 1\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n\n      - name: compute-1-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: fsn1\n          # Datacenter of the nodepool.\n          zone: fsn1-dc14\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n      - name: compute-2-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: nbg1\n          # Datacenter of the nodepool.\n          zone: nbg1-dc3\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: hetzner-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz\n          compute:\n            - compute-1-htz\n            - compute-2-htz\n
"},{"location":"input-manifest/gpu-example/","title":"GPUs example","text":"

We will follow the guide from Nvidia to deploy the gpu-operator into a Claudie-built Kubernetes cluster. Make sure you fulfill the necessary requirements listed in the prerequisites before continuing, especially if you decide to use a different cloud provider.

In this example we will be using Genesis Cloud as our provider, with the following config:

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: genesis-example\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: genesiscloud\n      providerType: genesiscloud\n      secretRef:\n        name: genesiscloud-secret\n        namespace: secrets\n\n  nodePools:\n    dynamic:\n    - name: gencloud-cpu\n      providerSpec:\n        name: genesiscloud\n        region: ARC-IS-HAF-1\n      count: 1\n      serverType: vcpu-2_memory-4g_disk-80g\n      image: \"Ubuntu 22.04\"\n      storageDiskSize: 50\n\n    - name: gencloud-gpu\n      providerSpec:\n        name: genesiscloud\n        region: ARC-IS-HAF-1\n      count: 2\n      serverType: vcpu-4_memory-12g_disk-80g_nvidia3080-1\n      image: \"Ubuntu 22.04\"\n      storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: gpu-example\n        version: v1.27.0\n        network: 172.16.2.0/24\n        pools:\n          control:\n            - gencloud-cpu\n          compute:\n            - gencloud-gpu\n

After the InputManifest has been successfully built by Claudie, we deploy the gpu-operator to the gpu-example Kubernetes cluster.

  1. Create a namespace for the gpu-operator.
kubectl create ns gpu-operator\n
kubectl label --overwrite ns gpu-operator pod-security.kubernetes.io/enforce=privileged\n
  2. Add Nvidia Helm repository.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia \\\n    && helm repo update\n
  3. Install the operator.
helm install --wait --generate-name \\\n    -n gpu-operator --create-namespace \\\n    nvidia/gpu-operator\n
  4. Wait for the pods in the gpu-operator namespace to be ready.
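You can check their status with, for example:
kubectl get pods -n gpu-operator\n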
NAME                                                              READY   STATUS      RESTARTS      AGE\ngpu-feature-discovery-4lrbz                                       1/1     Running     0              10m\ngpu-feature-discovery-5x88d                                       1/1     Running     0              10m\ngpu-operator-1708080094-node-feature-discovery-gc-84ff8f47tn7cd   1/1     Running     0              10m\ngpu-operator-1708080094-node-feature-discovery-master-757c27tm6   1/1     Running     0              10m\ngpu-operator-1708080094-node-feature-discovery-worker-495z2       1/1     Running     0              10m\ngpu-operator-1708080094-node-feature-discovery-worker-n8fl6       1/1     Running     0              10m\ngpu-operator-1708080094-node-feature-discovery-worker-znsk4       1/1     Running     0              10m\ngpu-operator-6dfb9bd487-2gxzr                                     1/1     Running     0              10m\nnvidia-container-toolkit-daemonset-jnqwn                          1/1     Running     0              10m\nnvidia-container-toolkit-daemonset-x9t56                          1/1     Running     0              10m\nnvidia-cuda-validator-l4w85                                       0/1     Completed   0              10m\nnvidia-cuda-validator-lqxhq                                       0/1     Completed   0              10m\nnvidia-dcgm-exporter-l9nzt                                        1/1     Running     0              10m\nnvidia-dcgm-exporter-q7c2x                                        1/1     Running     0              10m\nnvidia-device-plugin-daemonset-dbjjl                              1/1     Running     0              10m\nnvidia-device-plugin-daemonset-x5kfs                              1/1     Running     0              10m\nnvidia-driver-daemonset-dcq4g                                     1/1     Running     0              10m\nnvidia-driver-daemonset-sjjlb                                     1/1     Running     0              10m\nnvidia-operator-validator-jbc7r                                   1/1     Running     0              10m\nnvidia-operator-validator-q59mc                                   1/1     Running     0              10m\n

When all pods are ready, you should be able to verify that the GPUs can be used:

kubectl get nodes -o json | jq -r '.items[] | {name:.metadata.name, gpus:.status.capacity.\"nvidia.com/gpu\"}'\n
  5. Deploy an example manifest that uses one of the available GPUs from the worker nodes.
apiVersion: v1\nkind: Pod\nmetadata:\n  name: cuda-vectoradd\nspec:\n  restartPolicy: OnFailure\n  containers:\n    - name: cuda-vectoradd\n      image: \"nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04\"\n      resources:\n        limits:\n          nvidia.com/gpu: 1\n
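
Assuming the manifest above is saved as cuda-vectoradd.yaml (an illustrative filename), apply it with:
kubectl apply -f cuda-vectoradd.yaml\n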

From the logs of the pod you should be able to see:

kubectl logs cuda-vectoradd\n[Vector addition of 50000 elements]\nCopy input data from the host memory to the CUDA device\nCUDA kernel launch with 196 blocks of 256 threads\nCopy output data from the CUDA device to the host memory\nTest PASSED\nDone\n
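
Once you have verified the output, the test pod can be removed:
kubectl delete pod cuda-vectoradd\n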
"},{"location":"input-manifest/providers/aws/","title":"AWS","text":"

AWS cloud provider requires you to input the credentials as an accesskey and a secretkey.

"},{"location":"input-manifest/providers/aws/#compute-and-dns-example","title":"Compute and DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: aws-secret\ndata:\n  accesskey: U0xEVVRLU0hGRE1TSktESUFMQVNTRA==\n  secretkey: aXVoYk9JSk4rb2luL29saWtEU2Fkc25vaVNWU0RzYWNvaW5PVVNIRA==\ntype: Opaque\n
"},{"location":"input-manifest/providers/aws/#create-aws-credentials","title":"Create AWS credentials","text":""},{"location":"input-manifest/providers/aws/#prerequisites","title":"Prerequisites","text":"
  1. Install AWS CLI tools by following this guide.
  2. Set up the AWS CLI on your machine by following this guide.
  3. Ensure that the regions you're planning to use are enabled in your AWS account. You can check the available regions using this guide, and you can enable them using this guide. Otherwise, you may encounter a misleading error suggesting your STS token is invalid.
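As a rough sketch (the account subcommands require a recent AWS CLI, and eu-central-2 is only an example of an opt-in region), you can check and enable a region like this:

# check whether the opt-in region is enabled\naws account get-region-opt-status --region-name eu-central-2\n# enable it if it is not\naws account enable-region --region-name eu-central-2\n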
"},{"location":"input-manifest/providers/aws/#creating-aws-credentials-for-claudie","title":"Creating AWS credentials for Claudie","text":"
  1. Create a user using AWS CLI:

    aws iam create-user --user-name claudie\n

  2. Create a policy document with compute and DNS permissions required by Claudie:

    cat > policy.json <<EOF\n{\n   \"Version\":\"2012-10-17\",\n   \"Statement\":[\n      {\n         \"Effect\":\"Allow\",\n         \"Action\":[\n            \"ec2:*\"\n         ],\n         \"Resource\":\"*\"\n      },\n      {\n         \"Effect\":\"Allow\",\n         \"Action\":[\n            \"route53:*\"\n         ],\n         \"Resource\":\"*\"\n      }\n   ]\n}\nEOF\n

    DNS permissions

    Exclude route53 permissions from the policy document, if you prefer not to use AWS as the DNS provider.

  3. Attach the policy to the claudie user:

    aws iam put-user-policy --user-name claudie --policy-name ec2-and-dns-access --policy-document file://policy.json\n

  4. Create access keys for claudie user:

    aws iam create-access-key --user-name claudie\n
    {\n   \"AccessKey\":{\n      \"UserName\":\"claudie\",\n      \"AccessKeyId\":\"AKIAIOSFODNN7EXAMPLE\",\n      \"Status\":\"Active\",\n      \"SecretAccessKey\":\"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\",\n      \"CreateDate\":\"2018-12-14T17:34:16Z\"\n   }\n}\n

"},{"location":"input-manifest/providers/aws/#dns-setup","title":"DNS setup","text":"

If you wish to use AWS as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public hosted zone by following this guide.
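
For example, a public hosted zone can also be created with the AWS CLI (example.com and the caller reference below are placeholders for your own values):

aws route53 create-hosted-zone --name example.com --caller-reference claudie-example-1\n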

AWS is not my domain registrar

If you haven't acquired a domain via AWS and wish to utilize AWS for hosting your zone, you can refer to this guide on AWS nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to AWS.

"},{"location":"input-manifest/providers/aws/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/aws/#create-a-secret-for-aws-provider","title":"Create a secret for AWS provider","text":"

The secret for an AWS provider must include the following mandatory fields: accesskey and secretkey.

kubectl create secret generic aws-secret-1 --namespace=mynamespace --from-literal=accesskey='SLDUTKSHFDMSJKDIALASSD' --from-literal=secretkey='iuhbOIJN+oin/olikDSadsnoiSVSDsacoinOUSHD'\n
"},{"location":"input-manifest/providers/aws/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":"
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: AWSExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n\n  providers:\n    - name: aws-1\n      providerType: aws\n      secretRef:\n        name: aws-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-aws\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-1\n          # Region of the nodepool.\n          region: eu-central-1\n          # Availability zone of the nodepool.\n          zone: eu-central-1a\n        count: 1\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-0965bd5ba4d59211c\n\n      - name: compute-1-aws\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-1\n          # Region of the nodepool.\n          region: eu-central-2\n          # Availability zone of the nodepool.\n          zone: eu-central-2a\n        count: 2\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-0e4d1886bf4bb88d5\n        storageDiskSize: 50\n\n      - name: compute-2-aws\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-1\n          # Region of the nodepool.\n          region: eu-central-2\n          # Availability zone of the nodepool.\n          zone: eu-central-2a\n        count: 2\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-0965bd5ba4d59211c\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: aws-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-aws\n          compute:\n            - compute-1-aws\n            - compute-2-aws\n
"},{"location":"input-manifest/providers/aws/#multi-provider-multi-region-clusters-example","title":"Multi provider, multi region clusters example","text":"
kubectl create secret generic aws-secret-1 --namespace=mynamespace --from-literal=accesskey='SLDUTKSHFDMSJKDIALASSD' --from-literal=secretkey='iuhbOIJN+oin/olikDSadsnoiSVSDsacoinOUSHD'\nkubectl create secret generic aws-secret-2 --namespace=mynamespace --from-literal=accesskey='ODURNGUISNFAIPUNUGFINB' --from-literal=secretkey='asduvnva+skd/ounUIBPIUjnpiuBNuNipubnPuip'\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: AWSExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n\n  providers:\n    - name: aws-1\n      providerType: aws\n      secretRef:\n        name: aws-secret-1\n        namespace: mynamespace\n    - name: aws-2\n      providerType: aws\n      secretRef:\n        name: aws-secret-2\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-aws-1\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-1\n          region: eu-central-1\n          # Availability zone of the nodepool.\n          zone: eu-central-1a\n        count: 1\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-0965bd5ba4d59211c\n\n      - name: control-aws-2\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-2\n          # Region of the nodepool.\n          region: eu-north-1\n          # Availability zone of the nodepool.\n          zone: eu-north-1a\n        count: 2\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-03df6dea56f8aa618\n\n      - name: compute-aws-1\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-1\n          # Region of the nodepool.\n          region: eu-central-2\n          # Availability zone of the nodepool.\n          zone: eu-central-2a\n        count: 2\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-0e4d1886bf4bb88d5\n        storageDiskSize: 50\n\n      - name: compute-aws-2\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-2\n          # Region of the nodepool.\n          region: eu-north-3\n          # Availability zone of the nodepool.\n          zone: eu-north-3a\n        count: 2\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-03df6dea56f8aa618\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: aws-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-aws-1\n            - control-aws-2\n          compute:\n            - compute-aws-1\n            - compute-aws-2\n
"},{"location":"input-manifest/providers/azure/","title":"Azure","text":"

Azure provider requires you to input clientsecret, subscriptionid, tenantid, and clientid.

"},{"location":"input-manifest/providers/azure/#compute-and-dns-example","title":"Compute and DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: azure-secret\ndata:\n  clientid: QWJjZH5FRmd+SDZJamtsc35BQkMxNXNFRkdLNTRzNzhYfk9sazk=\n  # all resources you define will be charged here\n  clientsecret: NmE0ZGZzZzctc2Q0di1mNGFkLWRzdmEtYWQ0djYxNmZkNTEy\n  subscriptionid: NTRjZGFmYTUtc2R2cy00NWRzLTU0NnMtZGY2NTFzZmR0NjE0\n  tenantid: MDI1NXNjMjMtNzZ3ZS04N2c2LTk2NGYtYWJjMWRlZjJnaDNs\ntype: Opaque\n
"},{"location":"input-manifest/providers/azure/#create-azure-credentials","title":"Create Azure credentials","text":""},{"location":"input-manifest/providers/azure/#prerequisites","title":"Prerequisites","text":"
  1. Install Azure CLI by following this guide.
  2. Log in to Azure by following this guide.
"},{"location":"input-manifest/providers/azure/#creating-azure-credentials-for-claudie","title":"Creating Azure credentials for Claudie","text":"
  1. Login to Azure with the following command:

    az login\n

  2. Create a permissions file for the new role that the Claudie service principal will use:

    cat > policy.json <<EOF\n{\n   \"Name\":\"Resource Group Management\",\n   \"Id\":\"bbcd72a7-2285-48ef-bn72-f606fba81fe7\",\n   \"IsCustom\":true,\n   \"Description\":\"Create and delete Resource Groups.\",\n   \"Actions\":[\n      \"Microsoft.Resources/subscriptions/resourceGroups/write\",\n      \"Microsoft.Resources/subscriptions/resourceGroups/delete\"\n   ],\n   \"AssignableScopes\":[\"/\"]\n}\nEOF\n

  3. Create a role based on the policy document:

    az role definition create --role-definition policy.json\n

  4. Create a service principal to access virtual machine resources as well as DNS:

    az ad sp create-for-rbac --name claudie-sp\n
    {\n  \"clientId\": \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\n  \"displayName\": \"claudie-sp\",\n  \"clientSecret\": \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\n  \"tenant\": \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n}\n

  5. Assign required roles for the service principal:

    {\n  az role assignment create --assignee claudie-sp --role \"Virtual Machine Contributor\"\n  az role assignment create --assignee claudie-sp --role \"Network Contributor\"\n  az role assignment create --assignee claudie-sp --role \"Resource Group Management\"\n}\n

"},{"location":"input-manifest/providers/azure/#dns-requirements","title":"DNS requirements","text":"

If you wish to use Azure as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public DNS zone by following this guide.

Azure is not my domain registrar

If you haven't acquired a domain via Azure and wish to utilize Azure for hosting your zone, you can refer to this guide on Azure nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to Azure.

"},{"location":"input-manifest/providers/azure/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/azure/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":""},{"location":"input-manifest/providers/azure/#create-a-secret-for-azure-provider","title":"Create a secret for Azure provider","text":"

The secret for an Azure provider must include the following mandatory fields: clientsecret, subscriptionid, tenantid, and clientid.

kubectl create secret generic azure-secret-1 --namespace=mynamespace --from-literal=clientsecret='Abcd~EFg~H6Ijkls~ABC15sEFGK54s78X~Olk9' --from-literal=subscriptionid='6a4dfsg7-sd4v-f4ad-dsva-ad4v616fd512' --from-literal=tenantid='54cdafa5-sdvs-45ds-546s-df651sfdt614' --from-literal=clientid='0255sc23-76we-87g6-964f-abc1def2gh3l'\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: AzureExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: azure-1\n      providerType: azure\n      secretRef:\n        name: azure-secret-1\n        namespace: mynamespace\n  nodePools:\n    dynamic:\n      - name: control-az\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-1\n          # Location of the nodepool.\n          region: West Europe\n          # Zone of the nodepool.\n          zone: \"1\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n\n      - name: compute-1-az\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-1\n          # Location of the nodepool.\n          region: Germany West Central\n          # Zone of the nodepool.\n          zone: \"1\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n        storageDiskSize: 50\n\n      - name: compute-2-az\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-1\n          # Location of the nodepool.\n          region: West Europe\n          # Zone of the nodepool.\n          zone: \"1\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: azure-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-az\n          compute:\n            - compute-2-az\n            - compute-1-az\n
"},{"location":"input-manifest/providers/azure/#multi-provider-multi-region-clusters-example","title":"Multi provider, multi region clusters example","text":"
kubectl create secret generic azure-secret-1 --namespace=mynamespace --from-literal=clientsecret='Abcd~EFg~H6Ijkls~ABC15sEFGK54s78X~Olk9' --from-literal=subscriptionid='6a4dfsg7-sd4v-f4ad-dsva-ad4v616fd512' --from-literal=tenantid='54cdafa5-sdvs-45ds-546s-df651sfdt614' --from-literal=clientid='0255sc23-76we-87g6-964f-abc1def2gh3l'\n\nkubectl create secret generic azure-secret-2 --namespace=mynamespace --from-literal=clientsecret='Efgh~ijkL~on43noi~NiuscviBUIds78X~UkL7' --from-literal=subscriptionid='0965bd5b-usa3-as3c-ads1-csdaba6fd512' --from-literal=tenantid='55safa5d-dsfg-546s-45ds-d51251sfdaba' --from-literal=clientid='076wsc23-sdv2-09cA-8sd9-oigv23npn1p2'\n
name: AzureExampleManifest\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: AzureExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: azure-1\n      providerType: azure\n      secretRef:\n        name: azure-secret-1\n        namespace: mynamespace\n\n    - name: azure-2\n      providerType: azure\n      secretRef:\n        name: azure-secret-2\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-az-1\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-1\n          # Location of the nodepool.\n          region: West Europe\n          # Zone of the nodepool.\n          zone: \"1\"\n        count: 1\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n\n      - name: control-az-2\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-2\n          # Location of the nodepool.\n          region: Germany West Central\n          # Zone of the nodepool.\n          zone: \"2\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n\n      - name: compute-az-1\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-1\n          # Location of the nodepool.\n          region: Germany West Central\n          # Zone of the nodepool.\n          zone: \"2\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n        storageDiskSize: 50\n\n      - name: compute-az-2\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-2\n          # Location of the nodepool.\n          region: West Europe\n          # Zone of the nodepool.\n          zone: \"3\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: azure-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-az-1\n            - control-az-2\n          compute:\n            - compute-az-1\n            - compute-az-2\n
"},{"location":"input-manifest/providers/cloudflare/","title":"Cloudflare","text":"

The Cloudflare provider requires an apitoken field in string format.

"},{"location":"input-manifest/providers/cloudflare/#dns-example","title":"DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: cloudflare-secret\ndata:\n  apitoken: a3NsSVNBODc4YTZldFlBZlhZY2c1aVl5ckZHTmxDeGM=\ntype: Opaque\n
"},{"location":"input-manifest/providers/cloudflare/#create-cloudflare-credentials","title":"Create Cloudflare credentials","text":"

You can create a Cloudflare API token by following this guide. The required permissions for the zone you want to use are:

Zone:Read\nDNS:Read\nDNS:Edit\n
"},{"location":"input-manifest/providers/cloudflare/#dns-setup","title":"DNS setup","text":"

If you wish to use Cloudflare as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public DNS zone by following this guide.

Cloudflare is not my domain registrar

If you haven't acquired a domain via Cloudflare and wish to utilize Cloudflare for hosting your zone, you can refer to this guide on Cloudflare nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to Cloudflare.

"},{"location":"input-manifest/providers/cloudflare/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/cloudflare/#load-balancing-example","title":"Load balancing example","text":"

Showcase example

To make this example functional, you need to specify the control plane and compute nodepools. This showcase will produce an error if used as is.

"},{"location":"input-manifest/providers/cloudflare/#create-a-secret-for-cloudflare-and-aws-providers","title":"Create a secret for Cloudflare and AWS providers","text":"

The secret for a Cloudflare provider must include the following mandatory field: apitoken.

kubectl create secret generic cloudflare-secret-1 --namespace=mynamespace --from-literal=apitoken='kslISA878a6etYAfXYcg5iYyrFGNlCxc'\n

The secret for an AWS provider must include the following mandatory fields: accesskey and secretkey.

kubectl create secret generic aws-secret-1 --namespace=mynamespace --from-literal=accesskey='SLDUTKSHFDMSJKDIALASSD' --from-literal=secretkey='iuhbOIJN+oin/olikDSadsnoiSVSDsacoinOUSHD'\n

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: CloudflareExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: cloudflare-1\n      providerType: cloudflare\n      secretRef:\n        name: cloudflare-secret-1\n        namespace: mynamespace\n\n    - name: aws-1\n      providerType: aws\n      secretRef:\n        name: aws-secret-1\n        namespace: mynamespace\n\n  nodePools: \n    dynamic:\n      - name: loadbalancer\n        providerSpec:\n          name: aws-1\n          region: eu-central-1\n          zone: eu-central-1c\n        count: 2\n        serverType: t3.medium\n        image: ami-0965bd5ba4d59211c\n\n  kubernetes:\n    clusters:\n      - name: cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control: []\n          compute: []\n\n  loadBalancers:\n    roles:\n      - name: apiserver\n        protocol: tcp\n        port: 6443\n        targetPort: 6443\n        targetPools: []\n    clusters:\n      - name: apiserver-lb-prod\n        roles:\n          - apiserver\n        dns:\n          dnsZone: dns-zone\n          provider: cloudflare-1\n          hostname: my.fancy.url\n        targetedK8s: prod-cluster\n        pools:\n          - loadbalancer\n
"},{"location":"input-manifest/providers/gcp/","title":"GCP","text":"

The GCP provider requires you to input multiline credentials as well as the GCP project ID gcpproject in which to provision resources.

"},{"location":"input-manifest/providers/gcp/#compute-and-dns-example","title":"Compute and DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: gcp-secret\ndata:\n  credentials: >-\n    ewogICAgICAgICAidHlwZSI6InNlcnZpY2VfYWNjb3VudCIsCiAgICAgICAgICJwcm9qZWN0X2lkIjoicHJvamVjdC1jbGF1ZGllIiwKICAgICAgICAgInByaXZhdGVfa2V5X2lkIjoiYnNrZGxvODc1czkwODczOTQ3NjNlYjg0ZTQwNzkwM2xza2RpbXA0MzkiLAogICAgICAgICAicHJpdmF0ZV9rZXkiOiItLS0tLUJFR0lOIFBSSVZBVEUgS0VZLS0tLS1cblNLTE9vc0tKVVNEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpXG4tLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tXG4iLAogICAgICAgICAiY2xpZW50X2VtYWlsIjoiY2xhdWRpZUBwcm9qZWN0LWNsYXVkaWUtMTIzNDU2LmlhbS5nc2VydmljZWFjY291bnQuY29tIiwKICAgICAgICAgImNsaWVudF9pZCI6IjEwOTg3NjU0MzIxMTIzNDU2Nzg5MCIsCiAgICAgICAgICJhdXRoX3VyaSI6Imh0dHBzOi8vYWNjb3VudHMuZ29vZ2xlLmNvbS9vL29hdXRoMi9hdXRoIiwKICAgICAgICAgInRva2VuX3VyaSI6Imh0dHBzOi8vb2F1dGgyLmdvb2dsZWFwaXMuY29tL3Rva2VuIiwKICAgICAgICAgImF1dGhfcHJvdmlkZXJfeDUwOV9jZXJ0X3VybCI6Imh0dHBzOi8vd3d3Lmdvb2dsZWFwaXMuY29tL29hdXRoMi92MS9jZXJ0cyIsCiAgICAgICAgICJjbGllbnRfeDUwOV9jZXJ0X3VybCI6Imh0dHBzOi8vd3d3Lmdvb2dsZWFwaXMuY29tL3JvYm90L3YxL21ldGFkYXRhL3g1MDkvY2xhdWRpZSU0MGNsYXVkaWUtcHJvamVjdC0xMjM0NTYuaWFtLmdzZXJ2aWNlYWNjb3VudC5jb20iCiAgICAgIH0=\n  gcpproject: cHJvamVjdC1jbGF1ZGll # base64 created from GCP project ID\ntype: Opaque\n
"},{"location":"input-manifest/providers/gcp/#create-gcp-credentials","title":"Create GCP credentials","text":""},{"location":"input-manifest/providers/gcp/#prerequisites","title":"Prerequisites","text":"
  1. Install the gcloud CLI on your machine by following this guide.
  2. Initialize gcloud CLI by following this guide.
  3. Authorize the gcloud CLI by following this guide.
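As a rough sketch (see the linked guides for the authoritative steps), the initialization and authorization typically amount to:
gcloud init\ngcloud auth login\n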
"},{"location":"input-manifest/providers/gcp/#creating-gcp-credentials-for-claudie","title":"Creating GCP credentials for Claudie","text":"
  1. Create a GCP project:

    gcloud projects create claudie-project\n

  2. Set the current project to claudie-project:

    gcloud config set project claudie-project\n

  3. Attach billing account to your project:

    gcloud alpha billing accounts projects link claudie-project (--account-id=ACCOUNT_ID | --billing-account=ACCOUNT_ID)\n

  4. Enable Compute Engine API and Cloud DNS API:

    {\n  gcloud services enable compute.googleapis.com\n  gcloud services enable dns.googleapis.com\n}\n

  5. Create a service account:

    gcloud iam service-accounts create claudie-sa\n

  6. Attach roles to the service account:

    {\n  gcloud projects add-iam-policy-binding claudie-project --member=serviceAccount:claudie-sa@claudie-project.iam.gserviceaccount.com --role=roles/compute.admin\n  gcloud projects add-iam-policy-binding claudie-project --member=serviceAccount:claudie-sa@claudie-project.iam.gserviceaccount.com --role=roles/dns.admin\n}\n

  7. Create a service account key for claudie-sa:

    gcloud iam service-accounts keys create claudie-credentials.json --iam-account=claudie-sa@claudie-project.iam.gserviceaccount.com\n

"},{"location":"input-manifest/providers/gcp/#dns-setup","title":"DNS setup","text":"

If you wish to use GCP as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public DNS zone by following this guide.

GCP is not my domain registrar

If you haven't acquired a domain via GCP and wish to utilize GCP for hosting your zone, you can refer to this guide on GCP nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to GCP.

"},{"location":"input-manifest/providers/gcp/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/gcp/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":""},{"location":"input-manifest/providers/gcp/#create-a-secret-for-cloudflare-and-gcp-providers","title":"Create a secret for Cloudflare and GCP providers","text":"

The secret for a GCP provider must include the following mandatory fields: gcpproject and credentials.

# The ./claudie-credentials.json file is the file created in #Creating GCP credentials for Claudie step 7.\nkubectl create secret generic gcp-secret-1 --namespace=mynamespace --from-literal=gcpproject='project-claudie' --from-file=credentials=./claudie-credentials.json\n

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: GCPExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: gcp-1\n      providerType: gcp\n      secretRef:\n        name: gcp-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-gcp\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-1\n          # Region of the nodepool.\n          region: europe-west1\n          # Zone of the nodepool.\n          zone: europe-west1-c\n        count: 1\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n\n      - name: compute-1-gcp\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-1\n          # Region of the nodepool.\n          region: europe-west3\n          # Zone of the nodepool.\n          zone: europe-west3-a\n        count: 2\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n        storageDiskSize: 50\n\n      - name: compute-2-gcp\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-1\n          # Region of the nodepool.\n          region: europe-west2\n          # Zone of the nodepool.\n          zone: europe-west2-a\n        count: 2\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: gcp-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-gcp\n          compute:\n            - compute-1-gcp\n            - compute-2-gcp\n
"},{"location":"input-manifest/providers/gcp/#multi-provider-multi-region-clusters-example","title":"Multi provider, multi region clusters example","text":""},{"location":"input-manifest/providers/gcp/#create-a-secret-for-cloudflare-and-gcp-providers_1","title":"Create a secret for Cloudflare and GCP providers","text":"

The secret for a GCP provider must include the following mandatory fields: gcpproject and credentials.

# The ./claudie-credentials.json file is the file created in #Creating GCP credentials for Claudie step 7.\nkubectl create secret generic gcp-secret-1 --namespace=mynamespace --from-literal=gcpproject='project-claudie' --from-file=credentials=./claudie-credentials.json\nkubectl create secret generic gcp-secret-2 --namespace=mynamespace --from-literal=gcpproject='project-claudie' --from-file=credentials=./claudie-credentials-2.json\n

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: GCPExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: gcp-1\n      providerType: gcp\n      secretRef:\n        name: gcp-secret-1\n        namespace: mynamespace\n    - name: gcp-2\n      providerType: gcp\n      secretRef:\n        name: gcp-secret-2\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-gcp-1\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-1\n          # Region of the nodepool.\n          region: europe-west1\n          # Zone of the nodepool.\n          zone: europe-west1-c\n        count: 1\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n\n      - name: control-gcp-2\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-2\n          # Region of the nodepool.\n          region: europe-west1\n          # Zone of the nodepool.\n          zone: europe-west1-a\n        count: 2\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n\n      - name: compute-gcp-1\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-1\n          # Region of the nodepool.\n          region: europe-west3\n          # Zone of the nodepool.\n          zone: europe-west3-a\n        count: 2\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n        storageDiskSize: 50\n\n      - name: compute-gcp-2\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-2\n          # Region of the nodepool.\n          region: europe-west1\n          # Zone of the nodepool.\n          zone: europe-west1-c\n        count: 2\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: gcp-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-gcp-1\n            - control-gcp-2\n          compute:\n            - compute-gcp-1\n            - compute-gcp-2\n
"},{"location":"input-manifest/providers/genesiscloud/","title":"Genesis Cloud","text":"

The Genesis Cloud provider requires an apitoken field in string format.

"},{"location":"input-manifest/providers/genesiscloud/#compute-example","title":"Compute example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: genesiscloud-secret\ndata:\n  apitoken: GCAAAZZZZnnnnNNNNxXXX123BBcc123qqcva\ntype: Opaque\n
"},{"location":"input-manifest/providers/genesiscloud/#create-genesis-cloud-api-token","title":"Create Genesis Cloud API token","text":"

You can create a Genesis Cloud API token by following this guide. The token must have access to the following compute resources:

Instances, Network, Volumes\n
"},{"location":"input-manifest/providers/genesiscloud/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/genesiscloud/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":""},{"location":"input-manifest/providers/genesiscloud/#create-a-secret-for-genesis-cloud-provider","title":"Create a secret for Genesis cloud provider","text":"
kubectl create secret generic genesiscloud-secret --namespace=mynamespace --from-literal=apitoken='GCAAAZZZZnnnnNNNNxXXX123BBcc123qqcva'\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: genesis-example\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: genesiscloud\n      providerType: genesiscloud\n      secretRef:\n        name: genesiscloud-secret\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control\n        providerSpec:\n          name: genesiscloud\n          region: ARC-IS-HAF-1\n        count: 1\n        serverType: vcpu-2_memory-4g_disk-80g\n        image: \"Ubuntu 22.04\"\n        storageDiskSize: 50\n\n      - name: compute\n        providerSpec:\n          name: genesiscloud\n          region: ARC-IS-HAF-1\n        count: 3\n        serverType: vcpu-2_memory-4g_disk-80g\n        image: \"Ubuntu 22.04\"\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: genesiscloud-cluster\n        version: v1.27.0\n        network: 172.16.2.0/24\n        pools:\n          control:\n            - control\n          compute:\n            - compute\n
"},{"location":"input-manifest/providers/hetzner/","title":"Hetzner","text":"

The Hetzner provider requires a credentials field (the API token) in string format, and the Hetzner DNS provider requires an apitoken field in string format.

"},{"location":"input-manifest/providers/hetzner/#compute-example","title":"Compute example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: hetzner-secret\ndata:\n  credentials: a3NsSVNBODc4YTZldFlBZlhZY2c1aVl5ckZHTmxDeGNJQ28wNjBIVkV5Z2pGczIxbnNrZTc2a3NqS2tvMjFscA==\ntype: Opaque\n
"},{"location":"input-manifest/providers/hetzner/#dns-example","title":"DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: hetznerdns-secret\ndata:\n  apitoken: a1V0UmcxcGdqQ1JhYXBQbWQ3cEFJalZnaHVyWG8xY24=\ntype: Opaque\n
"},{"location":"input-manifest/providers/hetzner/#create-hetzner-api-credentials","title":"Create Hetzner API credentials","text":"

You can create Hetzner API credentials by following this guide. The required permissions for the project you want to use are:

Read & Write\n
"},{"location":"input-manifest/providers/hetzner/#create-hetzner-dns-credentials","title":"Create Hetzner DNS credentials","text":"

You can create Hetzner DNS credentials by following this guide.

DNS provider specification

The provider for DNS is different from the one for the Cloud.
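
For example, a separate secret for the Hetzner DNS provider can be created like this (the namespace and token value are illustrative):

kubectl create secret generic hetznerdns-secret --namespace=mynamespace --from-literal=apitoken='kUtRg1pgjCRaapPmd7pAIjVghurXo1cn'\n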

"},{"location":"input-manifest/providers/hetzner/#dns-setup","title":"DNS setup","text":"

If you wish to use Hetzner as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public DNS zone by following this guide.

Hetzner is not my domain registrar

If you haven't acquired a domain via Hetzner and wish to utilize Hetzner for hosting your zone, you can refer to this guide on Hetzner nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to Hetzner.

"},{"location":"input-manifest/providers/hetzner/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/hetzner/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":""},{"location":"input-manifest/providers/hetzner/#create-a-secret-for-hetzner-provider","title":"Create a secret for Hetzner provider","text":"

The secret for a Hetzner provider must include the following mandatory field: credentials.

kubectl create secret generic hetzner-secret-1 --namespace=mynamespace --from-literal=credentials='kslISA878a6etYAfXYcg5iYyrFGNlCxcICo060HVEygjFs21nske76ksjKko21lp'\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: HetznerExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: hetzner-1\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: hel1\n          # Datacenter of the nodepool.\n          zone: hel1-dc2\n        count: 1\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n\n      - name: compute-1-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: fsn1\n          # Datacenter of the nodepool.\n          zone: fsn1-dc14\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n      - name: compute-2-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: nbg1\n          # Datacenter of the nodepool.\n          zone: nbg1-dc3\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: hetzner-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz\n          compute:\n            - compute-1-htz\n            - compute-2-htz\n
"},{"location":"input-manifest/providers/hetzner/#multi-provider-multi-region-clusters-example","title":"Multi provider, multi region clusters example","text":""},{"location":"input-manifest/providers/hetzner/#create-a-secret-for-hetzner-provider_1","title":"Create a secret for Hetzner provider","text":"

The secret for a Hetzner provider must include the following mandatory field: credentials.

kubectl create secret generic hetzner-secret-1 --namespace=mynamespace --from-literal=credentials='kslISA878a6etYAfXYcg5iYyrFGNlCxcICo060HVEygjFs21nske76ksjKko21lp'\nkubectl create secret generic hetzner-secret-2 --namespace=mynamespace --from-literal=credentials='kslIIOUYBiuui7iGBYIUiuybpiUB87bgPyuCo060HVEygjFs21nske76ksjKko21l'\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: HetznerExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: hetzner-1\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-1\n        namespace: mynamespace\n    - name: hetzner-2\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-2\n        namespace: mynamespace        \n\n  nodePools:\n    dynamic:\n      - name: control-htz-1\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: hel1\n          # Datacenter of the nodepool.\n          zone: hel1-dc2\n        count: 1\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n\n      - name: control-htz-2\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-2\n          # Region of the nodepool.\n          region: fsn1\n          # Datacenter of the nodepool.\n          zone: fsn1-dc14\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n\n      - name: compute-htz-1\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: fsn1\n          # Datacenter of the nodepool.\n          zone: fsn1-dc14\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n      - name: compute-htz-2\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-2\n          # Region of the nodepool.\n          region: nbg1\n          # Datacenter of the nodepool.\n          zone: nbg1-dc3\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: hetzner-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz-1\n            - control-htz-2\n          compute:\n            - compute-htz-1\n            - compute-htz-2\n
"},{"location":"input-manifest/providers/oci/","title":"OCI","text":"

The OCI provider requires you to input privatekey, keyfingerprint, tenancyocid, userocid, and compartmentocid.

"},{"location":"input-manifest/providers/oci/#compute-and-dns-example","title":"Compute and DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: oci-secret\ndata:\n  compartmentocid: b2NpZDIuY29tcGFydG1lbnQub2MyLi5hYWFhYWFhYWEycnNmdmx2eGMzNG8wNjBrZmR5Z3NkczIxbnNrZTc2a3Nqa2tvMjFscHNkZnNm    \n  keyfingerprint: YWI6Y2Q6M2Y6MzQ6MzM6MjI6MzI6MzQ6NTQ6NTQ6NDU6NzY6NzY6Nzg6OTg6YWE=\n  privatekey: >-\n    LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyLz09CiAgICAgICAgLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0=\n  tenancyocid: b2NpZDIudGVuYW5jeS5vYzIuLmFhYWFhYWFheXJzZnZsdnhjMzRvMDYwa2ZkeWdzZHMyMW5za2U3NmtzamtrbzIxbHBzZGZzZnNnYnJ0Z2hz\n  userocid: b2NpZDIudXNlci5vYzIuLmFhYWFhYWFhYWFueXJzZnZsdnhjMzRvMDYwa2ZkeWdzZHMyMW5za2U3NmtzamtrbzIxbHBzZGZzZg==\ntype: Opaque\n
"},{"location":"input-manifest/providers/oci/#create-oci-credentials","title":"Create OCI credentials","text":""},{"location":"input-manifest/providers/oci/#prerequisites","title":"Prerequisites","text":"
  1. Install OCI CLI by following this guide.
  2. Configure OCI CLI by following this guide.
"},{"location":"input-manifest/providers/oci/#creating-oci-credentials-for-claudie","title":"Creating OCI credentials for Claudie","text":"
  1. Export your tenant id:

    export tenancy_ocid=\"ocid\"\n

    Find your tenant id

    You can find it under the Identity & Security tab, under the Compartments option.

  2. Create OCI compartment where Claudie deploys its resources:

    {\n  oci iam compartment create --name claudie-compartment --description claudie-compartment --compartment-id $tenancy_ocid\n}\n

  3. Create the claudie user:

    oci iam user create --name claudie-user --compartment-id $tenancy_ocid --description claudie-user --email <email address>\n

  4. Create a group that will hold permissions for the user:

    oci iam group create --name claudie-group --compartment-id $tenancy_ocid --description claudie-group\n

  5. Generate policy file with necessary permissions:

    {\ncat > policy.txt <<EOF\n[\n  \"Allow group claudie-group to manage instance-family in compartment claudie-compartment\",\n  \"Allow group claudie-group to manage volume-family in compartment claudie-compartment\",\n  \"Allow group claudie-group to manage virtual-network-family in tenancy\",\n  \"Allow group claudie-group to manage dns-zones in compartment claudie-compartment\",\n  \"Allow group claudie-group to manage dns-records in compartment claudie-compartment\"\n]\nEOF\n}\n

  6. Create a policy with required permissions:

    oci iam policy create --name claudie-policy --statements file://policy.txt --compartment-id $tenancy_ocid --description claudie-policy\n

  7. Declare user_ocid and group_ocid:

    {\n  group_ocid=$(oci iam group list | jq -r '.data[] | select(.name == \"claudie-group\") | .id')\n  user_ocid=$(oci iam user list | jq -r '.data[] | select(.name == \"claudie-user\") | .id')\n}\n

  8. Attach claudie-user to claudie-group:

    oci iam group add-user --group-id $group_ocid --user-id $user_ocid\n

  9. Generate a key pair for claudie-user and enter N/A for no passphrase:

    oci setup keys --key-name claudie-user --output-dir .\n

  10. Upload the public key to use for the claudie-user:

    oci iam user api-key upload --user-id $user_ocid --key-file claudie-user_public.pem\n

  11. Export compartment_ocid and fingerprint to use them when creating the provider secret:

      compartment_ocid=$(oci iam compartment list | jq -r '.data[] | select(.name == \"claudie-compartment\") | .id')\n  fingerprint=$(oci iam user api-key list --user-id $user_ocid | jq -r '.data[0].fingerprint')\n

"},{"location":"input-manifest/providers/oci/#dns-setup","title":"DNS setup","text":"

If you wish to use OCI as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public DNS zone by following this guide.

OCI is not my domain registrar

You cannot buy a domain from Oracle at this time, so to use a zone hosted in OCI you will need to update the nameservers of your domain by following this guide on changing nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to OCI.

"},{"location":"input-manifest/providers/oci/#iam-policies-required-by-claudie","title":"IAM policies required by Claudie","text":"
\"Allow group <GROUP_NAME> to manage instance-family in compartment <COMPARTMENT_NAME>\"\n\"Allow group <GROUP_NAME> to manage volume-family in compartment <COMPARTMENT_NAME>\"\n\"Allow group <GROUP_NAME> to manage virtual-network-family in tenancy\"\n\"Allow group <GROUP_NAME> to manage dns-zones in compartment <COMPARTMENT_NAME>\",\n\"Allow group <GROUP_NAME> to manage dns-records in compartment <COMPARTMENT_NAME>\",\n
"},{"location":"input-manifest/providers/oci/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/oci/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":""},{"location":"input-manifest/providers/oci/#create-a-secret-for-oci-provider","title":"Create a secret for OCI provider","text":"

The secret for an OCI provider must include the following mandatory fields: compartmentocid, userocid, tenancyocid, keyfingerprint and privatekey.

# Refer to values exported in \"Creating OCI credentials for Claudie\" section\nkubectl create secret generic oci-secret-1 --namespace=mynamespace --from-literal=compartmentocid=$compartment_ocid --from-literal=userocid=$user_ocid --from-literal=tenancyocid=$tenancy_ocid --from-literal=keyfingerprint=$fingerprint --from-file=privatekey=./claudie-user_public.pem\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: OCIExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: oci-1\n      providerType: oci\n      secretRef:\n        name: oci-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-oci\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-milan-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-MILAN-1-AD-1\n        count: 1\n        # VM shape name.\n        serverType: VM.Standard2.2\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n\n      - name: compute-1-oci\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-frankfurt-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-FRANKFURT-1-AD-1\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard2.1\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n      - name: compute-2-oci\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-frankfurt-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-FRANKFURT-1-AD-2\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard2.1\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: oci-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-oci\n          compute:\n            - compute-1-oci\n            - compute-2-oci\n
"},{"location":"input-manifest/providers/oci/#multi-provider-multi-region-clusters-example","title":"Multi provider, multi region clusters example","text":""},{"location":"input-manifest/providers/oci/#create-a-secret-for-oci-provider_1","title":"Create a secret for OCI provider","text":"

The secret for an OCI provider must include the following mandatory fields: compartmentocid, userocid, tenancyocid, keyfingerprint and privatekey.

# Refer to values exported in \"Creating OCI credentials for Claudie\" section\nkubectl create secret generic oci-secret-1 --namespace=mynamespace --from-literal=compartmentocid=$compartment_ocid --from-literal=userocid=$user_ocid --from-literal=tenancyocid=$tenancy_ocid --from-literal=keyfingerprint=$fingerprint --from-file=privatekey=./claudie-user_public.pem\n\nkubectl create secret generic oci-secret-2 --namespace=mynamespace --from-literal=compartmentocid=$compartment_ocid2 --from-literal=userocid=$user_ocid2 --from-literal=tenancyocid=$tenancy_ocid2 --from-literal=keyfingerprint=$fingerprint2 --from-file=privatekey=./claudie-user_public2.pem\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: OCIExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: oci-1\n      providerType: oci\n      secretRef:\n        name: oci-secret-1\n        namespace: mynamespace\n    - name: oci-2\n      providerType: oci\n      secretRef:\n        name: oci-secret-2\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-oci-1\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-milan-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-MILAN-1-AD-1\n        count: 1\n        # VM shape name.\n        serverType: VM.Standard2.2\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n\n      - name: control-oci-2\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-2\n          # Region of the nodepool.\n          region: eu-frankfurt-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-FRANKFURT-1-AD-3\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard2.1\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n\n      - name: compute-oci-1\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-frankfurt-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-FRANKFURT-1-AD-1\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard2.1\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n      - name: compute-oci-2\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-2\n          # Region of the nodepool.\n          region: eu-milan-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-MILAN-1-AD-1\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard2.1\n        # OCID of the image.\n        # Make sure to update it according to the region..\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: oci-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-oci-1\n            - control-oci-2\n          compute:\n            - compute-oci-1\n            - compute-oci-2\n
"},{"location":"input-manifest/providers/oci/#flex-instances-example","title":"Flex instances example","text":""},{"location":"input-manifest/providers/oci/#create-a-secret-for-oci-provider_2","title":"Create a secret for OCI provider","text":"

The secret for an OCI provider must include the following mandatory fields: compartmentocid, userocid, tenancyocid, keyfingerprint and privatekey.

# Refer to values exported in \"Creating OCI credentials for Claudie\" section\nkubectl create secret generic oci-secret-1 --namespace=mynamespace --from-literal=compartmentocid=$compartment_ocid --from-literal=userocid=$user_ocid --from-literal=tenancyocid=$tenancy_ocid --from-literal=keyfingerprint=$fingerprint --from-file=privatekey=./claudie-user_public.pem\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: OCIExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: oci-1\n      providerType: oci\n      secretRef:\n        name: oci-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: oci\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-frankfurt-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-FRANKFURT-1-AD-1\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard.E4.Flex\n        # further describes the selected server type.\n        machineSpec:\n          # use 2 ocpus.\n          cpuCount: 2\n          # use 8 gb of memory.\n          memory: 8\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: oci-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - oci\n          compute:\n            - oci\n
"},{"location":"input-manifest/providers/on-prem/","title":"On premise nodes","text":"

Claudie is designed to leverage your existing infrastructure and utilise it for building Kubernetes clusters together with supported cloud providers. However, Claudie operates under a few assumptions:

  1. Accessibility of Machines: Claudie requires access to the machines specified by the provided endpoint. It needs the ability to connect to these machines in order to perform necessary operations.

  2. Connectivity between Static Nodes: Static nodes within the infrastructure should be able to communicate with each other using the specified endpoints. This connectivity is important for proper functioning of the Kubernetes cluster.

  3. SSH Access with Root Privileges: Claudie relies on SSH access to the nodes using the SSH key provided in the input manifest. The SSH key should grant root privileges to enable Claudie to perform required operations on the nodes.

  4. Meeting the Kubernetes nodes requirements: Learn more.

By ensuring that these assumptions are met, Claudie can effectively utilise your infrastructure and build Kubernetes clusters while collaborating with the supported cloud providers.
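Before applying an input manifest with static nodes, it can be worth verifying these assumptions by hand. A minimal check, assuming the private key referenced in the secret is stored in private.pem and a node endpoint of 192.168.10.1:

# Confirm root SSH access to a static node with the key that will be referenced in the secret.\nssh -i private.pem root@192.168.10.1 'whoami && uname -r'\n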

"},{"location":"input-manifest/providers/on-prem/#private-key-example-secret","title":"Private key example secret","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: static-node-key\ndata:\n  privatekey: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBbzJEOGNYb0Uxb3VDblBYcXFpVW5qbHh0c1A4YXlKQW4zeFhYdmxLOTMwcDZBUzZMCncvVW03THFnbUhpOW9GL3pWVnB0TDhZNmE2NWUvWjk0dE9SQ0lHY0VJendpQXF3M3M4NGVNcnoyQXlrSWhsWE0KVEpSS3J3SHJrbDRtVlBvdE9paDFtZkVTenFMZ25TMWdmQWZxSUVNVFdOZlRkQmhtUXpBNVJFT2NpQ1Q1dFRnMApraDI1SmVHeU9qR3pzaFhkKzdaVi9PUXVQUk5Mb2lrQzFDVFdtM0FSVFFDeUpZaXR5bURVeEgwa09wa2VyODVoCmpFRTRkUnUxVzQ2WDZkdEUrSlBZNkNKRlR2c1VUcGlqT3QzQmNTSTYyY2ZyYmFRYXhvQXk2bEJLVlB1cm1xYm0Kb09JNHVRUWJWRGt5Q3V4MzcwSTFjTUVzWkszYVNBa0ZZSUlMRndJREFRQUJBb0lCQUVLUzFhc2p6bTdpSUZIMwpQeTBmd0xPWTVEVzRiZUNHSlVrWkxIVm9YK2hwLzdjVmtXeERMQjVRbWZvblVSWFZvMkVIWFBDWHROeUdERDBLCnkzUGlnek9TNXJPNDRCNzRzQ1g3ZW9Dd1VRck9vS09rdUlBSCtUckE3STRUQVVtbE8rS3o4OS9MeFI4Z2JhaCsKZ2c5b1pqWEpQMHYzZmptVGE3QTdLVXF3eGtzUEpORFhyN0J2MkhGc3ZueHROTkhWV3JBcjA3NUpSU2U3akJIRgpyQnpIRGFOUUhjYWwybTJWbDAvbGM4SVgyOEIwSXBYOEM5ajNqVGUwRS9XOVYyaURvM0ZvbmZzVU1BSm9KeW1nCkRzRXFxb25Cc0ZFeE9iY1BUNlh4SHRLVHVXMkRDRHF3c20xTVM2L0xUZzRtMFZ0alBRbGE5cnd0Z1lQcEtVSWYKbkRya3ZBRUNnWUVBOC9EUTRtNWF4UE0xL2d4UmVFNVZJSEMzRjVNK0s0S0dsdUNTVUNHcmtlNnpyVmhOZXllMwplbWpUV21lUmQ4L0szYzVxeGhJeGkvWE8vc0ZvREthSjdHaVl4L2RiOEl6dlJZYkw2ZHJiOVh0aHVObmhJWTlkCmJPd0VhbWxXZGxZbzlhUTBoYTFpSHpoUHVhMjN0TUNiM2xpZzE3MVZuUURhTXlhS3plaVMxUmNDZ1lFQXEzU2YKVEozcDRucmh4VjJiMEJKUStEdjkrRHNiZFBCY0pPbHpYVVVodHB6d3JyT3VKdzRUUXFXeG1pZTlhK1lpSzd0cAplY2YyOEltdHY0dy9aazg1TUdmQm9hTkpwdUNmNWxoMElseDB3ZXROQXlmb3dTNHZ3dUlZNG1zVFlvcE1WV20yClV5QzlqQ1M4Q0Y2Y1FrUVdjaVVlc2dVWHFocE50bXNLTG9LWU9nRUNnWUVBNWVwZVpsd09qenlQOGY4WU5tVFcKRlBwSGh4L1BZK0RsQzRWa1FjUktXZ1A2TTNKYnJLelZZTGsySXlva1VDRjRHakI0TUhGclkzZnRmZTA2TFZvMQorcXptK3Vub0xNUVlySllNMFQvbk91cnNRdmFRR3pwdG1zQ2t0TXJOcEVFMjM3YkJqaERKdjVVcWgxMzFISmJCCkVnTEVyaklVWkNNdWhURlplQk14ZVVjQ2dZRUFqZkZPc0M5TG9hUDVwVnVKMHdoVzRDdEtabWNJcEJjWk1iWFQKUERRdlpPOG9rbmxPaENheTYwb2hibTNYODZ2aVBqSTVjQWlMOXpjRUVNQWEvS2c1d0VrbGxKdUtMZzFvVTFxSApTcXNnUGlwKzUwM3k4M3M1THkzZlRCTTVTU3NWWnVETmdLUnFSOHRobjh3enNPaU5iSkl1aDFLUDlOTXg0d05hCnVvYURZQUVDZ1lFQW5xNzJJUEU1MlFwekpjSDU5RmRpbS8zOU1KYU1HZlhZZkJBNXJoenZnMmc5TW9URXpWKysKSVZ2SDFTSjdNTTB1SVBCa1FpbC91V083bU9DR2hHVHV3TGt3Uy9JU1FjTmRhSHlTRDNiZzdndzc5aG1UTVhiMgozVFpCTjdtb3FWM0VhRUhWVU1nT1N3dHUySTlQN1RJNGJJV0RQUWxuWE53Q0tCWWNKanRraWNRPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=\ntype: Opaque\n
"},{"location":"input-manifest/providers/on-prem/#input-manifest-example","title":"Input manifest example","text":""},{"location":"input-manifest/providers/on-prem/#private-cluster-example","title":"Private cluster example","text":"
kubectl create secret generic static-node-key --namespace=mynamespace --from-file=privatekey=private.pem\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: PrivateClusterExample\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  nodePools:\n    static:\n        - name: control\n          nodes:\n            - endpoint: \"192.168.10.1\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n\n        - name: compute\n          nodes:\n            - endpoint: \"192.168.10.2\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n            - endpoint: \"192.168.10.3\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n\n  kubernetes:\n    clusters:\n      - name: private-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control\n          compute:\n            - compute\n
"},{"location":"input-manifest/providers/on-prem/#hybrid-cloud-example","title":"Hybrid cloud example","text":""},{"location":"input-manifest/providers/on-prem/#create-secret-for-private-key","title":"Create secret for private key","text":"
kubectl create secret generic static-node-key --namespace=mynamespace --from-file=privatekey=private.pem\n

To see how to configure Hetzner or any other credentials for hybrid cloud, refer to their docs.

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: HybridCloudExample\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: hetzner-1\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-htz\n        providerSpec:\n          name: hetzner-1\n          region: fsn1\n          zone: fsn1-dc14\n        count: 3\n        serverType: cpx11\n        image: ubuntu-22.04\n\n    static:\n        - name: datacenter-1\n          nodes:\n            - endpoint: \"192.168.10.1\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n            - endpoint: \"192.168.10.2\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n            - endpoint: \"192.168.10.3\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n\n  kubernetes:\n    clusters:\n      - name: hybrid-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz\n          compute:\n            - datacenter-1\n
"},{"location":"latency-limitations/latency-limitations/","title":"Latency-imposed limitations","text":"

The general rule of thumb is that every 100 km of distance adds roughly ~1 ms of latency. Therefore, in the following subsections we describe the problems that are likely to arise when working with high latency in etcd and Longhorn.

"},{"location":"latency-limitations/latency-limitations/#etcd-limitations","title":"etcd limitations","text":"

A distance between etcd nodes in the multi-cloud environment of more than 600 km can be detrimental to cluster health. In a scenario like this, an average deployment time can double compared to a scenario with etcd nodes in different availability zones within the same cloud provider. Besides this, the total number of the etcd Slow Applies increases rapidly, and a Round-trip time varies from ~0.05s to ~0.2s, whereas in a single-cloud scenario with etcd nodes in a different AZs the range is from ~0.003s to ~0.025s.

In multi-cloud clusters, a request to a KubeAPI lasts generally from ~0.025s to ~0.25s. On the other hand, in a one-cloud scenario, they last from ~0.005s to ~0.025s.

You can read more about this topic here. For distances above 600 km, we recommend further customizing the etcd deployment.

"},{"location":"latency-limitations/latency-limitations/#longhorn-limitations","title":"Longhorn limitations","text":"

There are three main problems when dealing with high latency in Longhorn:

Generally, a single volume with 3 replicas can tolerate a maximum network latency of around 100 ms. In a multiple-volume scenario, the maximum network latency can be no more than 20 ms. Network latency has a significant impact on IO performance and total network bandwidth. See more about CPU and network requirements here.

"},{"location":"latency-limitations/latency-limitations/#how-to-avoid-high-latency-problems","title":"How to avoid high latency problems","text":"

When dealing with RWO volumes you can avoid mount failures caused by high latency by setting Longhorn to only use storage on specific nodes (follow this tutorial) and using nodeAffinity or nodeSelector to schedule your workload pods only to the nodes that have replicas of the volumes or are close to them.
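The pod spec below is a minimal sketch of the scheduling part; the node label key/value and the PVC name are assumptions for illustration, so use whatever labels your nodes actually carry (kubectl get nodes --show-labels).

apiVersion: v1\nkind: Pod\nmetadata:\n  name: workload-near-replica\nspec:\n  # Assumed label; schedule the pod only onto nodes of the nodepool that holds the volume replicas.\n  nodeSelector:\n    claudie.io/nodepool: datastore\n  containers:\n    - name: app\n      image: nginx:stable\n      volumeMounts:\n        - name: data\n          mountPath: /data\n  volumes:\n    - name: data\n      persistentVolumeClaim:\n        # Illustrative PVC name; use your RWO claim here.\n        claimName: my-rwo-pvc\n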

"},{"location":"latency-limitations/latency-limitations/#how-to-mitigate-high-latency-problems-with-rwx-volumes","title":"How to mitigate high latency problems with RWX volumes","text":"

To mitigate high latency issues with RWX volumes you can maximize these Longhorn settings:

With these settings maximized, you should be able to successfully mount an RWX volume even when the latency between the node with the share-manager pod and the node with the workload pod + replica is ~200 ms. However, it will take from 7 to 10 minutes. There are also resource requirements on the nodes and limitations on the maximum size of the RWX volumes. For example, you will not succeed in mounting even a 1Gi RWX volume at ~200 ms latency between these nodes if the nodes have only 2 shared vCPUs and 4 GB RAM, even when there are no other workloads in the cluster. Your nodes need at least 2 vCPUs and 8 GB RAM. Generally, the more CPU you assign to the Longhorn manager, the more you can mitigate the issue with high latency and RWX volumes.

Keep in mind that using machines with higher resources and maximizing these Longhorn settings doesn't necessarily guarantee a successful mount of the RWX volumes. It also depends on the size of these volumes. For example, even after maximizing these settings and using nodes with 2 vCPUs and 8 GB RAM with ~200 ms latency between them, you will fail to mount a 10Gi volume to the workload pod if you try to mount multiple volumes at once. If you mount them one by one, you should be fine.

To conclude, maximizing these Longhorn settings can help mitigate the high latency issue when mounting RWX volumes, but it is resource-hungry and the outcome also depends on the size of the RWX volumes and the total number of RWX volumes being attached at once.

"},{"location":"loadbalancing/loadbalancing-solution/","title":"Claudie load balancing solution","text":""},{"location":"loadbalancing/loadbalancing-solution/#loadbalancer","title":"Loadbalancer","text":"

To create a highly available kubernetes cluster, Claudie creates load balancers for the kubeAPI server. These load balancers use Nginx to load balance the traffic among the cluster nodes. Claudie also supports the definition of custom load balancers for the applications running inside the cluster.

"},{"location":"loadbalancing/loadbalancing-solution/#concept","title":"Concept","text":""},{"location":"loadbalancing/loadbalancing-solution/#example-diagram","title":"Example diagram","text":""},{"location":"loadbalancing/loadbalancing-solution/#definitions","title":"Definitions","text":""},{"location":"loadbalancing/loadbalancing-solution/#role","title":"Role","text":"

Claudie uses the concept of roles when configuring the load balancers from the input manifest. Each role represents a load balancer configuration for a particular use. Roles are then assigned to the load balancer cluster, and a single load balancer cluster can have multiple roles assigned.

"},{"location":"loadbalancing/loadbalancing-solution/#targeted-kubernetes-cluster","title":"Targeted kubernetes cluster","text":"

A load balancer is assigned to a kubernetes cluster via the targetedK8s field, which uses the name of the kubernetes cluster as its value. Currently, a single load balancer can only be assigned to a single kubernetes cluster.

Among multiple load balancers targeting the same kubernetes cluster only one of them can have the API server role (i.e. the role with target port 6443) attached to it.

"},{"location":"loadbalancing/loadbalancing-solution/#dns","title":"DNS","text":"

Claudie creates and manages the DNS for the load balancer. When the user adds a load balancer to their infrastructure via Claudie, Claudie creates a DNS A record pointing to the public IPs of the load balancer machines behind it. When the load balancer configuration changes in any way, that is, a node is added or removed, or the hostname or target changes, Claudie reconfigures the DNS record on the fly. This removes the need for the user to manage DNS.

"},{"location":"loadbalancing/loadbalancing-solution/#nodepools","title":"Nodepools","text":"

Load balancers are built from user-defined nodepools in the pools field, similar to how kubernetes clusters are defined. These nodepools allow the user to change/scale the load balancers according to their needs without any fuss. See the nodepool definition for more information.

"},{"location":"loadbalancing/loadbalancing-solution/#an-example-of-load-balancer-definition","title":"An example of load balancer definition","text":"

See an example load balancer definition in our reference example input manifest.
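For quick reference, below is a trimmed-down sketch of such a definition with an API server role; the names are illustrative and mirror the reference example used elsewhere in these docs.

loadBalancers:\n  roles:\n    - name: apiserver\n      protocol: tcp\n      port: 6443\n      targetPort: 6443\n      targetPools:\n        - control\n\n  clusters:\n    - name: apiserver-lb\n      roles:\n        - apiserver\n      dns:\n        dnsZone: dns-zone\n        provider: dns-provider\n      targetedK8s: my-awesome-claudie-cluster\n      pools:\n        - loadbalancer\n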

"},{"location":"loadbalancing/loadbalancing-solution/#notes","title":"Notes","text":""},{"location":"loadbalancing/loadbalancing-solution/#cluster-ingress-controller","title":"Cluster ingress controller","text":"

You still need to deploy your own ingress controller to use the load balancer. It needs to be set up to use a NodePort service, with the ports configured under roles in the load balancer definition.
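As a sketch, a role forwarding HTTPS traffic to an ingress controller's NodePort could look like the following; the port numbers and target pool are assumptions, so match them to your ingress controller's NodePort Service and to the nodepools it runs on. The role is then attached to a load balancer cluster in the same way as in the example above.

roles:\n  - name: https-ingress\n    protocol: tcp\n    # Port exposed on the load balancer.\n    port: 443\n    # Assumed NodePort of the ingress controller's Service.\n    targetPort: 30443\n    targetPools:\n      - compute\n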

"},{"location":"monitoring/grafana/","title":"Prometheus Monitoring","text":"

In our environment, we rely on Claudie to export Prometheus metrics, providing valuable insights into the state of our infrastructure and applications. To utilize Claudie's monitoring capabilities, it's essential to have Prometheus installed. With this setup, you can gain visibility into various metrics such as:

You can find the Claudie dashboard here.

"},{"location":"monitoring/grafana/#configure-scraping-metrics","title":"Configure scraping metrics","text":"

We recommend using the Prometheus Operator for managing Prometheus deployments efficiently.

  1. Create RBAC that allows Prometheus to scrape metrics from Claudie\u2019s pods:

    apiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n  name: claudie-pod-reader\n  namespace: claudie\nrules:\n- apiGroups: [\"\"]\n  resources: [\"pods\"]\n  verbs: [\"get\", \"list\"]\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: claudie-pod-reader-binding\n  namespace: claudie\nsubjects:\n# this SA is created by https://github.com/prometheus-operator/kube-prometheus\n# in your case you might need to bind this Role to a different SA\n- kind: ServiceAccount\n  name: prometheus-k8s\n  namespace: monitoring\nroleRef:\n  kind: Role\n  name: claudie-pod-reader\n  apiGroup: rbac.authorization.k8s.io\n

  2. Create a Prometheus PodMonitor to scrape metrics from Claudie\u2019s pods:

    apiVersion: monitoring.coreos.com/v1\nkind: PodMonitor\nmetadata:\n  name: claudie-metrics\n  namespace: monitoring\n  labels:\n    name: claudie-metrics\nspec:\n  namespaceSelector:\n    matchNames:\n      - claudie\n  selector:\n    matchLabels:\n      app.kubernetes.io/part-of: claudie\n  podMetricsEndpoints:\n  - port: metrics\n

  3. Import our dashboard into your Grafana instance.

That's it! Now you have set up RBAC for Prometheus, configured a PodMonitor to scrape metrics from Claudie's pods, and imported a Grafana dashboard to visualize the metrics.

"},{"location":"node-local-dns/node-local-dns/","title":"Deploying Node-Local-DNS","text":"

Claudie doesn't deploy node-local-dns by default. In this section we'll walk through an example of how to deploy node-local-dns for a Claudie-created cluster.

"},{"location":"node-local-dns/node-local-dns/#1-download-nodelocaldnsyaml","title":"1. Download nodelocaldns.yaml","text":"

Based on the kubernetes version you are using in your cluster, download nodelocaldns.yaml from the kubernetes repository.

Make sure to download the YAML for the right kubernetes version, e.g. for kubernetes version 1.27 you would use:

wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.27/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml\n
"},{"location":"node-local-dns/node-local-dns/#2-modify-downloaded-nodelocaldnsyaml","title":"2. Modify downloaded nodelocaldns.yaml","text":"

We'll need to replace the references to __PILLAR__DNS__DOMAIN__ and some of the references to __PILLAR__LOCAL__DNS__.

To replace __PILLAR__DNS__DOMAIN__ execute:

sed -i \"s/__PILLAR__DNS__DOMAIN__/cluster.local/g\" nodelocaldns.yaml\n

To replace __PILLAR__LOCAL__DNS__ find the references and change it to 169.254.20.10 as shown below:

    ...\n      containers:\n      - name: node-cache\n        image: registry.k8s.io/dns/k8s-dns-node-cache:1.22.20\n        resources:\n          requests:\n            cpu: 25m\n            memory: 5Mi\n-       args: [ \"-localip\", \"__PILLAR__LOCAL__DNS__,__PILLAR__DNS__SERVER__\", \"-conf\", \"/etc/Corefile\", \"-upstreamsvc\", \"kube-dns-upstream\" ]\n+       args: [ \"-localip\", \"169.254.20.10\", \"-conf\", \"/etc/Corefile\", \"-upstreamsvc\", \"kube-dns-upstream\" ]\n        securityContext:\n          capabilities:\n            add:\n            - NET_ADMIN\n        ports:\n        - containerPort: 53\n          name: dns\n          protocol: UDP\n        - containerPort: 53\n          name: dns-tcp\n          protocol: TCP\n        - containerPort: 9253\n          name: metrics\n          protocol: TCP\n        livenessProbe:\n          httpGet:\n-           host: __PILLAR__LOCAL__DNS__\n+           host: 169.254.20.10\n            path: /health\n            port: 8080\n          initialDelaySeconds: 60\n          timeoutSeconds: 5\n    ...\n
"},{"location":"node-local-dns/node-local-dns/#3-apply-the-modified-manifest","title":"3. Apply the modified manifest.","text":"

kubectl apply -f ./nodelocaldns.yaml\n

"},{"location":"roadmap/roadmap/","title":"Roadmap for Claudie","text":"

v0.8.1: - [x] Support for more cloud providers: OCI, AWS, Azure, Cloudflare, GenesisCloud - [x] Hybrid-cloud support (on-premises) - [x] arm64 support for the nodepools - [x] App-level metrics - [x] Autoscaler

"},{"location":"sitemap/sitemap/","title":"Sitemap","text":"

This section contains brief descriptions of the main parts of Claudie's documentation.

"},{"location":"sitemap/sitemap/#getting-started","title":"Getting Started","text":"

The \"Getting Started\" section is where you'll learn how to begin using Claudie. We'll guide you through the initial steps and show you how to set things up, so you can start using the software right away.

You'll also find helpful information on how to customize Claudie to suit your needs, including specifications for the settings you can adjust, and examples of how to use configuration files to get started.

By following the steps in this section, you'll have everything you need to start using Claudie with confidence!

"},{"location":"sitemap/sitemap/#input-manifest","title":"Input manifest","text":"

This section contains examples of YAML files of the InputManifest CRD that tell Claudie what an infrastructure should look like. Besides these files, you can also find an API reference for the InputManifest CRD there.

"},{"location":"sitemap/sitemap/#how-claudie-works","title":"How Claudie works","text":"

In this section, we'll show you how Claudie works and guide you through our workflow. We'll explain how we store and manage data, balance the workload across different parts of the system, and automatically adjust resources to handle changes in demand.

By following our explanations, you'll gain a better understanding of how Claudie operates and be better equipped to use it effectively.

"},{"location":"sitemap/sitemap/#claudie-use-cases","title":"Claudie Use Cases","text":"

The \"Claudie Use Cases\" section includes examples of different ways you can use Claudie to solve various problems. We've included these examples to help you understand the full range of capabilities Claudie offers and to show you how it can be applied in different scenarios.

By exploring these use cases, you'll get a better sense of how Claudie can be a valuable tool for your work.

"},{"location":"sitemap/sitemap/#faq","title":"FAQ","text":"

You may find helpful answers in our FAQ section.

"},{"location":"sitemap/sitemap/#roadmap-for-claudie","title":"Roadmap for Claudie","text":"

In this section, you'll find a roadmap for Claudie that outlines the features we've already added and those we plan to add in the future.

By checking out the roadmap, you'll be able to stay informed about the latest updates and see how Claudie is evolving to meet the needs of its users.

"},{"location":"sitemap/sitemap/#contributing","title":"Contributing","text":"

In this section, we've gathered all the information you'll need if you want to help contribute to the Claudie project or release a new version of the software.

By checking out this section, you'll get a better sense of what's involved in contributing and how you can be part of making Claudie even better.

"},{"location":"sitemap/sitemap/#changelog","title":"Changelog","text":"

The \"changelog\" section is where you can find information about all the changes, updates, and issues related to each version of Claudie.

"},{"location":"sitemap/sitemap/#latency-limitations","title":"Latency limitations","text":"

In this section, we describe latency limitations which you should take into account when designing your infrastructure.

"},{"location":"sitemap/sitemap/#troubleshooting","title":"Troubleshooting","text":"

In case you run into issues, we recommend following some of the troubleshooting guides in this section.

"},{"location":"sitemap/sitemap/#creating-claudie-backup","title":"Creating Claudie Backup","text":"

This section describes the steps to back up Claudie and its dependencies.

"},{"location":"sitemap/sitemap/#claudie-hardening","title":"Claudie Hardening","text":"

This section describes how to further configure the default Claudie deployment. It is highly recommended that you read this section.

"},{"location":"sitemap/sitemap/#prometheus-monitoring","title":"Prometheus Monitoring","text":"

In this section we walk you through the setup of Claudie's Prometheus metrics to gain visibility into various metrics that Claudie exposes.

"},{"location":"sitemap/sitemap/#updating-claudie","title":"Updating Claudie","text":"

This section describes how to execute updates, such as OS or kubernetes version, in Claudie.

"},{"location":"sitemap/sitemap/#deploying-node-local-dns","title":"Deploying Node-Local-DNS","text":"

Claudie doesn't deploy Node-Local-DNS by default, so you have to install it yourself. This section provides a step-by-step guide on how to do it.

"},{"location":"sitemap/sitemap/#command-cheat-sheet","title":"Command Cheat Sheet","text":"

The \"Command Cheat Sheet\" section contains a useful kubectl commands to interact with Claudie.

"},{"location":"sitemap/sitemap/#version-matrix","title":"Version matrix","text":"

In this section, you can find supported Kubernetes and OS versions for the latest Claudie versions.

"},{"location":"storage/storage-solution/","title":"Claudie storage solution","text":""},{"location":"storage/storage-solution/#concept","title":"Concept","text":"

Running stateful workloads is a complex task, even more so when considering the multi-cloud environment. Claudie therefore needs to be able to accommodate stateful workloads, regardless of the underlying infrastructure providers.

Claudie orchestrates storage on the kubernetes cluster nodes by creating one \"storage cluster\" across multiple providers. This \"storage cluster\" has a series of zones, one for each cloud provider instance. Each zone then stores its own persistent volume data.

This concept is translated into the Longhorn implementation, where each zone is represented by a StorageClass backed by the nodes defined under the same cloud provider instance. Furthermore, each node uses a disk separate from the one where the OS is installed, to ensure clear data separation. The size of the storage disk can be configured in the storageDiskSize field of the nodepool specification.
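For example, a dynamic nodepool that dedicates an extra 200 GB disk to Longhorn storage would set the field like this (provider, region and machine values are illustrative):

nodePools:\n  dynamic:\n    - name: storage-pool\n      providerSpec:\n        name: hetzner-1\n        region: hel1\n        zone: hel1-dc2\n      count: 3\n      serverType: cpx21\n      image: ubuntu-22.04\n      # Size (in GB) of the separate disk used only for Longhorn volume data.\n      storageDiskSize: 200\n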

"},{"location":"storage/storage-solution/#longhorn","title":"Longhorn","text":"

A Claudie-created cluster comes with the Longhorn deployment preinstalled and ready to be used. By default, only worker nodes are used to store data.

Longhorn installed in the cluster is set up to provide one default StorageClass called longhorn, which, when used, creates a volume that is then replicated across random nodes in the cluster.

Besides the default storage class, Claudie can also create custom storage classes, which force persistent volumes to be created on specific nodes based on the provider instance they have. In other words, you can use a specific provider instance to provision nodes for your storage needs, while using another provider instance for computing tasks.

"},{"location":"storage/storage-solution/#example","title":"Example","text":"

To follow along, have a look at the example of InputManifest below.

storage-classes-example.yaml
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: ExampleManifestForStorageClasses\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n\n  providers:\n    - name: storage-provider\n      providerType: hetzner\n      secretRef:\n        name: storage-provider-secrets\n        namespace: claudie-secrets\n\n    - name: compute-provider\n      providerType: hetzner\n      secretRef:\n        name: storage-provider-secrets\n        namespace: claudie-secrets\n\n    - name: dns-provider\n      providerType: cloudflare\n      secretRef:\n        name: dns-provider-secret\n        namespace: claudie-secrets\n\n  nodePools:\n    dynamic:\n        - name: control\n          providerSpec:\n            name: compute-provider\n            region: hel1\n            zone: hel1-dc2\n          count: 3\n          serverType: cpx21\n          image: ubuntu-22.04\n\n        - name: datastore\n          providerSpec:\n            name: storage-provider\n            region: hel1\n            zone: hel1-dc2\n          count: 5\n          serverType: cpx21\n          image: ubuntu-22.04\n          storageDiskSize: 800\n          taints:\n            - key: node-type\n              value: datastore\n              effect: NoSchedule\n\n        - name: compute\n          providerSpec:\n            name: compute-provider\n            region: hel1\n            zone: hel1-dc2\n          count: 10\n          serverType: cpx41\n          image: ubuntu-22.04\n          taints:\n            - key: node-type\n              value: compute\n              effect: NoSchedule\n\n        - name: loadbalancer\n          providerSpec:\n            name: compute-provider\n            region: hel1\n            zone: hel1-dc2\n          count: 1\n          serverType: cpx21\n          image: ubuntu-22.04\n\n  kubernetes:\n    clusters:\n      - name: my-awesome-claudie-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control\n          compute:\n            - datastore\n            - compute\n\n  loadBalancers:\n    roles:\n      - name: apiserver\n        protocol: tcp\n        port: 6443\n        targetPort: 6443\n        targetPools: \n          - control\n\n    clusters:\n      - name: apiserver-lb\n        roles:\n          - apiserver\n        dns:\n          dnsZone: dns-zone\n          provider: dns-provider\n        targetedK8s: my-awesome-claudie-cluster\n        pools:\n          - loadbalancer\n

When Claudie applies this input manifest, the following storage classes are installed:

Now all you have to do is specify the correct storage class when defining your PVCs.
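A minimal PVC sketch; the storageClassName below is a placeholder, so replace it with one of the storage classes listed by kubectl get storageclass on your cluster:

apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: datastore-volume\nspec:\n  accessModes:\n    - ReadWriteOnce\n  # Placeholder; use the zone-specific storage class created for your provider instance.\n  storageClassName: <zone-specific-storage-class>\n  resources:\n    requests:\n      storage: 10Gi\n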

In case you are interested in using a different cloud provider for the datastore-nodepool or compute-nodepool of this InputManifest example, see the list of supported provider instances.

For more information on how Longhorn works you can check out Longhorn's official documentation.

"},{"location":"troubleshooting/troubleshooting/","title":"Troubleshooting guide","text":"

In progress

As we continue expanding our troubleshooting guide, we understand that issues may arise during your usage of Claudie. Although the guide is not yet complete, we encourage you to create a GitHub issue if you encounter any problems. Your feedback and reports are highly valuable to us in improving our platform and addressing any issues you may face.

"},{"location":"troubleshooting/troubleshooting/#claudie-cluster-not-starting","title":"Claudie cluster not starting","text":"

Claudie relies on all services to be interconnected. If any of these services fail to create due to node unavailability or resource constraints, Claudie will be unable to provision your cluster.

  1. Check if all Claudie services are running:

    kubectl get pods -n claudie\n
    NAME                                   READY   STATUS      RESTARTS        AGE\nansibler-5c6c776b75-82c2q              1/1     Running     0               8m10s\nbuilder-59f9d44596-n2qzm               1/1     Running     0               8m10s\nmanager-5d76c89b4d-tb6h4               1/1     Running     1 (6m37s ago)   8m10s\ncreate-table-job-jvs9n                 0/1     Completed   1               8m10s\ndynamodb-68777f9787-8wjhs              1/1     Running     0               8m10s\nclaudie-operator-5755b7bc69-5l84h      1/1     Running     0               8m10s\nkube-eleven-64468cd5bd-qp4d4           1/1     Running     0               8m10s\nkuber-698c4564c-dhsvg                  1/1     Running     0               8m10s\nmake-bucket-job-fb5sp                  0/1     Completed   0               8m10s\nminio-0                                1/1     Running     0               8m10s\nminio-1                                1/1     Running     0               8m10s\nminio-2                                1/1     Running     0               8m10s\nminio-3                                1/1     Running     0               8m10s\nmongodb-67bf769957-9ct5z               1/1     Running     0               8m10s\nterraformer-fd664b7ff-dd2h7            1/1     Running     0               8m9s\n
  2. Check the InputManifest resource status to find out what is the actual cluster state.

    kubectl get inputmanifests.claudie.io resourceName -o jsonpath={.status}\n
      {\n    \"clusters\": {\n      \"one-of-my-cluster\": {\n        \"message\": \" installing VPN\",\n        \"phase\": \"ANSIBLER\",\n        \"state\": \"IN_PROGRESS\"\n      }\n    },\n    \"state\": \"IN_PROGRESS\"\n  }    \n
  3. Examine the claudie-operator service logs. They provide insights into any issues during cluster bootstrap and identify the problematic service. If cluster creation fails despite all Claudie pods being scheduled, it may suggest a lack of permissions for the Claudie providers' credentials. In this case, the operator logs will point to the Terraformer service, and the Terraformer service logs will provide detailed error output.

    kubectl -n claudie logs -l app.kubernetes.io/name=claudie-operator\n
    6:04AM INF Using log with the level \"info\" module=claudie-operator\n6:04AM INF Claudie-operator is ready to process input manifests module=claudie-operator\n6:04AM INF Claudie-operator is ready to watch input manifest statuses module=claudie-operator\n

    Debug log level

    Using the debug log level will help with identifying the issue more closely. This guide shows how you can set it up during step 5.

    Claudie benefit!

    The great thing about Claudie is that it utilizes open source tools to set up and configure infrastructure based on your preferences. As a result, the majority of errors can be easily found and resolved through online resources.

"},{"location":"troubleshooting/troubleshooting/#terraformer-service-not-starting","title":"Terraformer service not starting","text":"

Terraformer relies on the MinIO and DynamoDB datastores being configured via the make-bucket-job and create-table-job jobs, respectively. If these jobs fail to configure the datastores, or the datastores themselves fail to start, Terraformer will also fail to start.

"},{"location":"troubleshooting/troubleshooting/#datastore-initialization-jobs","title":"Datastore initialization jobs","text":"

The create-table-job is responsible for creating necessary tables in the DynamoDB datastore, while the make-bucket-job creates a bucket in the MinIO datastore. If these jobs encounter scheduling problems or experience slow autoscaling, they may fail to complete within the designated time frame. To handle this, we have set the backoffLimit of both jobs to fail after approximately 42 minutes. If you encounter any issues with these jobs or believe the backoffLimit should be adjusted, please create an issue.
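To check whether these initialization jobs completed, you can inspect them and their logs directly:

kubectl -n claudie get jobs\nkubectl -n claudie logs job/make-bucket-job\nkubectl -n claudie logs job/create-table-job\n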

"},{"location":"troubleshooting/troubleshooting/#networking-issues","title":"Networking issues","text":""},{"location":"troubleshooting/troubleshooting/#wireguard-mtu","title":"Wireguard MTU","text":"

We use Wireguard for secure node-to-node connectivity. However, this requires setting the MTU value to match that of Wireguard. While the host system interface MTU value is adjusted accordingly, networking issues may arise for services hosted on Claudie-managed Kubernetes clusters. For example, we observed that the GitHub Actions runner Docker container had to be configured with an MTU value of 1380 to avoid network errors during the docker build process.
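If you hit similar errors with Docker-based workloads on Claudie-managed nodes, lowering Docker's MTU is one possible workaround; a minimal sketch for /etc/docker/daemon.json on the affected node, with the Docker daemon restarted afterwards:

{\n  \"mtu\": 1380\n}\n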

"},{"location":"troubleshooting/troubleshooting/#hetzner-and-oci-node-pools","title":"Hetzner and OCI node pools","text":"

We're experiencing networking issues caused by the blacklisting of public IPs owned by Hetzner and OCI. This problem affects the Ansibler and Kube-eleven services, which fail when attempting to add GPG keys to access the Google repository for package downloads. Unfortunately, there's no straightforward solution to bypass this issue. The recommended approach is to let the services fail, remove the failed cluster, and attempt provisioning a new cluster with newly allocated IP addresses that are not blocked by Google.

"},{"location":"troubleshooting/troubleshooting/#resolving-issues-with-terraform-state-lock","title":"Resolving issues with Terraform state lock","text":"

During normal operation, the content of this section should not be required. If you ended up here, it means there was likely a bug somewhere in Claudie. Please open a bug report in that case and use the content of this section to troubleshoot your way out of it.

First of all, you have to get into the directory in the terraformer pod where all the Terraform files are located. To do that, follow these steps:
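A minimal sketch of getting a shell inside the Terraformer pod (the exact directory holding the Terraform files depends on the cluster being built, so browse the filesystem once inside):

# Find the Terraformer pod and open a shell in it.\nkubectl -n claudie get pods | grep terraformer\nkubectl -n claudie exec -it <terraformer-pod-name> -- /bin/sh\n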

"},{"location":"troubleshooting/troubleshooting/#locked-state","title":"Locked state","text":"

Once you are in the directory with all TF files, run the following command:

terraform force-unlock <lock-id>\n

The lock-id is generally shown in the error message.

"},{"location":"update/update/","title":"Updating Claudie","text":"

In this section we'll describe how you can update resources that Claudie creates based on changes in the manifest.

"},{"location":"update/update/#updating-kubernetes-version","title":"Updating Kubernetes Version","text":"

Updating the Kubernetes version is as easy as incrementing the version in the Input Manifest of the already built cluster.

# old version\n...\nkubernetes:\n  clusters:\n    - name: claudie-cluster\n      version: v1.27.0\n      network: 192.168.2.0/24\n      pools:\n        ...\n
# new version\n...\nkubernetes:\n  clusters:\n    - name: claudie-cluster\n      version: v1.28.0\n      network: 192.168.2.0/24\n      pools:\n        ...\n

When re-applied this will trigger a new workflow for the cluster that will result in the updated kubernetes version.

Downgrading a version is not supported once you've upgraded a cluster to a newer version

"},{"location":"update/update/#updating-dynamic-nodepool","title":"Updating Dynamic Nodepool","text":"

Nodepools specified in the InputManifest are immutable. Once created, they cannot be updated or changed. This decision was made to force the user to perform a rolling update by replacing the old nodepool in the manifest with a new one that carries the desired state. A couple of examples are listed below.

"},{"location":"update/update/#updating-the-os-image","title":"Updating the OS image","text":"
# old version\n...\n- name: hetzner\n  providerSpec:\n    name: hetzner-1\n    region: fsn1\n    zone: fsn1-dc14\n  count: 1\n  serverType: cpx11\n  image: ubuntu-22.04\n...\n
# new version\n...\n- name: hetzner-1 # NOTE the different name.\n  providerSpec:\n    name: hetzner-1\n    region: fsn1\n    zone: fsn1-dc14\n  count: 1\n  serverType: cpx11\n  image: ubuntu-24.04\n...\n

When re-applied, this will trigger a new workflow for the cluster that will result first in the addition of the new nodepool and then the deletion of the old nodepool.

"},{"location":"update/update/#changing-the-server-type-of-a-dynamic-nodepool","title":"Changing the Server Type of a Dynamic Nodepool","text":"

The same concept applies to changing the server type of a dynamic nodepool.

# old version\n...\n- name: hetzner\n  providerSpec:\n    name: hetzner-1\n    region: fsn1\n    zone: fsn1-dc14\n  count: 1\n  serverType: cpx11\n  image: ubuntu-22.04\n...\n
# new version\n...\n- name: hetzner-1 # NOTE the different name.\n  providerSpec:\n    name: hetzner-1\n    region: fsn1\n    zone: fsn1-dc14\n  count: 1\n  serverType: cpx21\n  image: ubuntu-22.04\n...\n

When re-applied, this will trigger a new workflow for the cluster that will result in the updated server type of the nodepool.

"},{"location":"use-cases/use-cases/","title":"Use-cases and customers","text":"

We foresee the following use-cases of the Claudie platform:

"},{"location":"use-cases/use-cases/#1-cloud-bursting","title":"1. Cloud-bursting","text":"

A company uses advanced cloud features in one of the hyper-scale providers (e.g. serverless Lambda and API Gateway functionality in AWS). They run a machine-learning application that they need to train on a dataset to recognize a pattern. The learning phase requires significant compute resources. Claudie allows them to extend the cluster in AWS (needed in order to access the AWS functionality) into Hetzner, saving the infrastructure costs of the machine-learning workload.

Typical client profiles:

"},{"location":"use-cases/use-cases/#2-cost-saving","title":"2. Cost-saving","text":"

A company would like to utilize their on-premise or leased resources that they already invested into, but would like to:

  1. extend the capacity
  2. access managed features of a hyper-scale provider (AWS, GCP, ...)
  3. get the workload physically closer to a client (e.g. to South America)

Typical client profile:

"},{"location":"use-cases/use-cases/#3-smart-layer-as-a-service-on-top-of-simple-cloud-providers","title":"3. Smart-layer-as-a-Service on top of simple cloud-providers","text":"

An existing customer of a medium-size provider (e.g. Exoscale) would like to utilize features that are typical for hyper-scale providers. Their current provider neither offers nor plans to offer such advanced functionality.

Typical client profile:

"},{"location":"use-cases/use-cases/#4-service-interconnect","title":"4. Service interconnect","text":"

A company would like to access on-premise-hosted services and cloud-managed services from within the same cluster. For on-premise services, the on-premise cluster nodes handle the egress traffic. The cloud-hosted cluster nodes handle the egress traffic to the cloud-managed services.

Typical client profile:

"},{"location":"version-matrix/version-matrix/","title":"Version matrix","text":"

In the following table, you can find the supported Kubernetes and OS versions for the latest Claudie versions.

Claudie Version | Kubernetes versions | OS versions
--------------- | ------------------- | -----------
v0.6.x | 1.24.x, 1.25.x, 1.26.x | Ubuntu 22.04
v0.7.0 | 1.24.x, 1.25.x, 1.26.x | Ubuntu 22.04
v0.7.1-x | 1.25.x, 1.26.x, 1.27.x | Ubuntu 22.04
v0.8.0 | 1.25.x, 1.26.x, 1.27.x | Ubuntu 22.04
v0.8.1 | 1.27.x, 1.28.x, 1.29.x | Ubuntu 22.04
"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"What is Claudie","text":"

Claudie is a platform for managing multi-cloud and hybrid-cloud Kubernetes clusters. These Kubernetes clusters can mix and match nodepools from various cloud providers, e.g. a single cluster can have a nodepool in AWS, another in GCP and another one on-premises. This is our opinionated way to build multi-cloud and hybrid-cloud Kubernetes infrastructure. On top of that Claudie supports Cluster Autoscaler on the managed clusters.

"},{"location":"#vision","title":"Vision","text":"

The purpose of Claudie is to become the final Kubernetes engine you'll ever need. It aims to build clusters that leverage features and costs across multiple cloud vendors and on-prem datacenters. A Kubernetes that you won't ever need to migrate away from.

"},{"location":"#use-cases","title":"Use cases","text":"

Claudie has been built as an answer to the following Kubernetes challenges:

You can read more here.

"},{"location":"#features","title":"Features","text":"

Claudie covers you with the following features functionalities:

See more in How Claudie works sections.

"},{"location":"#what-to-do-next","title":"What to do next","text":"

In case you are not sure where to go next, you can just simply start with our Getting Started Guide or read our documentation sitemap.

If you need help or want to have a chat with us, feel free to join our channel on kubernetes Slack workspace (get invite here).

"},{"location":"CHANGELOG/changelog-0.1.x/","title":"Claudie v0.1","text":"

The first official release of Claudie

"},{"location":"CHANGELOG/changelog-0.1.x/#deployment","title":"Deployment","text":"

To deploy the Claudie v0.1.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

    If valid, output is, depending on the archive downloaded

    claudie.tar.gz: OK\n

    or

    claudie.zip: OK\n

    or both.

  3. Lastly, unpack the archive and deploy using kubectl

    We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

    kubectl apply -k .\n
"},{"location":"CHANGELOG/changelog-0.1.x/#v013","title":"v0.1.3","text":""},{"location":"CHANGELOG/changelog-0.1.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.1.x/#bugfixes","title":"Bugfixes","text":"

No bugfixes since the last release.

"},{"location":"CHANGELOG/changelog-0.1.x/#known-issues","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.1.x/#v012","title":"v0.1.2","text":""},{"location":"CHANGELOG/changelog-0.1.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.1.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.1.x/#known-issues_1","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.1.x/#v011","title":"v0.1.1","text":""},{"location":"CHANGELOG/changelog-0.1.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.1.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.1.x/#known-issues_2","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.1.x/#v010","title":"v0.1.0","text":""},{"location":"CHANGELOG/changelog-0.1.x/#features_3","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.1.x/#bugfixes_3","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.1.x/#known-issues_3","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.2.x/","title":"Claudie v0.2","text":"

Due to a breaking change in the input manifest schema, the v0.2.x will not be backwards compatible with v0.1.x.

"},{"location":"CHANGELOG/changelog-0.2.x/#deployment","title":"Deployment","text":"

To deploy the Claudie v0.2.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

    If valid, output is, depending on the archive downloaded

    claudie.tar.gz: OK\n

    or

    claudie.zip: OK\n

    or both.

  3. Lastly, unpack the archive and deploy using kubectl

    We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

    kubectl apply -k .\n
"},{"location":"CHANGELOG/changelog-0.2.x/#v020","title":"v0.2.0","text":""},{"location":"CHANGELOG/changelog-0.2.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.2.x/#bugfixes","title":"Bugfixes","text":"

No bugfixes since the last release.

"},{"location":"CHANGELOG/changelog-0.2.x/#known-issues","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.2.x/#v021","title":"v0.2.1","text":""},{"location":"CHANGELOG/changelog-0.2.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.2.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.2.x/#known-issues_1","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.2.x/#v022","title":"v0.2.2","text":""},{"location":"CHANGELOG/changelog-0.2.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.2.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.3.x/","title":"Claudie v0.3","text":"

Due to a breaking change in the input manifest schema, the v0.3.x will not be backwards compatible with v0.2.x

"},{"location":"CHANGELOG/changelog-0.3.x/#deployment","title":"Deployment","text":"

To deploy the Claudie v0.3.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

    If valid, output is, depending on the archive downloaded

    claudie.tar.gz: OK\n

    or

    claudie.zip: OK\n

    or both.

  3. Lastly, unpack the archive and deploy using kubectl

    We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

    kubectl apply -k .\n
"},{"location":"CHANGELOG/changelog-0.3.x/#v030","title":"v0.3.0","text":""},{"location":"CHANGELOG/changelog-0.3.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.3.x/#bugfixes","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.3.x/#known-issues","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.3.x/#v031","title":"v0.3.1","text":""},{"location":"CHANGELOG/changelog-0.3.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.3.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.3.x/#known-issues_1","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.3.x/#v032","title":"v0.3.2","text":""},{"location":"CHANGELOG/changelog-0.3.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.3.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.3.x/#known-issues_2","title":"Known issues","text":"

No known issues since the last release.

"},{"location":"CHANGELOG/changelog-0.4.x/","title":"Claudie v0.4","text":"

Due to a breaking change in the input manifest schema, the v0.4.x will not be backwards compatible with v0.3.x

"},{"location":"CHANGELOG/changelog-0.4.x/#deployment","title":"Deployment","text":"

To deploy the Claudie v0.4.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

    If valid, output is, depending on the archive downloaded

    claudie.tar.gz: OK\n

    or

    claudie.zip: OK\n

    or both.

  3. Lastly, unpack the archive and deploy using kubectl

    We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

    kubectl apply -k .\n
"},{"location":"CHANGELOG/changelog-0.4.x/#v040","title":"v0.4.0","text":""},{"location":"CHANGELOG/changelog-0.4.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.4.x/#bugfixes","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.4.x/#known-issues","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.4.x/#v041","title":"v0.4.1","text":""},{"location":"CHANGELOG/changelog-0.4.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.4.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.4.x/#known-issues_1","title":"Known issues","text":"

No known issues since the last release

"},{"location":"CHANGELOG/changelog-0.4.x/#v042","title":"v0.4.2","text":""},{"location":"CHANGELOG/changelog-0.4.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.4.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.4.x/#knownissues","title":"KnownIssues","text":"

No new known issues since the last release

"},{"location":"CHANGELOG/changelog-0.5.x/","title":"Claudie v0.5","text":"

Due to a breaking change in swapping the CNI used in the Kubernetes cluster, the v0.5.x will not be backwards compatible with v0.4.x

"},{"location":"CHANGELOG/changelog-0.5.x/#deployment","title":"Deployment","text":"

To deploy Claudie v0.5.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

    If valid, output is, depending on the archive downloaded

    claudie.tar.gz: OK\n

    or

    claudie.zip: OK\n

    or both.

  3. Lastly, unpack the archive and deploy using kubectl

    We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

    kubectl apply -k .\n
"},{"location":"CHANGELOG/changelog-0.5.x/#v050","title":"v0.5.0","text":""},{"location":"CHANGELOG/changelog-0.5.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.5.x/#known-issues","title":"Known issues","text":""},{"location":"CHANGELOG/changelog-0.5.x/#v051","title":"v0.5.1","text":""},{"location":"CHANGELOG/changelog-0.5.x/#bug-fixes","title":"Bug fixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/","title":"Claudie v0.6","text":"

Due to a breaking change in the terraform files the v0.6.x will not be backwards compatible with v0.5.x

"},{"location":"CHANGELOG/changelog-0.6.x/#deployment","title":"Deployment","text":"

To deploy Claudie v0.6.X, please:

  1. Download the archive and checksums from the release page

  2. Verify the archive with the sha256 (optional)

    sha256sum -c --ignore-missing checksums.txt\n

If valid, output is, depending on the archive downloaded

```sh\nclaudie.tar.gz: OK\n```\n

or

```sh\nclaudie.zip: OK\n```\n

or both.

  3. Lastly, unpack the archive and deploy using kubectl

We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it. To do this, change contents of the files in mongo/secrets, minio/secrets and dynamo/secrets respectively.

```sh\nkubectl apply -k .\n```\n
"},{"location":"CHANGELOG/changelog-0.6.x/#v060","title":"v0.6.0","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#other","title":"Other","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v061","title":"v0.6.1","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v062","title":"v0.6.2","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v063","title":"v0.6.3","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_3","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v064","title":"v0.6.4","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features_3","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_4","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v065","title":"v0.6.5","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features_4","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_5","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.6.x/#v066","title":"v0.6.6","text":""},{"location":"CHANGELOG/changelog-0.6.x/#features_5","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.6.x/#bugfixes_6","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.7.x/","title":"Claudie v0.7","text":"

Due to using the latest version of Longhorn, the v0.7.x will not be backwards compatible with v0.6.x.

"},{"location":"CHANGELOG/changelog-0.7.x/#deployment","title":"Deployment","text":"

To deploy Claudie v0.7.X, please:

  1. Download claudie.yaml from release page

  2. Verify the checksum with sha256 (optional)

    We provide checksums in claudie_checksum.txt; you can verify the downloaded yaml files against the provided checksums.

  3. Install claudie using kubectl

We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it.

kubectl apply -f https://github.com/berops/claudie/releases/latest/download/claudie.yaml\n

To further harden claudie, you may want to deploy our pre-defined network policies:

# for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
# other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n

"},{"location":"CHANGELOG/changelog-0.7.x/#v070","title":"v0.7.0","text":"

Upgrade procedure: Before upgrading Claudie, upgrade Longhorn to 1.6.x as per this guide. In most cases this will boil down to running the following command: kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.6.0/deploy/longhorn.yaml.

"},{"location":"CHANGELOG/changelog-0.7.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.7.x/#bugfixes","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.7.x/#v071","title":"v0.7.1","text":"

Migrate from the legacy package repositories apt.kubernetes.io, yum.kubernetes.io to the Kubernetes community-hosted repositories pkgs.k8s.io. A detailed how-to can be found at https://kubernetes.io/blog/2023/08/31/legacy-package-repository-deprecation/

Kubernetes version 1.24 is no longer supported. 1.25.x, 1.26.x and 1.27.x are the currently supported versions.

"},{"location":"CHANGELOG/changelog-0.7.x/#bugfixes_1","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.7.x/#v072","title":"v0.7.2","text":""},{"location":"CHANGELOG/changelog-0.7.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.7.x/#v073","title":"v0.7.3","text":""},{"location":"CHANGELOG/changelog-0.7.x/#bugfixes_2","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.7.x/#v074","title":"v0.7.4","text":""},{"location":"CHANGELOG/changelog-0.7.x/#bugfixes_3","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.7.x/#v075","title":"v0.7.5","text":""},{"location":"CHANGELOG/changelog-0.7.x/#features_2","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.7.x/#bugifxes","title":"Bugifxes","text":""},{"location":"CHANGELOG/changelog-0.8.x/","title":"Claudie v0.8","text":"

Due to updates to the terraform files, clusters built with Claudie version v0.7.x will be forced to be recreated by v0.8.x.

Nodepool/cluster names that do not meet the new length requirements (14 characters for nodepool names, 28 characters for cluster names) must be adjusted, otherwise the new length validation will fail. You can achieve a rolling update by adding new nodepools with the new names and then removing the old nodepools before updating to version 0.8.

Before updating, make backups of your data.

"},{"location":"CHANGELOG/changelog-0.8.x/#deployment","title":"Deployment","text":"

To deploy Claudie v0.8.X, please:

  1. Download claudie.yaml from release page

  2. Verify the checksum with sha256 (optional)

We provide checksums in claudie_checksum.txt; you can verify the downloaded yaml files against the provided checksums.

  3. Install claudie using kubectl

We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it.

kubectl apply -f https://github.com/berops/claudie/releases/latest/download/claudie.yaml\n

To further harden claudie, you may want to deploy our pre-defined network policies:

# for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
# other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n

"},{"location":"CHANGELOG/changelog-0.8.x/#v080","title":"v0.8.0","text":""},{"location":"CHANGELOG/changelog-0.8.x/#features","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.8.x/#v081","title":"v0.8.1","text":"

Nodepools with the Genesis Cloud provider will trigger a recreation of the cluster due to the change in terraform files. Make a backup of your data if your cluster contains Genesis Cloud nodepools.

"},{"location":"CHANGELOG/changelog-0.8.x/#features_1","title":"Features","text":""},{"location":"CHANGELOG/changelog-0.8.x/#bugfixes","title":"Bugfixes","text":""},{"location":"CHANGELOG/changelog-0.9.x/","title":"Claudie v0.9","text":"

Due to changes to the core of how Claudie works with terraform files and to the representation of the data in persistent storage, the v0.9.x version will not be backwards compatible with clusters built using previous Claudie versions.

"},{"location":"CHANGELOG/changelog-0.9.x/#most-notable-changes-tldr","title":"Most notable changes (TL;DR)","text":""},{"location":"CHANGELOG/changelog-0.9.x/#experimental","title":"Experimental","text":"

Currently the HTTP proxy is experimental. It is made available by modifying HTTP_PROXY_MODE in the Claudie config map in the claudie namespace. The possible values are (on|off|default). Default means that if a Kubernetes cluster uses Hetzner nodepools, it will automatically switch to using the proxy, as we have encountered the most bad-IP issues with Hetzner. By default the proxy is turned off.
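
For illustration only, a sketch of switching the proxy on; the config map name below is a placeholder to substitute with the actual config map in the claudie namespace:

kubectl patch configmap <claudie-config-map> -n claudie --type merge -p '{\"data\":{\"HTTP_PROXY_MODE\":\"on\"}}'\n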

It should be noted that the proxy is still in an experimental phase, where the API for interacting with the proxy may change in the future. Therefore, clusters using this feature in this release run the risk of being backwards incompatible with future 0.9.x releases, which will further stabilise the proxy API.

"},{"location":"CHANGELOG/changelog-0.9.x/#deployment","title":"Deployment","text":"

To deploy Claudie v0.9.X, please:

  1. Download claudie.yaml from release page

  2. Verify the checksum with sha256 (optional)

We provide checksums in claudie_checksum.txt; you can verify the downloaded yaml files against the provided checksums.

  3. Install Claudie using kubectl

We strongly recommend changing the default credentials for MongoDB, MinIO and DynamoDB before you deploy it.

kubectl apply -f https://github.com/berops/claudie/releases/latest/download/claudie.yaml\n

To further harden Claudie, you may want to deploy our pre-defined network policies:

# for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
# other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n

"},{"location":"CHANGELOG/changelog-0.9.x/#v090","title":"v0.9.0","text":""},{"location":"CHANGELOG/changelog-0.9.x/#whats-changed","title":"What's changed","text":""},{"location":"CHANGELOG/changelog-0.9.x/#experimental_1","title":"Experimental","text":""},{"location":"CHANGELOG/changelog-0.9.x/#bug-fixes","title":"Bug fixes","text":""},{"location":"autoscaling/autoscaling/","title":"Autoscaling in Claudie","text":"

Claudie supports autoscaling by installing Cluster Autoscaler for Claudie-made clusters, with a custom implementation of the external gRPC cloud provider, called autoscaler-adapter in the Claudie context. This, together with Cluster Autoscaler, is automatically managed by Claudie for any cluster which has at least one node pool defined with the autoscaler field. What's more, you can change the node pool specification freely from autoscaler configuration to static count or vice versa. Claudie will seamlessly configure Cluster Autoscaler, or even remove it when it is no longer needed.
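
As a sketch (based on the dynamic node pool examples elsewhere in these docs, and assuming the autoscaler field takes min/max bounds in place of a static count), an autoscaled node pool might look roughly like this:

- name: htz-autoscaled\n  providerSpec:\n    name: hetzner-1\n    region: nbg1\n    zone: nbg1-dc3\n  serverType: cpx11\n  image: ubuntu-22.04\n  autoscaler:\n    min: 1\n    max: 5\n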

"},{"location":"autoscaling/autoscaling/#what-triggers-a-scale-up","title":"What triggers a scale up","text":"

The scale up is triggered if there are pods in the cluster, which are unschedulable and

However, if pods' resource requests are larger than any new node would offer, the scale up will not be triggered. The cluster is scanned every 10 seconds for such pods, to ensure a quick response to the cluster's needs. For more information, please have a look at the official Cluster Autoscaler documentation.

"},{"location":"autoscaling/autoscaling/#what-triggers-a-scale-down","title":"What triggers a scale down","text":"

The scale down is triggered if all of the following conditions are met

For more information, please have a look at official Cluster Autoscaler documentation.

"},{"location":"autoscaling/autoscaling/#architecture","title":"Architecture","text":"

As stated earlier, Claudie deploys Cluster Autoscaler and Autoscaler Adapter for every Claudie-made cluster which enables it. These components are deployed within the same cluster as Claudie.

"},{"location":"autoscaling/autoscaling/#considerations","title":"Considerations","text":"

As Claudie just extends Cluster Autoscaler, it is important that you follow their best practices. Furthermore, as the number of nodes in autoscaled node pools can be volatile, you should carefully plan out how you will use the storage on such node pools. Longhorn's support for Cluster Autoscaler is still in an experimental phase (Longhorn documentation).

"},{"location":"claudie-workflow/claudie-workflow/","title":"Claudie","text":""},{"location":"claudie-workflow/claudie-workflow/#a-single-platform-for-multiple-clouds","title":"A single platform for multiple clouds","text":""},{"location":"claudie-workflow/claudie-workflow/#microservices","title":"Microservices","text":""},{"location":"claudie-workflow/claudie-workflow/#data-stores","title":"Data stores","text":""},{"location":"claudie-workflow/claudie-workflow/#tools-used","title":"Tools used","text":""},{"location":"claudie-workflow/claudie-workflow/#manager","title":"Manager","text":"

Manager is the brain and main entry point of Claudie. To build clusters, users/services submit their configs to the manager service. The manager creates the desired state and schedules a number of jobs to be executed in order to achieve the desired state based on the current state. The jobs are then picked up by the builder service.

For the API see the GRPC definitions.

"},{"location":"claudie-workflow/claudie-workflow/#flow","title":"Flow","text":"

Each newly created manifest starts in the Pending state. Pending manifests are periodically checked and, based on the specification provided in the applied configs, the desired state for each cluster, along with the tasks to be performed to achieve it, is created, after which the manifest is moved to the Scheduled state. Tasks from Scheduled manifests are picked up by builder services, gradually building the desired state. From this state, the manifest can end up in the Done or Error state. Any changes to the input manifest while it is in the Scheduled state will be reflected after it is moved to the Done state, after which the cycle repeats.

Each cluster has a current state and a desired state, based on which tasks are created. The desired state is created only once, when changes to the configuration are detected. Several tasks can be created that will gradually converge the current state to the desired state. Each time a task is picked up by the builder service, the relevant state from the current state is transferred to the task so that each task has up-to-date information about the current infrastructure, and it's up to the builder service to build/modify/delete the missing pieces in the picked-up task.

Once a task is done building, either successfully or with an error, the current state should be updated by the builder service so that the manager has accurate information about the current state of the infrastructure. When the manager receives a request to update the current state, it transfers relevant information to the desired state that was created at the beginning, before the tasks were scheduled. This is the only point where the desired state is updated, and we only transfer information from the current state (such as newly built nodes, IPs, etc.). After all tasks have finished successfully, the current and desired state should match.

"},{"location":"claudie-workflow/claudie-workflow/#rolling-updates","title":"Rolling updates","text":"

Unless otherwise specified, the default is to use the external templates located at https://github.com/berops/claudie-config to build the infrastructure for the dynamic nodepools. The templates provide reasonable defaults that anyone can use to build multi-provider clusters.

As we understand that someone may need more specific scenarios, we allow these external templates to be overridden by the user, see https://docs.claudie.io/latest/input-manifest/external-templates/ for more information. By providing the ability to specify the templates that should be used when building the infrastructure of the InputManifest, there is one common scenario that we decided should be handled by the manager service, which is rolling updates.

Rolling updates of nodepools are performed when a change to a provider's external templates is registered. The manager then checks that the external repository of the new templates exists and uses them to perform a rolling update of the already built infrastructure. The rolling update is performed in the following steps

If a failure occurs during the rolling update of a single Nodepool, the state is rolled back to the last possible working state. Rolling updates have a retry strategy that results in endless processing of the rolling update until it succeeds.

If the rollback to the last working state fails, it will also be retried indefinitely, in which case it is up to the Claudie user to repair the cluster so that the rolling update can continue.

The individual states of the Input Manifest and how they are processed within manager are further visually described in the following sections.

"},{"location":"claudie-workflow/claudie-workflow/#pending-state","title":"Pending State","text":""},{"location":"claudie-workflow/claudie-workflow/#scheduled-state","title":"Scheduled State","text":""},{"location":"claudie-workflow/claudie-workflow/#doneerror-state","title":"Done/Error State","text":""},{"location":"claudie-workflow/claudie-workflow/#builder","title":"Builder","text":"

The builder processes tasks scheduled by the manager, gradually building the desired state of the infrastructure. It communicates with the terraformer, ansibler, kube-eleven and kuber services in order to manage the infrastructure.

"},{"location":"claudie-workflow/claudie-workflow/#flow_1","title":"Flow","text":""},{"location":"claudie-workflow/claudie-workflow/#terraformer","title":"Terraformer","text":"

Terraformer creates or destroys infrastructure via Terraform calls.

For the API see the GRPC definitions.

"},{"location":"claudie-workflow/claudie-workflow/#ansibler","title":"Ansibler","text":"

Ansibler uses Ansible to:

For the API see the GRPC definitions.

"},{"location":"claudie-workflow/claudie-workflow/#kube-eleven","title":"Kube-eleven","text":"

Kube-eleven uses KubeOne to spin up Kubernetes clusters out of the spawned and pre-configured infrastructure.

For the API see the GRPC definitions.

"},{"location":"claudie-workflow/claudie-workflow/#kuber","title":"Kuber","text":"

Kuber manipulates the cluster resources using kubectl.

For the API see the GRPC definitions.

"},{"location":"claudie-workflow/claudie-workflow/#claudie-operator","title":"Claudie-operator","text":"

Claudie-operator is a layer between the user and Claudie. It is an InputManifest Custom Resource Definition controller that communicates changes made by the user to the manager service.

"},{"location":"claudie-workflow/claudie-workflow/#flow_2","title":"Flow","text":""},{"location":"commands/commands/","title":"Command Cheat Sheet","text":"

In this section, we'll describe kubectl commands to interact with Claudie.

"},{"location":"commands/commands/#monitoring-the-cluster-state","title":"Monitoring the cluster state","text":"

Watch the cluster state in the InputManifest that is provisioned.

watch -n 2 'kubectl get inputmanifests.claudie.io manifest-name -ojsonpath='{.status}' | jq .'\n{\n  \"clusters\": {\n    \"my-super-cluster\": {\n      \"phase\": \"NONE\",\n      \"state\": \"DONE\"\n    }\n  },\n  \"state\": \"DONE\"\n}   \n

"},{"location":"commands/commands/#viewing-the-cluster-metadata","title":"Viewing the cluster metadata","text":"

Each secret created by Claudie has the following labels:

Key | Value
--- | -----
claudie.io/project | Name of the project.
claudie.io/cluster | Name of the cluster.
claudie.io/cluster-id | ID of the cluster.
claudie.io/output | Output type, either kubeconfig or metadata.

Claudie creates a kubeconfig secret in the claudie namespace:

kubectl get secrets -n claudie -l claudie.io/output=kubeconfig\n
NAME                                  TYPE     DATA   AGE\nmy-super-cluster-6ktx6rb-kubeconfig   Opaque   1      134m\n

You can recover kubeconfig for your cluster with the following command:

kubectl get secrets -n claudie -l claudie.io/output=kubeconfig,claudie.io/cluster=$YOUR-CLUSTER-NAME -o jsonpath='{.items[0].data.kubeconfig}' | base64 -d > my-super-cluster-kubeconfig.yaml\n

If you want to connect to your dynamic k8s nodes via SSH, you can recover the private SSH key for each nodepool:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_nodepools | map_values(.nodepool_private_key)'\n

To recover public IP of your dynamic k8s nodes to connect to via SSH:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_nodepools | map_values(.node_ips)'\n
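
With a private key from the previous command saved to a file (the filename below is just an example) and a node IP recovered, the SSH connection itself is standard; this assumes root is the provisioned user:

chmod 600 ./nodepool-private-key\nssh -i ./nodepool-private-key root@<node-public-ip>\n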

You can display all dynamic load balancer nodes metadata by:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r .dynamic_load_balancer_nodepools\n

In case you want to connect to your dynamic load balancer nodes via SSH, you can recover private SSH key:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_load_balancer_nodepools | .[]'\n

To recover public IP of your dynamic load balancer nodes to connect to via SSH:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_load_balancer_nodepools | .[] | map_values(.node_ips)'\n

You can display all static load balancer nodes metadata by:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r .static_load_balancer_nodepools\n

To display the public IPs and private SSH keys of your static load balancer nodes:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r '.static_load_balancer_nodepools | .[] | map_values(.node_info)'\n

To connect to one of your static load balancer nodes via SSH, you can recover private SSH key:

kubectl get secrets -n claudie -l claudie.io/output=metadata,claudie.io/cluster=$YOUR-CLUSTER-NAME -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r '.static_load_balancer_nodepools | .[]'\n
"},{"location":"contributing/contributing/","title":"Contributing","text":""},{"location":"contributing/contributing/#bug-reports","title":"Bug reports","text":"

When you encounter a bug, please create a new issue and use our bug template. Before you submit, please check:

Be careful not to include your cloud credentials.

"},{"location":"contributing/local-testing/","title":"Local testing of Claudie","text":"

In order to speed up the development, Claudie can be run locally for initial testing purposes. However, it's important to note that running Claudie locally has limitations compared to running it in a Kubernetes cluster.

"},{"location":"contributing/local-testing/#limitations-of-claudie-when-running-locally","title":"Limitations of Claudie when running locally","text":""},{"location":"contributing/local-testing/#claudie-operatorcrd-testing","title":"Claudie Operator/CRD testing","text":"

The Operator component as well as the CRDs heavily rely on a Kubernetes cluster. However, with a little hacking, you can test them by creating a local cluster (minikube/kind/...) and exporting the environment variable KUBECONFIG pointing to the local cluster's kubeconfig. Once you start the Claudie Operator, it should pick up the kubeconfig and you can use the local cluster to deploy and test the CRDs.
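
A minimal sketch of that setup with kind (the cluster name and kubeconfig path are arbitrary):

kind create cluster --name claudie-dev --kubeconfig ./claudie-dev-kubeconfig\nexport KUBECONFIG=$PWD/claudie-dev-kubeconfig\n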

"},{"location":"contributing/local-testing/#autoscaling-testing","title":"Autoscaling testing","text":"

Testing or simulating the Claudie autoscaling is not feasible when running Claudie locally because it dynamically deploys Cluster Autoscaler and Autoscaler Adapter in the management cluster.

"},{"location":"contributing/local-testing/#claudie-outputs","title":"Claudie outputs","text":"

Since Claudie generates two types of output per cluster (node metadata and kubeconfig), testing these outputs is not possible because they are created as Kubernetes Secrets.

"},{"location":"contributing/local-testing/#requirements-to-run-claudie-locally","title":"Requirements to run Claudie locally","text":"

As Claudie uses a number of external tools to build and manage clusters, it is important that these tools are installed on your local system.

"},{"location":"contributing/local-testing/#how-to-run-claudie-locally","title":"How to run Claudie locally","text":"

To simplify running Claudie on a local system, we recommend using the rules defined in the Makefile.

To start all the datastores, simply run make datastoreStart, which will create containers for each required datastore with preconfigured port-forwarding.

To start all services, run make <service name> in separate shells. If you make changes to the code, apply them by killing the process and starting it again with make <service name>.

"},{"location":"contributing/local-testing/#how-to-test-claudie-locally","title":"How to test Claudie locally","text":"

Once Claudie is up and running, there are three main ways to test it locally.

"},{"location":"contributing/local-testing/#test-claudie-using-testing-framework","title":"Test Claudie using Testing-framework","text":"

You can test locally deployed Claudie via a custom-made testing framework. It was designed to support testing from local, so the code itself does not require any changes. However, in order to supply testing input manifests, you have to create a directory called test-sets in ./testing-framework, which will contain the input manifests. Bear in mind that these manifests are not CRDs; rather, they are raw YAML files as described in /internal/manifest/manifest.go.

This way of testing brings benefits like automatic verification of Longhorn deployment or automatic clean up of the infrastructure upon failure.

To run the Testing-framework locally, use the make test rule, which will start the testing. If you wish to disable the automatic clean up, set the environment variable AUTO_CLEAN_UP to FALSE.
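
For example, a run with the automatic clean up disabled could be started as:

AUTO_CLEAN_UP=FALSE make test\n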

Example of directory structure:

services/testing-framework/\n\u251c\u2500\u2500 ...\n\u2514\u2500\u2500 test-sets\n    \u2514\u2500\u2500 test-set-dev\n        \u251c\u2500\u2500 1.yaml\n        \u251c\u2500\u2500 2.yaml\n        \u2514\u2500\u2500 3.yaml\n

Example of raw YAML input manifest:

name: TestSetDev\n\nproviders:\n  hetzner:\n    - name: hetzner-1\n      credentials: \"api token\"\n  gcp:\n    - name: gcp-1\n      credentials: |\n        service account key as JSON\n      gcpProject: \"project id\"\n  oci:\n    - name: oci-1\n      privateKey: |\n        -----BEGIN RSA PRIVATE KEY-----\n        ..... put the private key here ....\n        -----END RSA PRIVATE KEY-----\n      keyFingerprint: \"key fingerprint\"\n      tenancyOcid: \"tenancy ocid\"\n      userOcid: \"user ocid\"\n      compartmentOcid: \"compartment ocid\"\n  aws:\n    - name: aws-1\n      accessKey: \"access key\"\n      secretKey: \"secret key\"\n  azure:\n    - name: azure-1\n      subscriptionId: \"subscription id\"\n      tenantId: \"tenant id\"\n      clientId: \"client id\"\n      clientSecret: \"client secret\"\n  hetznerdns:\n    - name: hetznerdns-1\n      apiToken: \"api token\"\n  cloudflare:\n    - name: cloudflare-1\n      apiToken: \"api token\"\n\nnodePools:\n  dynamic:\n    - name: htz-compute\n      providerSpec:\n        name: hetzner-1\n        region: nbg1\n        zone: nbg1-dc3\n      count: 1\n      serverType: cpx11\n      image: ubuntu-22.04\n      storageDiskSize: 50\n\n    - name: hetzner-lb\n      providerSpec:\n        name: hetzner-1\n        region: nbg1\n        zone: nbg1-dc3\n      count: 1\n      serverType: cpx11\n      image: ubuntu-22.04\n\n  static:\n    - name: static-pool\n      nodes:\n        - endpoint: \"192.168.52.1\"\n          username: root\n          privateKey: |\n            -----BEGIN RSA PRIVATE KEY-----\n            ...... put the private key here .....\n            -----END RSA PRIVATE KEY-----\n        - endpoint: \"192.168.52.2\"\n          username: root\n          privateKey: |\n            -----BEGIN RSA PRIVATE KEY-----\n            ...... put the private key here .....\n            -----END RSA PRIVATE KEY-----\n\nkubernetes:\n  clusters:\n    - name: dev-test\n      version: v1.27.0\n      network: 192.168.2.0/24\n      pools:\n        control:\n          - static-pool\n        compute:\n          - htz-compute\n\nloadBalancers:\n  roles:\n    - name: apiserver-lb\n      protocol: tcp\n      port: 6443\n      targetPort: 6443\n      targetPools: \n        - static-pool\n  clusters:\n    - name: miro-lb\n      roles:\n        - apiserver-lb\n      dns:\n        dnsZone: zone.com\n        provider: cloudflare-1\n      targetedK8s: dev-test\n      pools:\n        - hetzner-lb\n
"},{"location":"contributing/local-testing/#test-claudie-using-manual-manifest-injection","title":"Test Claudie using manual manifest injection","text":"

To test Claudie in a more \"manual\" way, you can use the specified GRPC API to inject/delete/modify an input manifest.

When using this technique, you will most likely omit the initial step of the InputManifest being passed through the operator. If this is the case, you will need to add templates to the providers listed in the InputManifest, otherwise the workflow will panic at an early stage due to unset templates.

To specify templates you add them to the provider definition as shown in the snippet below:

  hetzner:\n    - name: hetzner-1\n      credentials: \"api token\"\n      templates:\n        repository: \"https://github.com/berops/claudie-config\"\n        path: \"templates/terraformer/hetzner\"\n

We provide ready-to-use terraform templates that can be used by Claudie at https://github.com/berops/claudie-config. If you would like to use your own, you can fork the repo or write your own templates and modify the provider definition in the InputManifest to point to them.

"},{"location":"contributing/local-testing/#deploy-claudie-in-the-local-cluster-for-testing","title":"Deploy Claudie in the local cluster for testing","text":"

Claudie can be also tested on a local cluster by following these steps.

  1. Spin up a local cluster using a tool like Kind, Minikube, or any other preferred method.

  2. Build the images for Claudie from the current source code by running the command make containerimgs. This command will build all the necessary images for Claudie and assign a new tag: a short hash of the most recent commit.

  3. Update the new image tag in the relevant kustomization.yaml files. These files can be found in the ./manifests directory. Additionally, set the imagePullPolicy to Never. A sketch of the image tag change is shown after this list.

  4. Import the built images into your local cluster. This step will vary depending on the specific tool you're using for the local cluster. Refer to the documentation of the cluster tool for instructions on importing custom images.

  5. Apply the Claudie manifests to the local cluster.
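
As a sketch of the image tag change from step 3 (the kustomization path, image name and tag below are illustrative; adjust them to the actual services and the hash printed by the build, and note that imagePullPolicy is set in the deployment manifests, not here):

# excerpt of a kustomization.yaml under ./manifests (names are illustrative)\nimages:\n  - name: ghcr.io/berops/claudie/builder\n    newTag: <short-commit-hash>\n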

By following these steps, you can set up and test Claudie on a local cluster using the newly built images. Remember, these steps need to be repeated whenever you make changes to the source code.

"},{"location":"contributing/release/","title":"How to release a new version of Claudie","text":"

The release process of Claudie consists of a few manual steps and a few automated steps.

"},{"location":"contributing/release/#manual-steps","title":"Manual steps","text":"

Whoever is responsible for creating a new release has to:

  1. Write a new entry to a relevant Changelog document
  2. Add release notes to the Releases page
  3. Publish a release
"},{"location":"contributing/release/#automated-steps","title":"Automated steps","text":"

After a new release is published, a release pipeline and a release-docs pipeline run.

A release pipeline consists of the following steps:

  1. Build new images tagged with the release tag
  2. Push them to the container registry where anyone can pull them
  3. Add Claudie manifest files to the release assets, with image tags referencing this release

A release-docs pipeline consists of the following steps:

  1. If there is a new Changelog file:
    1. Checkout to a new feature branch
    2. Add reference to the new Changelog file in mkdocs.yml
    3. Create a PR to merge changes from new feature branch to master (PR needs to be created to update changes in master branch and align with branch protection)
  2. Deploy new version of docs on docs.claudie.io
"},{"location":"creating-claudie-backup/creating-claudie-backup/","title":"Creating Claudie Backup","text":"

In this section we'll explain where Claudie's state lives, how to back up the necessary components, and how to restore them on a completely new cluster.

"},{"location":"creating-claudie-backup/creating-claudie-backup/#claudie-state","title":"Claudie state","text":"

Claudie stores its state in 3 different places.

These are the only services that will have a PVC attached to them; the others are stateless.

"},{"location":"creating-claudie-backup/creating-claudie-backup/#backing-up-claudie","title":"Backing up Claudie","text":""},{"location":"creating-claudie-backup/creating-claudie-backup/#using-velero","title":"Using Velero","text":"

This is the primary backup and restore method.

Velero does not support HostPath volumes. If the PVCs in your management cluster are attached to such volumes (e.g. when running on Kind or MiniKube), the backup will not work. In this case, use the manual backup method described below.

All resources that are deployed or created by Claudie can be identified with the following label:

    app.kubernetes.io/part-of: claudie\n

If you want your deployed Input Manifests to be part of the backup, you'll have to add the same label to them.
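
For example, an already applied InputManifest could be labelled in place like this (the resource name and namespace are placeholders):

kubectl label inputmanifests.claudie.io <your-inputmanifest> -n <namespace> app.kubernetes.io/part-of=claudie\n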

We'll walk through the following scenario step-by-step to back up claudie and then restore it.

Claudie is already deployed on an existing Management Cluster and at least 1 Input Manifest has been applied. The state is backed up and the Management Cluster is replaced by a new one on which we restore the state.

To back up the resources we'll be using Velero version v1.11.0.

The following steps will all be executed with the existing Management Cluster in context.

  1. To create a backup, Velero needs to store the state to external storage. The list of supported providers for the external storage can be found in the link. In this guide we'll be using AWS S3 object storage for our backup.

  2. Prepare the S3 bucket by following the first two steps in this setup guide, excluding the installation step, as this will be different for our use-case.

If you do not have the aws CLI locally installed, follow the user guide to set it up.

  3. Execute the following command to install Velero on the Management Cluster.
    velero install \\\n--provider aws \\\n--plugins velero/velero-plugin-for-aws:v1.6.0 \\\n--bucket $BUCKET \\\n--secret-file ./credentials-velero \\\n--backup-location-config region=$REGION \\\n--snapshot-location-config region=$REGION \\\n--use-node-agent \\\n--default-volumes-to-fs-backup\n

Following the instructions in step 2, you should have a credentials-velero file with the access and secret keys for the aws setup. The env variables $BUCKET and $REGION should be set to the name and region for the bucket created in AWS S3.
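
For reference, the credentials-velero file is a standard AWS-style credentials file; a minimal sketch with placeholder values:

[default]\naws_access_key_id=<ACCESS_KEY_ID>\naws_secret_access_key=<SECRET_ACCESS_KEY>\n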

By default Velero will use your default config $HOME/.kube/config; if this is not the config that points to your Management Cluster, you can override it with the --kubeconfig argument.

  4. Back up claudie by executing
    velero backup create claudie-backup --selector app.kubernetes.io/part-of=claudie\n

To track the progress of the backup execute

velero backup describe claudie-backup --details\n

From this point the new Management Cluster for Claudie is in context. We expect that your default kubeconfig points to the new Management Cluster, if it does not, you can override it in the following commands using --kubeconfig ./path-to-config.

  1. Repeat the step to install Velero, but now on the new Management Cluster.
  2. Install cert manager to the new Management Cluster by executing:
    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml\n
  3. To restore the state that was stored in the S3 bucket execute
    velero restore create --from-backup claudie-backup\n

Once all resources are restored, you should be able to deploy new input manifests and also modify existing infrastructure without any problems.

"},{"location":"creating-claudie-backup/creating-claudie-backup/#manual-backup","title":"Manual backup","text":"

Claudie is already deployed on an existing Management Cluster and at least 1 Input Manifest has been applied.

Create a directory where the backup of the state will be stored.

mkdir claudie-backup\n

Put your Claudie inputmanifests into the created folder, e.g. kubectl get InputManifest -A -oyaml > ./claudie-backup/all.yaml

We will now back up the state of the respective input manifests from MongoDB and MinIO.

kubectl get pods -n claudie\n\nNAME                                READY   STATUS      RESTARTS      AGE\nansibler-6f4557cf74-b4dts           1/1     Running     0             18m\nbuilder-5d68987c86-qdfd5            1/1     Running     0             18m\nclaudie-operator-6d9ddc7f8b-hv84c   1/1     Running     0             18m\nmanager-5d75bfffc6-d9qfm            1/1     Running     0             18m\ncreate-table-job-ghb9f              0/1     Completed   1             18m\ndynamodb-6d65df988-c626j            1/1     Running     0             18m\nkube-eleven-556cfdfd98-jq6hl        1/1     Running     0             18m\nkuber-7f8cd4cd89-6ds2w              1/1     Running     0             18m\nmake-bucket-job-9mjft               0/1     Completed   0             18m\nminio-0                             1/1     Running     0             18m\nminio-1                             1/1     Running     0             18m\nminio-2                             1/1     Running     0             18m\nminio-3                             1/1     Running     0             18m\nmongodb-6ccb5f5dff-ptdw2            1/1     Running     0             18m\nterraformer-66c6f67d98-pwr9t        1/1     Running     0             18m\n

To back up the state from MongoDB, execute the following command

kubectl exec -n claudie mongodb-<your-mongdb-pod> -- sh -c 'mongoexport --uri=mongodb://$MONGO_INITDB_ROOT_USERNAME:$MONGO_INITDB_ROOT_PASSWORD@localhost:27017/claudie -c inputManifests --authenticationDatabase admin' > claudie-backup/inputManifests\n

Next, we need to back up the state from MinIO. Port-forward the MinIO service so that it is accessible from localhost.

kubectl port-forward -n claudie svc/minio 9000:9000\n

Setup an alias for the mc command line tool.

mc alias set claudie-minio http://127.0.0.1:9000 <ACCESSKEY> <SECRETKEY>\n

Provide the access and secret key for MinIO. The defaults can be found in the GitHub repository in the manifests/claudie/minio/secrets folder. If you have not changed them, we strongly encourage you to do so!

Download the state into the backup folder

mc mirror claudie-minio/claudie-tf-state-files ./claudie-backup\n

You now have everything you need to restore your input manifests to a new management cluster.

These files will contain your credentials; DO NOT STORE THEM IN PUBLIC!

To restore the state on your new management cluster you can follow these commands. We expect that your default kubeconfig points to the new Management Cluster, if it does not, you can override it in the following commands using --kubeconfig ./path-to-config.

Copy the collection into the MongoDB pod.

kubectl cp ./claudie-backup/inputManifests mongodb-<your-mongodb-pod>:/tmp/inputManifests -n claudie\n

Import the state to MongoDB.

kubectl exec -n claudie mongodb-<your-mongodb-pod> -- sh -c 'mongoimport --uri=mongodb://$MONGO_INITDB_ROOT_USERNAME:$MONGO_INITDB_ROOT_PASSWORD@localhost:27017/claudie -c inputManifests --authenticationDatabase admin --file /tmp/inputManifests'\n

Don't forget to delete the /tmp/inputManifests file.
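
For example, using the same pod as above:

kubectl exec -n claudie mongodb-<your-mongodb-pod> -- rm /tmp/inputManifests\n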

Port-forward the MinIO service and import the backed up state.

mc cp --recursive ./claudie-backup/<your-folder-name-downloaded-from-minio> claudie-minio/claudie-tf-state-files\n

You can now apply your Claudie inputmanifests which will be immediately in the DONE stage. You can verify this with

kubectl get inputmanifests -A\n

Now you can make any new changes to your inputmanifests on the new management cluster and the state will be re-used.

The secrets for the clusters, namely kubeconfig and cluster-metadata, are re-created after the workflow with the changes has finished.

Alternatively, you may also use any GUI client for MongoDB and MinIO for a more straightforward backup of the state. All you need to back up is the claudie-tf-state-files bucket in MinIO and the inputManifests collection from MongoDB.

Once all data is restored, you should be able to deploy new input manifests and also modify existing infrastructure without any problems.

"},{"location":"docs-guides/deployment-workflow/","title":"Documentation deployment","text":"

Our documentation is hosted on GitHub Pages. Whenever a new push to the gh-pages branch happens, a new version of the docs is deployed. All commits and pushes to this branch are automated through our release-docs.yml pipeline using the mike tool.

That's also the reason why we do not recommend making any manual changes in the gh-pages branch. However, in case you have to, use the commands below.

"},{"location":"docs-guides/deployment-workflow/#generate-a-new-version-of-the-docs","title":"Generate a new version of the docs","text":"
mike deploy <version>\n
mike deploy <version> --push\n
mike set-default <version>\n
"},{"location":"docs-guides/deployment-workflow/#deploy-docs-manually-from-some-older-github-tags","title":"Deploy docs manually from some older GitHub tags","text":"
git checkout tags/<tag>\n

To find out how, follow the mkdocs documentation.

python3 -m venv ./venv\n
source ./venv/bin/activate\n
pip install -r requirements.txt\n
mike deploy <version> --push\n
"},{"location":"docs-guides/deployment-workflow/#deploy-docs-for-a-new-release-manually","title":"Deploy docs for a new release manually","text":"

In case the release-docs.yml fails, you can deploy the new version manually by following these steps:

git checkout tags/<release tag>\n
python3 -m venv ./venv\n
source ./venv/bin/activate\n
pip install -r requirements.txt\n
mike deploy <release tag> latest --push -u\n

Don't forget to use the latest tag in the last command, because otherwise the new version will not be loaded as the default one when visiting docs.claudie.io.

Find out more about how to work with mike.

"},{"location":"docs-guides/deployment-workflow/#automatic-update-of-the-latest-documentation-version","title":"Automatic update of the latest documentation version","text":"

The automatic-docs-update.yml pipeline will update the docs automatically in case you add the refresh-docs label or comment /refresh-docs on your PR. In order to trigger this pipeline again, you have to re-add the refresh-docs label or once again comment /refresh-docs in your PR.

[!NOTE] /refresh-docs comment triggers automatic update only when the automatic-docs-update.yml file is in the default branch.

"},{"location":"docs-guides/development/","title":"Development of the Claudie official docs","text":"

First of all, it is worth mentioning that we are using MkDocs to generate HTML documents from markdown ones. To make our documentation prettier, we use the Material theme for MkDocs. For versioning of our docs, we are using mike.

"},{"location":"docs-guides/development/#how-to-run","title":"How to run","text":"

First, install the dependencies from requirements.txt on your local machine. However, before doing that we recommend creating a virtual environment by running the command below.

python3 -m venv ./venv\n

After that, activate the newly created virtual environment by running:

source ./venv/bin/activate\n

Now we can install the docs dependencies mentioned before.

pip install -r requirements.txt\n

After successful installation, you can run the command below, which generates HTML files for the docs and hosts them on your local server.

mkdocs serve\n
"},{"location":"docs-guides/development/#how-to-test-changes","title":"How to test changes","text":"

Whenever you make changes in the docs folder or in the mkdocs.yml file, you can see whether they were applied as expected by running the command below, which starts the server with the newly generated docs.

mkdocs serve\n

Using this command you will not see the docs versioning, because we are using the mike tool for that.

In case you want to test the docs versioning, you will have to run:

mike serve\n

Keep in mind that mike takes the docs versions from the gh-pages branch. That means you will not be able to see your changes unless you have run the command below beforehand.

mike deploy <version>\n

Be careful, because this command creates a new version of the docs in your local gh-pages branch.
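
If you only created a version locally for testing and want to clean it up, mike also provides a delete command; as a rough sketch, it only modifies your local gh-pages branch unless --push is given:

mike delete <version>\n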

"},{"location":"faq/FAQ/","title":"Frequently Asked Question","text":"

We have prepared some of our most frequently asked questions to help you out!

"},{"location":"faq/FAQ/#does-claudie-make-sense-as-a-pure-k8s-orchestration-on-a-single-cloud-provider-iaas","title":"Does Claudie make sense as a pure K8s orchestration on a single cloud-provider IaaS?","text":"

Since Claudie specializes in multi-cloud, you will likely face some drawbacks, such as the need for a public IPv4 address for each node. Otherwise, it works well in single-provider mode. Using Claudie also gives you some advantages, such as the ability to scale to multi-cloud as your needs change, or the autoscaler that Claudie provides.

"},{"location":"faq/FAQ/#which-scenarios-make-sense-for-using-claudie-and-which-dont","title":"Which scenarios make sense for using Claudie and which don't?","text":"

Claudie aims to address the following scenarios, described in more detail on the use-cases page:

Using Claudie doesn't make sense when you rely on specific features of a cloud provider, thereby necessarily tying yourself to that cloud provider.

"},{"location":"faq/FAQ/#is-there-any-networking-performance-impact-due-to-the-introduction-of-the-vpn-layer","title":"Is there any networking performance impact due to the introduction of the VPN layer?","text":"

We compared the use of the VPN layer with other solutions and concluded that the impact on performance is negligible. If you are interested in the benchmarks we performed, we summarized the results in our blog post.

"},{"location":"faq/FAQ/#what-is-the-performance-impact-of-a-geographically-distributed-control-plane-in-claudie","title":"What is the performance impact of a geographically distributed control plane in Claudie?","text":"

We have performed several tests, and problems start to appear when the control nodes are geographically about 600 km apart. However, this is not an answer that fits all scenarios and should only be taken as a reference point.

If you are interested in the tests we have run and a more detailed answer, you can read more in our blog post.

"},{"location":"faq/FAQ/#does-the-cloud-provider-traffic-egress-bill-represent-a-significant-part-on-the-overall-running-costs","title":"Does the cloud provider traffic egress bill represent a significant part on the overall running costs?","text":"

Costs are individual and depend on the pricing of the selected cloud provider and the type of workload running on the cluster. Networking expenses can exceed 50% of your provider bill; therefore, we recommend making your workload geography- and provider-aware (e.g. using taints and affinities).

"},{"location":"faq/FAQ/#should-i-be-worried-about-giving-claudie-provider-credentials-including-ssh-keys","title":"Should I be worried about giving Claudie provider credentials, including ssh keys?","text":"

Provider credentials are created as secrets in Claudie's Management Cluster, which you then reference in the input manifest that is passed to Claudie. Claudie only uses the credentials to connect to nodes in the case of static nodepools, or to provision the required infrastructure in the case of dynamic nodepools. The credentials are as secure as your secret management allows.

We are transparent and all of our code is open-sourced, if in doubt you can always check for yourself.

"},{"location":"faq/FAQ/#does-each-node-need-a-public-ip-address","title":"Does each node need a public IP address?","text":"

For dynamic nodepools (nodes created by Claudie at the specified cloud providers), each node needs a public IP; for static nodepools, no public IP is needed.

"},{"location":"faq/FAQ/#is-a-guicliclusterapi-providerterraform-provider-planned","title":"Is a GUI/CLI/ClusterAPI provider/Terraform provider planned?","text":"

A GUI is not actively considered at this point in time. Other possibilities are openly discussed in this GitHub issue.

"},{"location":"faq/FAQ/#what-is-the-roadmap-for-adding-support-for-new-cloud-iaas-providers","title":"What is the roadmap for adding support for new cloud IaaS providers?","text":"

Adding support for a new cloud provider is an easy task. Let us know your needs.

"},{"location":"feedback/feedback-form/","title":"Feedback form","text":"Your message: Send"},{"location":"getting-started/detailed-guide/","title":"Detailed guide","text":"

This detailed guide provides an overview of Claudie's features, installation instructions, customization options, and its role in provisioning and managing clusters. We'll start by guiding you through setting up a management cluster, where Claudie will be installed, enabling you to monitor and control clusters across multiple hyperscalers.

Tip!

Claudie offers extensive customization options for your Kubernetes cluster across multiple hyperscalers. This detailed guide assumes you have AWS and Hetzner accounts. You can customize your deployment across different supported providers. If you wish to use different providers, we recommend following this guide anyway and creating your own input manifest file based on the provided example. Refer to the supported provider table for the input manifest configuration of each provider.

"},{"location":"getting-started/detailed-guide/#supported-providers","title":"Supported providers","text":"Supported Provider Node Pools DNS AWS Azure GCP OCI Hetzner Cloudflare N/A GenesisCloud N/A

For adding support for other cloud providers, open an issue or propose a PR.

"},{"location":"getting-started/detailed-guide/#prerequisites","title":"Prerequisites","text":"
  1. Install Kind by following the Kind documentation.
  2. Install the kubectl tool to communicate with your management cluster by following the Kubernetes documentation.
  3. Install Kustomize by following the Kustomize documentation.
  4. Install Docker by following the Docker documentation.
"},{"location":"getting-started/detailed-guide/#claudie-deployment","title":"Claudie deployment","text":"
  1. Create a Kind cluster where you will deploy Claudie, also referred to as the Management Cluster.

    kind create cluster --name=claudie\n

    Management cluster consideration.

    We recommend using a non-ephemeral management cluster! Deleting the management cluster prevents autoscaling of Claudie node pools and results in loss of state! We recommend using a managed Kubernetes offering to ensure management cluster resiliency. A Kind cluster is sufficient for this guide.

  2. Check if you have the correct current Kubernetes context. The context should be kind-claudie.

    kubectl config current-context\n
  3. If the context is not kind-claudie, switch to it:

    kubectl config use-context kind-claudie\n
  4. One of the prerequisites is cert-manager, deploy it with the following command:

    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml\n
  5. Download latest Claudie release:

    wget https://github.com/berops/claudie/releases/latest/download/claudie.yaml\n

    Tip!

    For the initial attempt, it's highly recommended to enable debug logs, especially when creating a large cluster with DNS. This helps identify and resolve any permission issues that may occur across different hyperscalers. Locate the ConfigMap with the GOLANG_LOG variable in the claudie.yaml file and change GOLANG_LOG: info to GOLANG_LOG: debug to enable debug logging. For more customization, refer to this table.
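
    As a convenience, a one-liner along these lines can flip the log level in the downloaded file; it assumes the value appears exactly as GOLANG_LOG: info and that GNU sed is available (on macOS use sed -i ''):

    sed -i 's/GOLANG_LOG: info/GOLANG_LOG: debug/' claudie.yaml\n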

  6. Deploy Claudie using Kustomize plugin:

    kubectl apply -f claudie.yaml\n

    Claudie Hardening

    By default, network policies are not included in claudie.yaml; instead, they're provided as standalone manifests to be deployed separately, since the Management Cluster to which Claudie is deployed may use a different CNI plugin. You can deploy our predefined network policies to further harden Claudie:

    # for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
    # other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n

    1. Claudie will be deployed into the claudie namespace; you can check whether all pods are running:

    kubectl get pods -n claudie \n
    NAME                           READY   STATUS      RESTARTS        AGE\nansibler-5c6c776b75-82c2q      1/1     Running     0               8m10s\nbuilder-59f9d44596-n2qzm       1/1     Running     0               8m10s\nmanager-5d76c89b4d-tb6h4       1/1     Running     1 (6m37s ago)   8m10s\ncreate-table-job-jvs9n         0/1     Completed   1               8m10s\ndynamodb-68777f9787-8wjhs      1/1     Running     0               8m10s\nclaudie-operator-5755b7bc69-5l84h      1/1     Running     0               8m10s\nkube-eleven-64468cd5bd-qp4d4   1/1     Running     0               8m10s\nkuber-698c4564c-dhsvg          1/1     Running     0               8m10s\nmake-bucket-job-fb5sp          0/1     Completed   0               8m10s\nminio-0                        1/1     Running     0               8m10s\nminio-1                        1/1     Running     0               8m10s\nminio-2                        1/1     Running     0               8m10s\nminio-3                        1/1     Running     0               8m10s\nmongodb-67bf769957-9ct5z       1/1     Running     0               8m10s\nterraformer-fd664b7ff-dd2h7    1/1     Running     0               8m9s\n

    Changing the namespace

    By default, Claudie monitors all namespaces and watches for InputManifest and provider Secrets in the cluster. If you would like to limit the namespaces to watch, overwrite the CLAUDIE_NAMESPACES environment variable in the claudie-operator deployment. Example:

    env:\n  - name: CLAUDIE_NAMESPACES\n    value: \"claudie,different-namespace\"\n
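
    Instead of editing the manifest by hand, you could patch the running deployment, assuming it is named claudie-operator and lives in the claudie namespace (as in the pod listing above):

    kubectl set env deployment/claudie-operator -n claudie CLAUDIE_NAMESPACES="claudie,different-namespace"\n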

    Troubleshoot!

    If you experience problems, refer to our troubleshooting guide.

  7. Let's create an AWS high-availability cluster, which we'll expand later on with Hetzner bursting capacity. Let's start by creating provider secrets for the infrastructure; next, we will reference them in inputmanifest-bursting.yaml.

    # AWS provider requires the secrets to have fields: accesskey and secretkey\nkubectl create secret generic aws-secret-1 --namespace=mynamespace --from-literal=accesskey='SLDUTKSHFDMSJKDIALASSD' --from-literal=secretkey='iuhbOIJN+oin/olikDSadsnoiSVSDsacoinOUSHD'\nkubectl create secret generic aws-secret-dns --namespace=mynamespace --from-literal=accesskey='ODURNGUISNFAIPUNUGFINB' --from-literal=secretkey='asduvnva+skd/ounUIBPIUjnpiuBNuNipubnPuip'    \n
    # inputmanifest-bursting.yaml\n\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: cloud-bursting\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: aws-1\n      providerType: aws\n      secretRef:\n        name: aws-secret-1\n        namespace: mynamespace\n    - name: aws-dns\n      providerType: aws\n      secretRef:\n        name: aws-secret-dns\n        namespace: mynamespace    \n  nodePools:\n    dynamic:\n      - name: aws-control\n        providerSpec:\n            name: aws-1\n            region: eu-central-1\n            zone: eu-central-1a\n        count: 3\n        serverType: t3.medium\n        image: ami-0965bd5ba4d59211c\n      - name: aws-worker\n        providerSpec:\n            name: aws-1\n            region: eu-north-1\n            zone: eu-north-1a\n        count: 3\n        serverType: t3.medium\n        image: ami-03df6dea56f8aa618\n        storageDiskSize: 200\n      - name: aws-lb\n        providerSpec:\n            name: aws-1\n            region: eu-central-2\n            zone: eu-central-2a\n        count: 2\n        serverType: t3.small\n        image: ami-0e4d1886bf4bb88d5\n  kubernetes:\n    clusters:\n      - name: my-super-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n            control:\n            - aws-control\n            compute:\n            - aws-worker\n  loadBalancers:\n    roles:\n      - name: apiserver\n        protocol: tcp\n        port: 6443\n        targetPort: 6443\n        targetPools:\n            - aws-control\n    clusters:\n      - name: loadbalance-me\n        roles:\n            - apiserver\n        dns:\n            dnsZone: domain.com # hosted zone domain name where claudie creates dns records for this cluster\n            provider: aws-dns\n            hostname: supercluster # the sub domain of the new cluster\n        targetedK8s: my-super-cluster\n        pools:\n            - aws-lb\n

    Tip!

    In this example, two AWS providers are used \u2014 one with access to compute resources and the other with access to DNS. However, it is possible to use a single AWS provider with permissions for both services.

  8. Apply the InputManifest crd with your cluster configuration file:

    kubectl apply -f ./inputmanifest-bursting.yaml\n

    Tip!

    InputManifests serve as a single source of truth for both Claudie and the user, which makes creating infrastructure via input manifests a form of infrastructure as code that can be easily integrated into a GitOps workflow.

    Errors in input manifest

    Validation webhook will reject the InputManifest at this stage if it finds errors within the manifest. Refer to our API guide for details.

  9. View logs from the claudie-operator service to see the InputManifest reconcile process:

    View the InputManifest state with kubectl

    kubectl get inputmanifests.claudie.io cloud-bursting -o jsonpath={.status} | jq .\n
    Here\u2019s an example of .status fields in the InputManifest resource type:

      {\n    \"clusters\": {\n      \"my-super-cluster\": {\n        \"message\": \" installing VPN\",\n        \"phase\": \"ANSIBLER\",\n        \"state\": \"IN_PROGRESS\"\n      }\n    },\n    \"state\": \"IN_PROGRESS\"\n  }\n

    Claudie architecture

    Claudie utilizes multiple services for cluster provisioning; refer to our workflow documentation to see how it works under the hood.

    Provisioning times may vary!

    Please note that cluster creation time may vary due to provisioning capacity and machine provisioning times of selected hyperscalers.

    After the workflow finishes, the InputManifest state reflects that the cluster is provisioned.

    kubectl get inputmanifests.claudie.io cloud-bursting -o jsonpath={.status} | jq .\n  {\n    \"clusters\": {\n      \"my-super-cluster\": {\n        \"phase\": \"NONE\",\n        \"state\": \"DONE\"\n      }\n    },\n    \"state\": \"DONE\"\n  }    \n
  10. Claudie creates a kubeconfig secret in the claudie namespace:

    kubectl get secrets -n claudie -l claudie.io/output=kubeconfig\n
    NAME                                  TYPE     DATA   AGE\nmy-super-cluster-6ktx6rb-kubeconfig   Opaque   1      134m\n

    You can recover kubeconfig for your cluster with the following command:

    kubectl get secrets -n claudie -l claudie.io/output=kubeconfig -o jsonpath='{.items[0].data.kubeconfig}' | base64 -d > my-super-cluster-kubeconfig.yaml\n

    If you want to connect to your dynamic k8s nodes via SSH, you can recover private SSH key:

    kubectl get secrets -n claudie -l claudie.io/output=metadata -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_nodepools | map_values(.nodepool_private_key)'\n

    To recover public IP of your dynamic k8s nodes to connect to via SSH:

    kubectl get secrets -n claudie -l claudie.io/output=metadata -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r .dynamic_nodepools.node_ips\n
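
    Putting the two together, a rough sketch of SSH-ing into one of the dynamic Kubernetes nodes might look like this; picking the first nodepool via jq is illustrative, and logging in as root is an assumption:

    # save the first nodepool's private key to a file\nkubectl get secrets -n claudie -l claudie.io/output=metadata -o jsonpath='{.items[0].data.metadata}' | base64 -d | jq -r '.dynamic_nodepools | to_entries[0].value.nodepool_private_key' > node-key.pem\nchmod 600 node-key.pem\n# use one of the public IPs printed by the previous command\nssh -i node-key.pem root@<node-public-ip>\n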

    In case you want to connect to your dynamic load balancer nodes via SSH, you can recover private SSH key:

    kubectl get secrets -n claudie -l claudie.io/output=metadata -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq '.dynamic_load_balancer_nodepools | .[]'\n

    To recover public IP addresses of your dynamic load balancer nodes to connect to via SSH:

    kubectl get secrets -n claudie -l claudie.io/output=metadata -ojsonpath='{.items[0].data.metadata}' | base64 -d | jq -r '.dynamic_load_balancer_nodepools[] | .node_ips'\n

    Each secret created by Claudie has the following labels:

    Key Value claudie.io/project Name of the project. claudie.io/cluster Name of the cluster. claudie.io/cluster-id ID of the cluster. claudie.io/output Output type, either kubeconfig or metadata.
  11. Use your new kubeconfig to see what\u2019s in your new cluster:

    kubectl get pods -A --kubeconfig=my-super-cluster-kubeconfig.yaml\n
  12. Let's add a bursting autoscaling node pool in Hetzner Cloud. In order to use other hyperscalers, we'll need to add a new provider with appropriate credentials. First we will create a provider secret for Hetzner Cloud; then we open the inputmanifest-bursting.yaml input manifest again and append the new Hetzner node pool configuration.

    # Hetzner provider requires the secrets to have field: credentials\nkubectl create secret generic hetzner-secret-1 --namespace=mynamespace --from-literal=credentials='kslISA878a6etYAfXYcg5iYyrFGNlCxcICo060HVEygjFs21nske76ksjKko21lp'\n

    Claudie autoscaling

    The autoscaler in Claudie is deployed in the Claudie management cluster and provisions additional resources remotely when needed. For more information, check out how Claudie autoscaling works.

    # inputmanifest-bursting.yaml\n\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: cloud-bursting\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: hetzner-1         # add under nodePools.dynamic section\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-1\n        namespace: mynamespace        \n  nodePools:\n    dynamic:\n    ...\n      - name: hetzner-worker  # add under nodePools.dynamic section\n        providerSpec:\n            name: hetzner-1   # use your new hetzner provider hetzner-1 to create these nodes\n            region: hel1\n            zone: hel1-dc2\n        serverType: cpx51\n        image: ubuntu-22.04\n        autoscaler:           # this node pool uses a claudie autoscaler instead of static count of nodes\n            min: 1\n            max: 10\n    kubernetes:\n      clusters:\n      - name: my-super-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n            control:\n            - aws-control\n            compute:\n            - aws-worker\n            - hetzner-worker  # add it to the compute list here\n...\n
  13. Update the CRD with the new InputManifest to incorporate the desired changes.

    Deleting existing secrets!

    Deleting or replacing existing input manifest secrets triggers cluster deletion! To add new components to your existing clusters, generate a new secret value and apply it using the following command.

    kubectl apply -f ./inputmanifest-bursting.yaml\n
  14. You can also pass through additional ports from load balancers to control plane and/or worker node pools by adding additional roles under roles.

    # inputmanifest-bursting.yaml\n\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: cloud-bursting\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  ...\n  loadBalancers:\n    roles:\n      - name: apiserver\n        protocol: tcp\n        port: 6443\n        targetPort: 6443\n        targetPools: # only loadbalances for port 6443 for the aws-control nodepool\n            - aws-control\n      - name: https\n        protocol: tcp\n        port: 443\n        targetPort: 443\n        targetPools: # only loadbalances for port 443 for the aws-worker nodepool\n            - aws-worker\n            # possible to add other nodepools, hetzner-worker, for example\n    clusters:\n      - name: loadbalance-me\n        roles:\n            - apiserver\n            - https # define it here\n        dns:\n            dnsZone: domain.com\n            provider: aws-dns\n            hostname: supercluster\n        targetedK8s: my-super-cluster\n        pools:\n            - aws-lb\n
    !!! note Load balancing To see how our load balancing works, please refer to our documentation.

  15. Update the InputManifest again with the new configuration.

    kubectl apply -f ./inputmanifest-bursting.yaml\n

  16. To delete the cluster, simply delete the InputManifest and wait for Claudie to destroy the infrastructure.

    kubectl delete -f ./inputmanifest-bursting.yaml\n

    Removing clusters

    Deleting Claudie or the management cluster does not remove the Claudie-managed clusters. Delete the InputManifest first to initiate Claudie's deletion process.

  17. After the claudie-operator finishes the deletion workflow, delete the kind cluster:

    kind delete cluster\n

"},{"location":"getting-started/detailed-guide/#general-tips","title":"General tips","text":""},{"location":"getting-started/detailed-guide/#control-plane-considerations","title":"Control plane considerations","text":""},{"location":"getting-started/detailed-guide/#egress-traffic","title":"Egress traffic","text":"

Hyperscalers charge for outbound data and multi-region infrastructure.

Example

Consider a scenario where you have a workload that involves processing extensive datasets from GCP storage using Claudie-managed AWS GPU instances. To minimize egress network traffic costs, it is recommended to host the datasets in an S3 bucket, limiting egress traffic from GCP and keeping the workload localised.

"},{"location":"getting-started/detailed-guide/#on-your-own-path","title":"On your own path","text":"

Once you've gained a comprehensive understanding of how Claudie operates through this guide, you can deploy it to a reliable management cluster; this could be a cluster that you already have. Tailor your input manifest file to suit your specific requirements and explore a detailed example showcasing providers, load balancing, and DNS records across various hyperscalers by visiting this comprehensive example.

"},{"location":"getting-started/detailed-guide/#claudie-customization","title":"Claudie customization","text":"

All of the customisable settings can be found in the claudie/.env file.

Variable Default Type Description GOLANG_LOG info string Log level for all services. Can be either info or debug. HTTP_PROXY_MODE default string default, on or off. default utilizes HTTP proxy only when there's at least one node in the K8s cluster from the Hetzner cloud provider. on uses HTTP proxy even when the K8s cluster doesn't have any nodes from the Hetzner. off turns off the usage of HTTP proxy. If the value isn't set or differs from on or off it always works with the default. HTTP_PROXY_URL http://proxy.claudie.io:8880 string HTTP proxy URL used in kubeone proxy configuration to build the K8s cluster. DATABASE_HOSTNAME mongodb string Database hostname used for Claudie configs. MANAGER_HOSTNAME manager string Manager service hostname. TERRAFORMER_HOSTNAME terraformer string Terraformer service hostname. ANSIBLER_HOSTNAME ansibler string Ansibler service hostname. KUBE_ELEVEN_HOSTNAME kube-eleven string Kube-eleven service hostname. KUBER_HOSTNAME kuber string Kuber service hostname. MINIO_HOSTNAME minio string MinIO hostname used for state files. DYNAMO_HOSTNAME dynamo string DynamoDB hostname used for lock files. DYNAMO_TABLE_NAME claudie string Table name for DynamoDB lock files. AWS_REGION local string Region for DynamoDB lock files. DATABASE_PORT 27017 int Port of the database service. TERRAFORMER_PORT 50052 int Port of the Terraformer service. ANSIBLER_PORT 50053 int Port of the Ansibler service. KUBE_ELEVEN_PORT 50054 int Port of the Kube-eleven service. MANAGER_PORT 50055 int Port of the MANAGER service. KUBER_PORT 50057 int Port of the Kuber service. MINIO_PORT 9000 int Port of the MinIO service. DYNAMO_PORT 8000 int Port of the DynamoDB service."},{"location":"getting-started/get-started-using-claudie/","title":"Getting started","text":""},{"location":"getting-started/get-started-using-claudie/#get-started-using-claudie","title":"Get started using Claudie","text":""},{"location":"getting-started/get-started-using-claudie/#prerequisites","title":"Prerequisites","text":"

Before you begin, please make sure you have the following prerequisites installed and set up:

  1. Claudie needs to be installed on an existing Kubernetes cluster, referred to as the Management Cluster, which it uses to manage the clusters it provisions. For testing, you can use ephemeral clusters like Minikube or Kind. However, for production environments, we recommend using a more resilient solution since Claudie maintains the state of the infrastructure it creates.

  2. Claudie requires the installation of cert-manager in your Management Cluster. To install cert-manager, use the following command:

    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml\n
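
    Before continuing, you can check that the cert-manager pods came up; by default cert-manager installs into the cert-manager namespace:

    kubectl get pods -n cert-manager\n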

"},{"location":"getting-started/get-started-using-claudie/#supported-providers","title":"Supported providers","text":"Supported Provider Node Pools DNS AWS Azure GCP OCI Hetzner Cloudflare N/A GenesisCloud N/A

For adding support for other cloud providers, open an issue or propose a PR.

"},{"location":"getting-started/get-started-using-claudie/#install-claudie","title":"Install Claudie","text":"
  1. Deploy Claudie to the Management Cluster:
    kubectl apply -f https://github.com/berops/claudie/releases/latest/download/claudie.yaml\n

To further harden claudie, you may want to deploy our pre-defined network policies:

# for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
# other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n

"},{"location":"getting-started/get-started-using-claudie/#deploy-your-cluster","title":"Deploy your cluster","text":"
  1. Create a Kubernetes Secret resource for your provider configuration.

    kubectl create secret generic example-aws-secret-1 \\\n  --namespace=mynamespace \\\n  --from-literal=accesskey='myAwsAccessKey' \\\n  --from-literal=secretkey='myAwsSecretKey'\n

    Check the supported providers for input manifest examples. For an input manifest spanning all supported hyperscalers, check out this example.

  2. Deploy the InputManifest resource, which Claudie uses to create infrastructure, and include the created secret in .spec.providers as follows:

    kubectl apply -f - <<EOF\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: examplemanifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n      - name: aws-1\n      providerType: aws\n      secretRef:\n          name: example-aws-secret-1 # reference the secret name\n          namespace: mynamespace     # reference the secret namespace\n  nodePools:\n      dynamic:\n      - name: control-aws\n          providerSpec:\n            name: aws-1\n            region: eu-central-1\n            zone: eu-central-1a\n          count: 1\n          serverType: t3.medium\n          image: ami-0965bd5ba4d59211c\n      - name: compute-1-aws\n          providerSpec:\n            name: aws-1\n            region: eu-west-3\n            zone: eu-west-3a\n          count: 2\n          serverType: t3.medium\n          image: ami-029c608efaef0b395\n          storageDiskSize: 50\n  kubernetes:\n      clusters:\n      - name: aws-cluster\n          version: 1.27.0\n          network: 192.168.2.0/24\n          pools:\n            control:\n                - control-aws\n            compute:\n                - compute-1-aws        \nEOF\n

    Deleting an existing InputManifest resource deletes the provisioned infrastructure!

"},{"location":"getting-started/get-started-using-claudie/#connect-to-your-cluster","title":"Connect to your cluster","text":"

Claudie outputs a base64-encoded kubeconfig secret named <cluster-name>-<cluster-hash>-kubeconfig in the namespace where it is deployed:

  1. Recover kubeconfig of your cluster by running:
    kubectl get secrets -n claudie -l claudie.io/output=kubeconfig -o jsonpath='{.items[0].data.kubeconfig}' | base64 -d > your_kubeconfig.yaml\n
  2. Use your new kubeconfig:
    kubectl get pods -A --kubeconfig=your_kubeconfig.yaml\n
"},{"location":"getting-started/get-started-using-claudie/#cleanup","title":"Cleanup","text":"
  1. To remove your cluster and its associated infrastructure, delete the cluster definition block from the InputManifest:
    kubectl apply -f - <<EOF\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: examplemanifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n      - name: aws-1\n      providerType: aws\n      secretRef:\n          name: example-aws-secret-1 # reference the secret name\n          namespace: mynamespace     # reference the secret namespace\n  nodePools:\n      dynamic:\n      - name: control-aws\n          providerSpec:\n            name: aws-1\n            region: eu-central-1\n            zone: eu-central-1a\n          count: 1\n          serverType: t3.medium\n          image: ami-0965bd5ba4d59211c\n      - name: compute-1-aws\n          providerSpec:\n            name: aws-1\n            region: eu-west-3\n            zone: eu-west-3a\n          count: 2\n          serverType: t3.medium\n          image: ami-029c608efaef0b395\n          storageDiskSize: 50\n  kubernetes:\n    clusters:\n#      - name: aws-cluster\n#          version: 1.27.0\n#          network: 192.168.2.0/24\n#          pools:\n#            control:\n#                - control-aws\n#            compute:\n#                - compute-1-aws         \nEOF\n
  2. To delete all clusters defined in the input manifest, delete the InputManifest. This triggers the deletion process, removing the infrastructure and all data associated with the manifest.

    kubectl delete inputmanifest examplemanifest\n
"},{"location":"hardening/hardening/","title":"Claudie Hardening","text":"

In this section we'll describe how to further harden the security of the default Claudie deployment.

"},{"location":"hardening/hardening/#passwords","title":"Passwords","text":"

When deploying the default manifests, Claudie uses simple passwords for MongoDB, DynamoDB and MinIO.

You can find the passwords at these paths:

manifests/claudie/mongo/secrets\nmanifests/claudie/minio/secrets\nmanifests/claudie/dynamo/secrets\n

It is highly recommended that you change these passwords to more secure ones.
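
Any password manager or generator will do; for example, a random value suitable as a replacement password can be produced with openssl (assuming it is installed):

openssl rand -base64 32\n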

"},{"location":"hardening/hardening/#network-policies","title":"Network Policies","text":"

The default deployment of Claudie comes without any network policies, since depending on the CNI used on the Management Cluster, network policies may not be fully supported.

We have a set of pre-defined network policies that can be found in:

manifests/network-policies\n

Currently, we have a Cilium-specific network policy that uses CiliumNetworkPolicy, and another that uses NetworkPolicy, which should be supported by most network plugins.

To install network policies, you can simply execute one of the following commands:

# for clusters using cilium as their CNI\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy-cilium.yaml\n
# other\nkubectl apply -f https://github.com/berops/claudie/releases/latest/download/network-policy.yaml\n
"},{"location":"http-proxy/http-proxy/","title":"Usage of HTTP proxy","text":"

In this section, we'll describe the default HTTP proxy setup and its further customization.

"},{"location":"http-proxy/http-proxy/#default-setup","title":"Default setup","text":"

By default, HTTP_PROXY_MODE is set to default (see), thus Claudie utilizes the HTTP proxy in building the K8s cluster only when there is at least one node from the Hetzner cloud provider. This means that if you have a cluster with one master node in Azure and one worker node in AWS, Claudie won't use the HTTP proxy to build the K8s cluster. However, if you add another worker node from Hetzner, the whole process of building the K8s cluster will utilize the HTTP proxy.

This approach was implemented to address the following issues:

"},{"location":"http-proxy/http-proxy/#further-customization","title":"Further customization","text":"

In case you don't want to utilize the HTTP proxy at all (even when there are nodes in the K8s cluster from the Hetzner cloud provider) you can turn off the HTTP proxy by setting the HTTP_PROXY_MODE to off (see). On the other hand, if you wish to use the HTTP proxy whenever building a K8s cluster (even when there aren't any nodes in the K8s cluster from the Hetzner cloud provider) you can set the HTTP_PROXY_MODE to on (see).

If you want to utilize your own HTTP proxy you can set its URL in HTTP_PROXY_URL (see). By default, this value is set to http://proxy.claudie.io:8880. In case your HTTP proxy runs on myproxy.com and is exposed on port 3128 the HTTP_PROXY_URL has to be set to http://myproxy.com:3128. This means you always have to specify the whole URL with the protocol (HTTP), domain name, and port.
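
As a sketch, the corresponding entries in the claudie/.env file would then look roughly like this (assuming the usual KEY=value format of that file and the example proxy above):

HTTP_PROXY_MODE=on\nHTTP_PROXY_URL=http://myproxy.com:3128\n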

"},{"location":"input-manifest/api-reference/","title":"InputManifest API reference","text":"

InputManifest is a definition of the user's infrastructure. It contains cloud provider specification, nodepool specification, Kubernetes and loadbalancer clusters.

"},{"location":"input-manifest/api-reference/#status","title":"Status","text":"

Most recently observed status of the InputManifest

"},{"location":"input-manifest/api-reference/#spec","title":"Spec","text":"

Specification of the desired behavior of the InputManifest

Providers is a list of defined cloud provider configurations that will be used in infrastructure provisioning.

Describes nodepools used for either Kubernetes clusters or loadbalancer clusters defined in this manifest.

List of Kubernetes clusters this manifest will manage.

List of loadbalancer clusters the Kubernetes clusters may use.

"},{"location":"input-manifest/api-reference/#providers","title":"Providers","text":"

Contains configurations for supported cloud providers. At least one provider needs to be defined.

The name of the provider specification. The name is limited to 15 characters. It has to be unique across all providers.

Type of a provider. The providerType defines mandatory fields that have to be included for a specific provider. A list of available providers can be found in the providers section. Allowed values are:

Value Description aws AWS provider type azure Azure provider type cloudflare Cloudflare provider type gcp GCP provider type hetzner Hetzner provider type hetznerdns Hetzner DNS provider type oci OCI provider type genesiscloud GenesisCloud provider type

Represents a Secret Reference. It has enough information to retrieve a secret in any namespace.

Support for more cloud providers is in the roadmap.

For static nodepools a provider is not needed, refer to the static section for more detailed information.

"},{"location":"input-manifest/api-reference/#secretref","title":"SecretRef","text":"

SecretReference represents a Kubernetes Secret Reference. It has enough information to retrieve a secret in any namespace.

Name of the secret, which holds data for the particular cloud provider instance.

Namespace of the secret which holds data for the particular cloud provider instance.

"},{"location":"input-manifest/api-reference/#cloudflare","title":"Cloudflare","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the Cloudflare provider. To find out how to configure Cloudflare follow the instructions here

Credentials for the provider (API token).

"},{"location":"input-manifest/api-reference/#hetznerdns","title":"HetznerDNS","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the HetznerDNS provider. To find out how to configure HetznerDNS follow the instructions here

Credentials for the provider (API token).

"},{"location":"input-manifest/api-reference/#gcp","title":"GCP","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the GCP provider. To find out how to configure GCP provider and service account, follow the instructions here.

Credentials for the provider. Stringified JSON service account key.

Project id of an already existing GCP project where the infrastructure is to be created.

"},{"location":"input-manifest/api-reference/#genesiscloud","title":"GenesisCloud","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the Genesis Cloud provider. To find out how to configure Genesis Cloud provider, follow the instructions here.

API token for the provider.

"},{"location":"input-manifest/api-reference/#hetzner","title":"Hetzner","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the Hetzner provider. To find out how to configure Hetzner provider and service account, follow the instructions here.

Credentials for the provider (API token).

"},{"location":"input-manifest/api-reference/#oci","title":"OCI","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the OCI provider. To find out how to configure OCI provider and service account, follow the instructions here.

Private key used to authenticate to the OCI.

Fingerprint of the user-supplied private key.

OCID of the tenancy where privateKey is added as an API key

OCID of the user in the supplied tenancy

OCID of the compartment where VMs/VCNs/... will be created

"},{"location":"input-manifest/api-reference/#aws","title":"AWS","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the AWS provider. To find out how to configure AWS provider and service account, follow the instructions here.

Access key ID for your AWS account.

Secret key for the Access key specified above.

"},{"location":"input-manifest/api-reference/#azure","title":"Azure","text":"

The fields that need to be included in a Kubernetes Secret resource to utilize the Azure provider. To find out how to configure Azure provider and service account, follow the instructions here.

Subscription ID of your subscription in Azure.

Tenant ID of your tenancy in Azure.

Client ID of your client. Claudie is designed to use a service principal with appropriate permissions.

Client secret generated for your client.

"},{"location":"input-manifest/api-reference/#nodepools","title":"Nodepools","text":"

Collection of static and dynamic nodepool specification, to be referenced in the kubernetes or loadBalancer clusters.

List of dynamically to-be-created nodepools of not yet existing machines, used for Kubernetes or loadbalancer clusters.

These are only blueprints, and will only be created per reference in kubernetes or loadBalancer clusters. E.g. if the nodepool isn't used, it won't even be created. Or if the same nodepool is used in two different clusters, it will be created twice. In OOP analogy, a dynamic nodepool would be a class that would get instantiated N >= 0 times depending on which clusters reference it.

List of static nodepools of already existing machines, not provisioned by Claudie, used for Kubernetes (see requirements) or loadbalancer clusters. These can be baremetal servers or VMs with IPs assigned. Claudie is able to join them into existing clusters, or provision clusters solely on the static nodepools. Typically we'll find these being used in on-premises scenarios, or hybrid-cloud clusters.

"},{"location":"input-manifest/api-reference/#dynamic","title":"Dynamic","text":"

Dynamic nodepools are defined for cloud provider machines that Claudie is expected to provision.

Name of the nodepool. The name is limited to 14 characters. Each nodepool will have a random hash appended to the name, so the whole name will be of the format <name>-<hash>.

Collection of provider data to be used while creating the nodepool.

Number of the nodes in the nodepool. Maximum value of 255. Mutually exclusive with autoscaler.

Type of the machines in the nodepool.

Currently, only AMD64 machines are supported.

Further describes the selected server type, if available by the cloud provider.

OS image of the machine.

Currently, only Ubuntu 22.04 AMD64 images are supported.

The size of the storage disk on the nodes in the node pool is specified in GB. The OS disk is created automatically with a predefined size of 100GB for Kubernetes nodes and 50GB for LoadBalancer nodes.

This field is optional; however, if a compute node pool does not define it, the default value will be used for the creation of the storage disk. Control node pools and LoadBalancer node pools ignore this field.

The default value for this field is 50, with a minimum value also set to 50. This value is only applicable to compute nodes. If the disk size is set to 0, no storage disk will be created for any nodes in the particular node pool.

Autoscaler configuration for this nodepool. Mutually exclusive with count.

Map of user defined labels, which will be applied on every node in the node pool. This field is optional.

To see the default labels Claudie applies on each node, refer to this section.

Map of user defined annotations, which will be applied on every node in the node pool. This field is optional.

You can use Kubernetes annotations to attach arbitrary non-identifying metadata. Clients such as tools and libraries can retrieve this metadata.

Array of user defined taints, which will be applied on every node in the node pool. This field is optional.

To see the default taints Claudie applies on each node, refer to this section.

"},{"location":"input-manifest/api-reference/#provider-spec","title":"Provider Spec","text":"

Provider spec is an additional specification built on top of the data from any of the provider instances. Here are provider configuration examples for each individual provider: aws, azure, gcp, cloudflare, hetzner and oci.

Name of the provider instance specified in providers

Region of the nodepool.

Zone of the nodepool.

"},{"location":"input-manifest/api-reference/#autoscaler-configuration","title":"Autoscaler Configuration","text":"

Autoscaler configuration on a per-nodepool basis. Defines the bounds between which the autoscaler will scale the specific nodepool up or down.

Minimum number of nodes in nodepool.

Maximum number of nodes in nodepool.

"},{"location":"input-manifest/api-reference/#static","title":"Static","text":"

Static nodepools are defined for static machines which Claudie will not manage. Used for on-premises nodes.

In case you want to use your static nodes in the Kubernetes cluster, make sure they meet the requirements.

Name of the static nodepool. The name is limited to 14 characters.

List of static nodes for a particular static nodepool.

Map of user defined labels, which will be applied on every node in the node pool. This field is optional.

To see the default labels Claudie applies on each node, refer to this section.

Map of user defined annotations, which will be applied on every node in the node pool. This field is optional.

You can use Kubernetes annotations to attach arbitrary non-identifying metadata. Clients such as tools and libraries can retrieve this metadata.

Array of user defined taints, which will be applied on every node in the node pool. This field is optional.

To see the default taints Claudie applies on each node, refer to this section.

"},{"location":"input-manifest/api-reference/#static-node","title":"Static node","text":"

Static node defines a single static node from a static nodepool.

Endpoint under which Claudie will access this node.

Name of a user with root privileges; it will be used to SSH into this node and install dependencies. This attribute is optional. In case it isn't specified, the root username is used.

Secret from which the private key used to SSH into the machine will be taken (as root or as the user specified in the username attribute).

The field in the secret must be privatekey, i.e.

apiVersion: v1\nkind: Secret\ntype: Opaque\nmetadata:\n  name: private-key-node-1\n  namespace: claudie-secrets\ndata:\n  privatekey: <base64 encoded private key>\n
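
Such a secret can also be created directly from a key file; a minimal sketch matching the names above (the key file path is illustrative):

kubectl create secret generic private-key-node-1 --namespace claudie-secrets --from-file=privatekey=./id_ed25519\n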
"},{"location":"input-manifest/api-reference/#kubernetes","title":"Kubernetes","text":"

Defines Kubernetes clusters.

List of Kubernetes clusters Claudie will create.

"},{"location":"input-manifest/api-reference/#cluster-k8s","title":"Cluster-k8s","text":"

Collection of data used to define a Kubernetes cluster.

Name of the Kubernetes cluster. The name is limited to 28 characters. Each cluster will have a random hash appended to the name, so the whole name will be of the format <name>-<hash>.

Kubernetes version of the cluster.

Version should be defined in format vX.Y. In terms of supported versions of Kubernetes, Claudie follows kubeone releases and their supported versions. The current kubeone version used in Claudie is 1.8. To see the list of supported versions, please refer to kubeone documentation.

Network range for the VPN of the cluster. The value should be defined in format A.B.C.D/mask.

List of nodepool names this cluster will use. Remember that nodepools defined in nodepools are only \"blueprints\". The actual nodepool will be created once referenced here.

"},{"location":"input-manifest/api-reference/#loadbalancer","title":"LoadBalancer","text":"

Defines loadbalancer clusters.

List of roles loadbalancers use to forward the traffic. A single role can be used in multiple loadbalancer clusters.

List of loadbalancer clusters used in the Kubernetes clusters defined under clusters.

"},{"location":"input-manifest/api-reference/#role","title":"Role","text":"

Role defines a concrete loadbalancer configuration. A single loadbalancer can have multiple roles.

Name of the role. Used as a reference in clusters.

Protocol of the rule. Allowed values are:

Value Description tcp Role will use TCP protocol udp Role will use UDP protocol

Port of the incoming traffic on the loadbalancer.

Port where loadbalancer forwards the traffic.

"},{"location":"input-manifest/api-reference/#cluster-lb","title":"Cluster-lb","text":"

Collection of data used to define a loadbalancer cluster.

Name of the loadbalancer. The name is limited to 28 characters.

List of roles the loadbalancer uses.

Specification of the loadbalancer's DNS record.

Name of the Kubernetes cluster targeted by this loadbalancer.

List of nodepool names this loadbalancer will use. Remember that nodepools defined in nodepools are only \"blueprints\". The actual nodepool will be created once referenced here.

"},{"location":"input-manifest/api-reference/#dns","title":"DNS","text":"

Collection of data Claudie uses to create a DNS record for the loadbalancer.

DNS zone inside which the records will be created. GCP/AWS/OCI/Azure/Cloudflare/Hetzner DNS zone is accepted.

The record created in this zone must be accessible to the public. Therefore, a public DNS zone is required.

Name of the provider to be used for creating an A record entry in the defined DNS zone.

Custom hostname for your A record. If left empty, the hostname will be a random hash.

"},{"location":"input-manifest/api-reference/#default-labels","title":"Default labels","text":"

By default, Claudie applies the following labels on every node in the cluster, together with those defined by the user.

Key Value claudie.io/nodepool Name of the node pool. claudie.io/provider Cloud provider name. claudie.io/provider-instance User defined provider name. claudie.io/node-type Type of the node. Either control or compute. topology.kubernetes.io/region Region where the node resides. topology.kubernetes.io/zone Zone of the region where node resides. kubernetes.io/os Os family of the node. kubernetes.io/arch Architecture type of the CPU. v1.kubeone.io/operating-system Os type of the node."},{"location":"input-manifest/api-reference/#default-taints","title":"Default taints","text":"

By default, Claudie applies only the node-role.kubernetes.io/control-plane taint on control plane nodes, with the NoSchedule effect, together with those defined by the user.

"},{"location":"input-manifest/example/","title":"Example yaml file","text":"example.yaml
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: ExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  # Providers field is used for defining the providers. \n  # It is referencing a secret resource in Kubernetes cluster.\n  # Each provider haves its own mandatory fields that are defined in the secret resource.\n  # Every supported provider has an example in this input manifest.\n  # providers:\n  #   - name: \n  #       providerType:   # Type of the provider secret [aws|azure|gcp|oci|hetzner|hetznerdns|cloudflare]. \n  #       templates:      # external templates used to build the infrastructure by that given provider. If omitted default templates will be used.\n  #         repository:   # publicly available git repository where the templates can be acquired\n  #         tag:          # optional tag. If set is used to checkout to a specific hash commit of the git repository.\n  #         path:         # path where the templates for the specific provider can be found.\n  #       secretRef:      # Secret reference specification.\n  #         name:         # Name of the secret resource.\n  #         namespace:    # Namespace of the secret resource.\n  providers:\n    # Hetzner DNS provider.\n    - name: hetznerdns-1\n      providerType: hetznerdns\n      templates:\n        repository: \"https://github.com/berops/claudie-config\"\n        path: \"templates/terraformer/hetznerdns\"\n      secretRef:\n        name: hetznerdns-secret-1\n        namespace: example-namespace\n\n    # Cloudflare DNS provider.\n    - name: cloudflare-1\n      providerType: cloudflare\n      # templates: ... using default templates\n      secretRef:\n        name: cloudflare-secret-1\n        namespace: example-namespace\n\n    # Hetzner Cloud provider.\n    - name: hetzner-1\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-1\n        namespace: example-namespace\n\n    # GCP cloud provider.\n    - name: gcp-1\n      providerType: gcp\n      secretRef:\n        name: gcp-secret-1\n        namespace: example-namespace\n\n    # OCI cloud provider.\n    - name: oci-1\n      providerType: oci\n      secretRef:\n        name: oci-secret-1\n        namespace: example-namespace\n\n    # AWS cloud provider.\n    - name: aws-1\n      providerType: aws\n      secretRef:\n        name: aws-secret-1\n        namespace: example-namespace\n\n    # Azure cloud provider.\n    - name: azure-1\n      providerType: azure\n      secretRef:\n        name: azure-secret-1\n        namespace: example-namespace\n\n\n  # Nodepools field is used for defining the nodepool specification.\n  # You can think of them as a blueprints, not actual nodepools that will be created.\n  nodePools:\n    # Dynamic nodepools are created by Claudie, in one of the cloud providers specified.\n    # Definition specification:\n    # dynamic:\n    #   - name:             # Name of the nodepool, which is used as a reference to it. 
Needs to be unique.\n    #     providerSpec:     # Provider specification for this nodepool.\n    #       name:           # Name of the provider instance, referencing one of the providers define above.\n    #       region:         # Region of the nodepool.\n    #       zone:           # Zone of the nodepool.\n    #     count:            # Static number of nodes in this nodepool.\n    #     serverType:       # Machine type of the nodes in this nodepool.\n    #     image:            # OS image of the nodes in the nodepool.\n    #     storageDiskSize:  # Disk size of the storage disk for compute nodepool. (optional)\n    #     autoscaler:       # Autoscaler configuration. Mutually exclusive with Count.\n    #       min:            # Minimum number of nodes in nodepool.\n    #       max:            # Maximum number of nodes in nodepool.\n    #     labels:           # Map of custom user defined labels for this nodepool. This field is optional and is ignored if used in Loadbalancer cluster. (optional)\n    #     annotations:      # Map of user defined annotations, which will be applied on every node in the node pool. (optional)\n    #     taints:           # Array of custom user defined taints for this nodepool. This field is optional and is ignored if used in Loadbalancer cluster. (optional)\n    #       - key:          # The taint key to be applied to a node.\n    #         value:        # The taint value corresponding to the taint key.\n    #         effect:       # The effect of the taint on pods that do not tolerate the taint.\n    #\n    # Example definitions for each provider\n    dynamic:\n      - name: control-htz\n        providerSpec:\n          name: hetzner-1\n          region: hel1\n          zone: hel1-dc2\n        count: 3\n        serverType: cpx11\n        image: ubuntu-22.04\n        labels:\n          country: finland\n          city: helsinki\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"finland\"]'\n        taints:\n          - key: country\n            value: finland\n            effect: NoSchedule\n\n      - name: compute-htz\n        providerSpec:\n          name: hetzner-1\n          region: hel1\n          zone: hel1-dc2\n        count: 2\n        serverType: cpx11\n        image: ubuntu-22.04\n        storageDiskSize: 50\n        labels:\n          country: finland\n          city: helsinki\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"finland\"]'\n\n      - name: htz-autoscaled\n        providerSpec:\n          name: hetzner-1\n          region: hel1\n          zone: hel1-dc2\n        serverType: cpx11\n        image: ubuntu-22.04\n        storageDiskSize: 50\n        autoscaler:\n          min: 1\n          max: 5\n        labels:\n          country: finland\n          city: helsinki\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"finland\"]'\n\n      - name: control-gcp\n        providerSpec:\n          name: gcp-1\n          region: europe-west1\n          zone: europe-west1-c\n        count: 3\n        serverType: e2-medium\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n        labels:\n          country: germany\n          city: frankfurt\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"germany\"]'\n\n      - name: compute-gcp\n        providerSpec:\n          name: gcp-1\n          region: europe-west1\n          zone: europe-west1-c\n        count: 2\n        serverType: e2-small\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n      
  storageDiskSize: 50\n        labels:\n          country: germany\n          city: frankfurt\n        taints:\n          - key: city\n            value: frankfurt\n            effect: NoExecute\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"germany\"]'\n\n      - name: control-oci\n        providerSpec:\n          name: oci-1\n          region: eu-milan-1\n          zone: hsVQ:EU-MILAN-1-AD-1\n        count: 3\n        serverType: VM.Standard2.1\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n\n      - name: compute-oci\n        providerSpec:\n          name: oci-1\n          region: eu-milan-1\n          zone: hsVQ:EU-MILAN-1-AD-1\n        count: 2\n        serverType: VM.Standard2.1\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n      - name: control-aws\n        providerSpec:\n          name: aws-1\n          region: eu-central-1\n          zone: eu-central-1c\n        count: 2\n        serverType: t3.medium\n        image: ami-0965bd5ba4d59211c\n\n      - name: compute-aws\n        providerSpec:\n          name: aws-1\n          region: eu-central-1\n          zone: eu-central-1c\n        count: 2\n        serverType: t3.medium\n        image: ami-0965bd5ba4d59211c\n        storageDiskSize: 50\n\n      - name: control-azure\n        providerSpec:\n          name: azure-1\n          region: West Europe\n          zone: \"1\"\n        count: 2\n        serverType: Standard_B2s\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n\n      - name: compute-azure\n        providerSpec:\n          name: azure-1\n          region: West Europe\n          zone: \"1\"\n        count: 2\n        serverType: Standard_B2s\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n        storageDiskSize: 50\n\n      - name: loadbalancer-1\n        provider:\n        providerSpec:\n          name: gcp-1\n          region: europe-west1\n          zone: europe-west1-c\n        count: 2\n        serverType: e2-small\n        image: ubuntu-os-cloud/ubuntu-2004-focal-v20220610\n\n      - name: loadbalancer-2\n        providerSpec:\n          name: hetzner-1\n          region: hel1\n          zone: hel1-dc2\n        count: 2\n        serverType: cpx11\n        image: ubuntu-20.04\n\n    # Static nodepools are created by user beforehand.\n    # In case you want to use them in the Kubernetes cluster, make sure they meet the requirements. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin\n    # Definition specification:\n    # static:\n    #   - name:             # Name of the nodepool, which is used as a reference to it. Needs to be unique.\n    #     nodes:            # List of nodes which will be access under this nodepool.\n    #       - endpoint:     # IP under which Claudie will access this node. Can be private as long as Claudie will be able to access it.\n    #         username:     # Username of a user with root privileges (optional). 
If not specified user with name \"root\" will be used\n    #         secretRef:    # Secret reference specification, holding private key which will be used to SSH into the node (as root or as a user specificed in the username attribute).\n    #           name:       # Name of the secret resource.\n    #           namespace:  # Namespace of the secret resource.\n    #     labels:           # Map of custom user defined labels for this nodepool. This field is optional and is ignored if used in Loadbalancer cluster. (optional)\n    #     annotations:      # Map of user defined annotations, which will be applied on every node in the node pool. (optional)\n    #     taints:           # Array of custom user defined taints for this nodepool. This field is optional and is ignored if used in Loadbalancer cluster. (optional)\n    #       - key:          # The taint key to be applied to a node.\n    #         value:        # The taint value corresponding to the taint key.\n    #         effect:       # The effect of the taint on pods that do not tolerate the taint.\n    #\n    # Example definitions\n    static:\n      - name: datacenter-1\n        nodes:\n          - endpoint: \"192.168.10.1\"\n            secretRef:\n              name: datacenter-1-key\n              namespace: example-namespace\n\n          - endpoint: \"192.168.10.2\"\n            secretRef:\n              name: datacenter-1-key\n              namespace: example-namespace\n\n          - endpoint: \"192.168.10.3\"\n            username: admin\n            secretRef:\n              name: datacenter-1-key\n              namespace: example-namespace\n        labels:\n          datacenter: datacenter-1\n        annotations:\n          node.longhorn.io/default-node-tags: '[\"datacenter-1\"]'   \n        taints:\n          - key: datacenter\n            effect: NoExecute\n\n\n  # Kubernetes field is used to define the kubernetes clusters.\n  # Definition specification:\n  #\n  # clusters:\n  #   - name:           # Name of the cluster. The name will be appended to the created node name.\n  #     version:        # Kubernetes version in semver scheme, must be supported by KubeOne.\n  #     network:        # Private network IP range.\n  #     pools:          # Nodepool names which cluster will be composed of. 
User can reuse same nodepool specification on multiple clusters.\n  #       control:      # List of nodepool names, which will be used as control nodes.\n  #       compute:      # List of nodepool names, which will be used as compute nodes.\n  #\n  # Example definitions:\n  kubernetes:\n    clusters:\n      - name: dev-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz\n            - control-gcp\n          compute:\n            - compute-htz\n            - compute-gcp\n            - compute-azure\n            - htz-autoscaled\n\n      - name: prod-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz\n            - control-gcp\n            - control-oci\n            - control-aws\n            - control-azure\n          compute:\n            - compute-htz\n            - compute-gcp\n            - compute-oci\n            - compute-aws\n            - compute-azure\n\n      - name: hybrid-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - datacenter-1\n          compute:\n            - compute-htz\n            - compute-gcp\n            - compute-azure\n\n  # Loadbalancers field defines loadbalancers used for the kubernetes clusters and roles for the loadbalancers.\n  # Definition specification for role:\n  #\n  # roles:\n  #   - name:         # Name of the role, used as a reference later. Must be unique.\n  #     protocol:     # Protocol, this role will use.\n  #     port:         # Port, where traffic will be coming.\n  #     targetPort:   # Port, where loadbalancer will forward traffic to.\n  #     targetPools:  # Targeted nodes on kubernetes cluster. Specify a nodepool that is used in the targeted K8s cluster.\n  #\n  # Definition specification for loadbalancer:\n  #\n  # clusters:\n  #   - name:         # Loadbalancer cluster name\n  #     roles:        # List of role names this loadbalancer will fulfil.\n  #     dns:          # DNS specification, where DNS records will be created.\n  #       dnsZone:    # DNS zone name in your provider.\n  #       provider:   # Provider name for the DNS.\n  #       hostname:   # Hostname for the DNS record. Keep in mind the zone will be included automatically. If left empty the Claudie will create random hash as a hostname.\n  #     targetedK8s:  # Name of the targeted kubernetes cluster\n  #     pools:        # List of nodepool names used for loadbalancer\n  #\n  # Example definitions:\n  loadBalancers:\n    roles:\n      - name: apiserver\n        protocol: tcp\n        port: 6443\n        targetPort: 6443\n        targetPools:\n            - control-htz # make sure that this nodepools is acutally used by the targeted `dev-cluster` cluster.\n    clusters:\n      - name: apiserver-lb-dev\n        roles:\n          - apiserver\n        dns:\n          dnsZone: dns-zone\n          provider: hetznerdns-1\n        targetedK8s: dev-cluster\n        pools:\n          - loadbalancer-1\n      - name: apiserver-lb-prod\n        roles:\n          - apiserver\n        dns:\n          dnsZone: dns-zone\n          provider: cloudflare-1\n          hostname: my.fancy.url\n        targetedK8s: prod-cluster\n        pools:\n          - loadbalancer-2\n
"},{"location":"input-manifest/external-templates/","title":"External Templates","text":"

Claudie allows you to plug in your own templates for spawning the infrastructure. The templates to be used are specified at the provider level in the Input Manifest, for example:

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: genesis-example\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: genesiscloud\n      providerType: genesiscloud\n      templates:\n        repository: \"https://github.com/berops/claudie-config\"\n        tag: \"v0.9.0\" # optional\n        path: \"templates/terraformer/genesiscloud\"\n      secretRef:\n        name: genesiscloud-secret\n        namespace: secrets\n...\n

The template repository needs to follow a certain convention to work properly. For example, consider an external template repository accessible via a public git repository at:

https://github.com/berops/claudie-config\n

The repository can contain just the necessary template files at its root, or they can be stored in a subtree. To handle the latter case, you pass a path within the public git repository, such as

templates/terraformer/gcp\n

This denotes that the necessary templates for Google Cloud Platform can be found in the subtree at:

claudie-config/templates/terraformer/gcp\n

To deal only with the necessary template files, a sparse checkout is used when downloading the external repository, so that a local mirror is present which is then used to generate the Terraform files. When the template files are used for generation, the subtree from the example above, claudie-config/templates/terraformer/gcp, is traversed and the following rules apply:

The complete structure of a subtree for a single provider's external templates, located at claudie-config/templates/terraformer/gcp, can look as follows:

\u2514\u2500\u2500 terraformer\n    |\u2500\u2500 gcp\n    \u2502   \u251c\u2500\u2500 dns\n    \u2502       \u2514\u2500\u2500 dns.tpl\n    \u2502   \u251c\u2500\u2500 networking\n    \u2502       \u2514\u2500\u2500 networking.tpl\n    \u2502   \u251c\u2500\u2500 nodepool\n    \u2502       \u251c\u2500\u2500 node.tpl\n    \u2502       \u2514\u2500\u2500 node_networking.tpl\n    \u2502   \u2514\u2500\u2500 provider\n    \u2502       \u2514\u2500\u2500 provider.tpl\n    ...\n
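For illustration, the sparse checkout that Claudie performs internally can be reproduced manually with git to inspect such a subtree locally (the exact flags Claudie uses may differ):

git clone --depth 1 --filter=blob:none --sparse https://github.com/berops/claudie-config\ncd claudie-config\ngit sparse-checkout set templates/terraformer/gcp\n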

Examples of external templates can be found at https://github.com/berops/claudie-config.

"},{"location":"input-manifest/external-templates/#rolling-update","title":"Rolling update","text":"

To handle more specific scenarios where the default templates provided by Claudie do not fit the use case, these external templates can be changed or adapted by the user.

Because users can specify which templates are used when building the InputManifest infrastructure, there is one common scenario that Claudie has to handle: rolling updates.

Rolling updates of nodepools are performed when a change to a provider's external templates is registered. Claudie checks that the repository with the new templates exists and uses it to perform a rolling update of the infrastructure that has already been built. In the example below, when the templates of the hetzner-1 provider are changed, a rolling update of all nodepools referencing that provider starts, updating one nodepool at a time.

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: HetznerExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: hetzner-1\n      providerType: hetzner\n      templates:\n-       repository: \"https://github.com/berops/claudie-config\"\n-       path: \"templates/terraformer/hetzner\"\n+       repository: \"https://github.com/YouRepository/claudie-config\"\n+       path: \"templates/terraformer/hetzner\"\n      secretRef:\n        name: hetzner-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: hel1\n          # Datacenter of the nodepool.\n          zone: hel1-dc2\n        count: 1\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n\n      - name: compute-1-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: fsn1\n          # Datacenter of the nodepool.\n          zone: fsn1-dc14\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n      - name: compute-2-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: nbg1\n          # Datacenter of the nodepool.\n          zone: nbg1-dc3\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: hetzner-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz\n          compute:\n            - compute-1-htz\n            - compute-2-htz\n
"},{"location":"input-manifest/gpu-example/","title":"GPUs example","text":"

We will follow the guide from Nvidia to deploy the gpu-operator into a Claudie-built Kubernetes cluster. If you decide to use a different cloud provider, make sure you fulfill the requirements listed in the prerequisites before continuing.

In this example we will be using GenesisCloud as our provider, with the following config:

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: genesis-example\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: genesiscloud\n      providerType: genesiscloud\n      secretRef:\n        name: genesiscloud-secret\n        namespace: secrets\n\n  nodePools:\n    dynamic:\n    - name: gencloud-cpu\n      providerSpec:\n        name: genesiscloud\n        region: ARC-IS-HAF-1\n      count: 1\n      serverType: vcpu-2_memory-4g_disk-80g\n      image: \"Ubuntu 22.04\"\n      storageDiskSize: 50\n\n    - name: gencloud-gpu\n      providerSpec:\n        name: genesiscloud\n        region: ARC-IS-HAF-1\n      count: 2\n      serverType: vcpu-4_memory-12g_disk-80g_nvidia3080-1\n      image: \"Ubuntu 22.04\"\n      storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: gpu-example\n        version: v1.27.0\n        network: 172.16.2.0/24\n        pools:\n          control:\n            - gencloud-cpu\n          compute:\n            - gencloud-gpu\n

After the InputManifest has been successfully built by Claudie, we deploy the gpu-operator to the gpu-example Kubernetes cluster.

  1. Create a namespace for the gpu-operator and label it to allow privileged pods.
kubectl create ns gpu-operator\n
kubectl label --overwrite ns gpu-operator pod-security.kubernetes.io/enforce=privileged\n
  2. Add the Nvidia Helm repository.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia \\\n    && helm repo update\n
  3. Install the operator.
helm install --wait --generate-name \\\n    -n gpu-operator --create-namespace \\\n    nvidia/gpu-operator\n
  4. Wait for the pods in the gpu-operator namespace to be ready.
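You can check their status with:
kubectl get pods -n gpu-operator\n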
NAME                                                              READY   STATUS      RESTARTS      AGE\ngpu-feature-discovery-4lrbz                                       1/1     Running     0              10m\ngpu-feature-discovery-5x88d                                       1/1     Running     0              10m\ngpu-operator-1708080094-node-feature-discovery-gc-84ff8f47tn7cd   1/1     Running     0              10m\ngpu-operator-1708080094-node-feature-discovery-master-757c27tm6   1/1     Running     0              10m\ngpu-operator-1708080094-node-feature-discovery-worker-495z2       1/1     Running     0              10m\ngpu-operator-1708080094-node-feature-discovery-worker-n8fl6       1/1     Running     0              10m\ngpu-operator-1708080094-node-feature-discovery-worker-znsk4       1/1     Running     0              10m\ngpu-operator-6dfb9bd487-2gxzr                                     1/1     Running     0              10m\nnvidia-container-toolkit-daemonset-jnqwn                          1/1     Running     0              10m\nnvidia-container-toolkit-daemonset-x9t56                          1/1     Running     0              10m\nnvidia-cuda-validator-l4w85                                       0/1     Completed   0              10m\nnvidia-cuda-validator-lqxhq                                       0/1     Completed   0              10m\nnvidia-dcgm-exporter-l9nzt                                        1/1     Running     0              10m\nnvidia-dcgm-exporter-q7c2x                                        1/1     Running     0              10m\nnvidia-device-plugin-daemonset-dbjjl                              1/1     Running     0              10m\nnvidia-device-plugin-daemonset-x5kfs                              1/1     Running     0              10m\nnvidia-driver-daemonset-dcq4g                                     1/1     Running     0              10m\nnvidia-driver-daemonset-sjjlb                                     1/1     Running     0              10m\nnvidia-operator-validator-jbc7r                                   1/1     Running     0              10m\nnvidia-operator-validator-q59mc                                   1/1     Running     0              10m\n

When all pods are ready, you should be able to verify that the GPUs can be used:

kubectl get nodes -o json | jq -r '.items[] | {name:.metadata.name, gpus:.status.capacity.\"nvidia.com/gpu\"}'\n
  5. Deploy an example manifest that uses one of the available GPUs from the worker nodes.
apiVersion: v1\nkind: Pod\nmetadata:\n  name: cuda-vectoradd\nspec:\n  restartPolicy: OnFailure\n  containers:\n    - name: cuda-vectoradd\n      image: \"nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda11.7.1-ubuntu20.04\"\n      resources:\n        limits:\n          nvidia.com/gpu: 1\n

From the logs of the pod, you should see:

kubectl logs cuda-vectoradd\n[Vector addition of 50000 elements]\nCopy input data from the host memory to the CUDA device\nCUDA kernel launch with 196 blocks of 256 threads\nCopy output data from the CUDA device to the host memory\nTest PASSED\nDone\n
"},{"location":"input-manifest/providers/aws/","title":"AWS","text":"

AWS cloud provider requires you to input the credentials as an accesskey and a secretkey.

"},{"location":"input-manifest/providers/aws/#compute-and-dns-example","title":"Compute and DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: aws-secret\ndata:\n  accesskey: U0xEVVRLU0hGRE1TSktESUFMQVNTRA==\n  secretkey: aXVoYk9JSk4rb2luL29saWtEU2Fkc25vaVNWU0RzYWNvaW5PVVNIRA==\ntype: Opaque\n
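The values under data are base64 encoded. For example, the accesskey value above can be generated from the plain-text example key as follows:

echo -n 'SLDUTKSHFDMSJKDIALASSD' | base64\n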
"},{"location":"input-manifest/providers/aws/#create-aws-credentials","title":"Create AWS credentials","text":""},{"location":"input-manifest/providers/aws/#prerequisites","title":"Prerequisites","text":"
  1. Install AWS CLI tools by following this guide.
  2. Setup AWS CLI on your machine by following this guide.
  3. Ensure that the regions you're planning to use are enabled in your AWS account. You can check the available regions using this guide, and you can enable them using this guide. Otherwise, you may encounter a misleading error suggesting your STS token is invalid.
"},{"location":"input-manifest/providers/aws/#creating-aws-credentials-for-claudie","title":"Creating AWS credentials for Claudie","text":"
  1. Create a user using AWS CLI:

    aws iam create-user --user-name claudie\n

  2. Create a policy document with compute and DNS permissions required by Claudie:

    cat > policy.json <<EOF\n{\n   \"Version\":\"2012-10-17\",\n   \"Statement\":[\n      {\n         \"Effect\":\"Allow\",\n         \"Action\":[\n            \"ec2:*\"\n         ],\n         \"Resource\":\"*\"\n      },\n      {\n         \"Effect\":\"Allow\",\n         \"Action\":[\n            \"route53:*\"\n         ],\n         \"Resource\":\"*\"\n      }\n   ]\n}\nEOF\n

    DNS permissions

    Exclude the route53 permissions from the policy document if you prefer not to use AWS as the DNS provider.

  3. Attach the policy to the claudie user:

    aws iam put-user-policy --user-name claudie --policy-name ec2-and-dns-access --policy-document file://policy.json\n

  4. Create access keys for claudie user:

    aws iam create-access-key --user-name claudie\n
    {\n   \"AccessKey\":{\n      \"UserName\":\"claudie\",\n      \"AccessKeyId\":\"AKIAIOSFODNN7EXAMPLE\",\n      \"Status\":\"Active\",\n      \"SecretAccessKey\":\"wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\",\n      \"CreateDate\":\"2018-12-14T17:34:16Z\"\n   }\n}\n

"},{"location":"input-manifest/providers/aws/#dns-setup","title":"DNS setup","text":"

If you wish to use AWS as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public hosted zone by following this guide.
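For example, a public hosted zone can also be created with the AWS CLI (example.com is a placeholder domain):

aws route53 create-hosted-zone --name example.com --caller-reference claudie-$(date +%s)\n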

AWS is not my domain registrar

If you haven't acquired a domain via AWS and wish to utilize AWS for hosting your zone, you can refer to this guide on AWS nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to AWS.

"},{"location":"input-manifest/providers/aws/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/aws/#create-a-secret-for-aws-provider","title":"Create a secret for AWS provider","text":"

The secret for an AWS provider must include the following mandatory fields: accesskey and secretkey.

kubectl create secret generic aws-secret-1 --namespace=mynamespace --from-literal=accesskey='SLDUTKSHFDMSJKDIALASSD' --from-literal=secretkey='iuhbOIJN+oin/olikDSadsnoiSVSDsacoinOUSHD'\n
"},{"location":"input-manifest/providers/aws/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":"
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: AWSExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n\n  providers:\n    - name: aws-1\n      providerType: aws\n      secretRef:\n        name: aws-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-aws\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-1\n          # Region of the nodepool.\n          region: eu-central-1\n          # Availability zone of the nodepool.\n          zone: eu-central-1a\n        count: 1\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-0965bd5ba4d59211c\n\n      - name: compute-1-aws\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-1\n          # Region of the nodepool.\n          region: eu-central-2\n          # Availability zone of the nodepool.\n          zone: eu-central-2a\n        count: 2\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-0e4d1886bf4bb88d5\n        storageDiskSize: 50\n\n      - name: compute-2-aws\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-1\n          # Region of the nodepool.\n          region: eu-central-2\n          # Availability zone of the nodepool.\n          zone: eu-central-2a\n        count: 2\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-0965bd5ba4d59211c\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: aws-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-aws\n          compute:\n            - compute-1-aws\n            - compute-2-aws\n
"},{"location":"input-manifest/providers/aws/#multi-provider-multi-region-clusters-example","title":"Multi provider, multi region clusters example","text":"
kubectl create secret generic aws-secret-1 --namespace=mynamespace --from-literal=accesskey='SLDUTKSHFDMSJKDIALASSD' --from-literal=secretkey='iuhbOIJN+oin/olikDSadsnoiSVSDsacoinOUSHD'\nkubectl create secret generic aws-secret-2 --namespace=mynamespace --from-literal=accesskey='ODURNGUISNFAIPUNUGFINB' --from-literal=secretkey='asduvnva+skd/ounUIBPIUjnpiuBNuNipubnPuip'\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: AWSExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n\n  providers:\n    - name: aws-1\n      providerType: aws\n      secretRef:\n        name: aws-secret-1\n        namespace: mynamespace\n    - name: aws-2\n      providerType: aws\n      secretRef:\n        name: aws-secret-2\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-aws-1\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-1\n          region: eu-central-1\n          # Availability zone of the nodepool.\n          zone: eu-central-1a\n        count: 1\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-0965bd5ba4d59211c\n\n      - name: control-aws-2\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-2\n          # Region of the nodepool.\n          region: eu-north-1\n          # Availability zone of the nodepool.\n          zone: eu-north-1a\n        count: 2\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-03df6dea56f8aa618\n\n      - name: compute-aws-1\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-1\n          # Region of the nodepool.\n          region: eu-central-2\n          # Availability zone of the nodepool.\n          zone: eu-central-2a\n        count: 2\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-0e4d1886bf4bb88d5\n        storageDiskSize: 50\n\n      - name: compute-aws-2\n        providerSpec:\n          # Name of the provider instance.\n          name: aws-2\n          # Region of the nodepool.\n          region: eu-north-3\n          # Availability zone of the nodepool.\n          zone: eu-north-3a\n        count: 2\n        # Instance type name.\n        serverType: t3.medium\n        # AMI ID of the image.\n        # Make sure to update it according to the region. \n        image: ami-03df6dea56f8aa618\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: aws-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-aws-1\n            - control-aws-2\n          compute:\n            - compute-aws-1\n            - compute-aws-2\n
"},{"location":"input-manifest/providers/azure/","title":"Azure","text":"

Azure provider requires you to input clientsecret, subscriptionid, tenantid, and clientid.

"},{"location":"input-manifest/providers/azure/#compute-and-dns-example","title":"Compute and DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: azure-secret\ndata:\n  clientid: QWJjZH5FRmd+SDZJamtsc35BQkMxNXNFRkdLNTRzNzhYfk9sazk=\n  # all resources you define will be charged here\n  clientsecret: NmE0ZGZzZzctc2Q0di1mNGFkLWRzdmEtYWQ0djYxNmZkNTEy\n  subscriptionid: NTRjZGFmYTUtc2R2cy00NWRzLTU0NnMtZGY2NTFzZmR0NjE0\n  tenantid: MDI1NXNjMjMtNzZ3ZS04N2c2LTk2NGYtYWJjMWRlZjJnaDNs\ntype: Opaque\n
"},{"location":"input-manifest/providers/azure/#create-azure-credentials","title":"Create Azure credentials","text":""},{"location":"input-manifest/providers/azure/#prerequisites","title":"Prerequisites","text":"
  1. Install Azure CLI by following this guide.
  2. Log in to Azure by following this guide.
"},{"location":"input-manifest/providers/azure/#creating-azure-credentials-for-claudie","title":"Creating Azure credentials for Claudie","text":"
  1. Log in to Azure with the following command:

    az login\n

  2. Create a permissions file for the new role that the Claudie service principal will use:

    cat > policy.json <<EOF\n{\n   \"Name\":\"Resource Group Management\",\n   \"Id\":\"bbcd72a7-2285-48ef-bn72-f606fba81fe7\",\n   \"IsCustom\":true,\n   \"Description\":\"Create and delete Resource Groups.\",\n   \"Actions\":[\n      \"Microsoft.Resources/subscriptions/resourceGroups/write\",\n      \"Microsoft.Resources/subscriptions/resourceGroups/delete\"\n   ],\n   \"AssignableScopes\":[\"/\"]\n}\nEOF\n

  3. Create a role based on the policy document:

    az role definition create --role-definition policy.json\n

  4. Create a service principal to access virtual machine resources as well as DNS:

    az ad sp create-for-rbac --name claudie-sp\n
    {\n  \"clientId\": \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\n  \"displayName\": \"claudie-sp\",\n  \"clientSecret\": \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\n  \"tenant\": \"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\"\n}\n

  5. Assign required roles for the service principal:

    {\n  az role assignment create --assignee claudie-sp --role \"Virtual Machine Contributor\"\n  az role assignment create --assignee claudie-sp --role \"Network Contributor\"\n  az role assignment create --assignee claudie-sp --role \"Resource Group Management\"\n}\n

"},{"location":"input-manifest/providers/azure/#dns-requirements","title":"DNS requirements","text":"

If you wish to use Azure as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public DNS zone by following this guide.
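For example, a public DNS zone can also be created with the Azure CLI (the resource group and domain are placeholders):

az network dns zone create --resource-group claudie-rg --name example.com\n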

Azure is not my domain registrar

If you haven't acquired a domain via Azure and wish to utilize Azure for hosting your zone, you can refer to this guide on Azure nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to Azure.

"},{"location":"input-manifest/providers/azure/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/azure/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":""},{"location":"input-manifest/providers/azure/#create-a-secret-for-azure-provider","title":"Create a secret for Azure provider","text":"

The secret for an Azure provider must include the following mandatory fields: clientsecret, subscriptionid, tenantid, and clientid.

kubectl create secret generic azure-secret-1 --namespace=mynamespace --from-literal=clientsecret='Abcd~EFg~H6Ijkls~ABC15sEFGK54s78X~Olk9' --from-literal=subscriptionid='6a4dfsg7-sd4v-f4ad-dsva-ad4v616fd512' --from-literal=tenantid='54cdafa5-sdvs-45ds-546s-df651sfdt614' --from-literal=clientid='0255sc23-76we-87g6-964f-abc1def2gh3l'\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: AzureExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: azure-1\n      providerType: azure\n      secretRef:\n        name: azure-secret-1\n        namespace: mynamespace\n  nodePools:\n    dynamic:\n      - name: control-az\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-1\n          # Location of the nodepool.\n          region: West Europe\n          # Zone of the nodepool.\n          zone: \"1\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n\n      - name: compute-1-az\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-1\n          # Location of the nodepool.\n          region: Germany West Central\n          # Zone of the nodepool.\n          zone: \"1\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n        storageDiskSize: 50\n\n      - name: compute-2-az\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-1\n          # Location of the nodepool.\n          region: West Europe\n          # Zone of the nodepool.\n          zone: \"1\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: azure-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-az\n          compute:\n            - compute-2-az\n            - compute-1-az\n
"},{"location":"input-manifest/providers/azure/#multi-provider-multi-region-clusters-example","title":"Multi provider, multi region clusters example","text":"
kubectl create secret generic azure-secret-1 --namespace=mynamespace --from-literal=clientsecret='Abcd~EFg~H6Ijkls~ABC15sEFGK54s78X~Olk9' --from-literal=subscriptionid='6a4dfsg7-sd4v-f4ad-dsva-ad4v616fd512' --from-literal=tenantid='54cdafa5-sdvs-45ds-546s-df651sfdt614' --from-literal=clientid='0255sc23-76we-87g6-964f-abc1def2gh3l'\n\nkubectl create secret generic azure-secret-2 --namespace=mynamespace --from-literal=clientsecret='Efgh~ijkL~on43noi~NiuscviBUIds78X~UkL7' --from-literal=subscriptionid='0965bd5b-usa3-as3c-ads1-csdaba6fd512' --from-literal=tenantid='55safa5d-dsfg-546s-45ds-d51251sfdaba' --from-literal=clientid='076wsc23-sdv2-09cA-8sd9-oigv23npn1p2'\n
name: AzureExampleManifest\napiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: AzureExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: azure-1\n      providerType: azure\n      secretRef:\n        name: azure-secret-1\n        namespace: mynamespace\n\n    - name: azure-2\n      providerType: azure\n      secretRef:\n        name: azure-secret-2\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-az-1\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-1\n          # Location of the nodepool.\n          region: West Europe\n          # Zone of the nodepool.\n          zone: \"1\"\n        count: 1\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n\n      - name: control-az-2\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-2\n          # Location of the nodepool.\n          region: Germany West Central\n          # Zone of the nodepool.\n          zone: \"2\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n\n      - name: compute-az-1\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-1\n          # Location of the nodepool.\n          region: Germany West Central\n          # Zone of the nodepool.\n          zone: \"2\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n        storageDiskSize: 50\n\n      - name: compute-az-2\n        providerSpec:\n          # Name of the provider instance.\n          name: azure-2\n          # Location of the nodepool.\n          region: West Europe\n          # Zone of the nodepool.\n          zone: \"3\"\n        count: 2\n        # VM size name.\n        serverType: Standard_B2s\n        # URN of the image.\n        image: Canonical:0001-com-ubuntu-minimal-jammy:minimal-22_04-lts:22.04.202212120\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: azure-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-az-1\n            - control-az-2\n          compute:\n            - compute-az-1\n            - compute-az-2\n
"},{"location":"input-manifest/providers/cloudflare/","title":"Cloudflare","text":"

The Cloudflare provider requires an apitoken field in string format.

"},{"location":"input-manifest/providers/cloudflare/#dns-example","title":"DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: cloudflare-secret\ndata:\n  apitoken: a3NsSVNBODc4YTZldFlBZlhZY2c1aVl5ckZHTmxDeGM=\ntype: Opaque\n
"},{"location":"input-manifest/providers/cloudflare/#create-cloudflare-credentials","title":"Create Cloudflare credentials","text":"

You can create a Cloudflare API token by following this guide. The required permissions for the zone you want to use are:

Zone:Read\nDNS:Read\nDNS:Edit\n
"},{"location":"input-manifest/providers/cloudflare/#dns-setup","title":"DNS setup","text":"

If you wish to use Cloudflare as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public DNS zone by following this guide.
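As a sketch, a zone can also be added through the Cloudflare API (the account ID and domain are placeholders and the token is the example value used below; adding the site through the dashboard works just as well):

curl -X POST 'https://api.cloudflare.com/client/v4/zones' -H 'Authorization: Bearer kslISA878a6etYAfXYcg5iYyrFGNlCxc' -H 'Content-Type: application/json' --data '{\"name\":\"example.com\",\"account\":{\"id\":\"YOUR_ACCOUNT_ID\"}}'\n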

Cloudflare is not my domain registrar

If you haven't acquired a domain via Cloudflare and wish to utilize Cloudflare for hosting your zone, you can refer to this guide on Cloudflare nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to Cloudflare.

"},{"location":"input-manifest/providers/cloudflare/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/cloudflare/#load-balancing-example","title":"Load balancing example","text":"

Showcase example

To make this example functional, you need to fill in the control and compute nodepools as well as the loadbalancer target pools. As written, this showcase will produce an error if used as is.

"},{"location":"input-manifest/providers/cloudflare/#create-a-secret-for-cloudflare-and-aws-providers","title":"Create a secret for Cloudflare and AWS providers","text":"

The secret for a Cloudflare provider must include the following mandatory field: apitoken.

kubectl create secret generic cloudflare-secret-1 --namespace=mynamespace --from-literal=apitoken='kslISA878a6etYAfXYcg5iYyrFGNlCxc'\n

The secret for an AWS provider must include the following mandatory fields: accesskey and secretkey.

kubectl create secret generic aws-secret-1 --namespace=mynamespace --from-literal=accesskey='SLDUTKSHFDMSJKDIALASSD' --from-literal=secretkey='iuhbOIJN+oin/olikDSadsnoiSVSDsacoinOUSHD'\n

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: CloudflareExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: cloudflare-1\n      providerType: cloudflare\n      secretRef:\n        name: cloudflare-secret-1\n        namespace: mynamespace\n\n    - name: aws-1\n      providerType: aws\n      secretRef:\n        name: aws-secret-1\n        namespace: mynamespace\n\n  nodePools: \n    dynamic:\n      - name: loadbalancer\n        providerSpec:\n          name: aws-1\n          region: eu-central-1\n          zone: eu-central-1c\n        count: 2\n        serverType: t3.medium\n        image: ami-0965bd5ba4d59211c\n\n  kubernetes:\n    clusters:\n      - name: cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control: []\n          compute: []\n\n  loadBalancers:\n    roles:\n      - name: apiserver\n        protocol: tcp\n        port: 6443\n        targetPort: 6443\n        targetPools: []\n    clusters:\n      - name: apiserver-lb-prod\n        roles:\n          - apiserver\n        dns:\n          dnsZone: dns-zone\n          provider: cloudflare-1\n          hostname: my.fancy.url\n        targetedK8s: prod-cluster\n        pools:\n          - loadbalancer\n
"},{"location":"input-manifest/providers/gcp/","title":"GCP","text":"

The GCP provider requires you to input multiline credentials as well as the GCP project ID gcpproject specifying where resources will be provisioned.

"},{"location":"input-manifest/providers/gcp/#compute-and-dns-example","title":"Compute and DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: gcp-secret\ndata:\n  credentials: >-\n    ewogICAgICAgICAidHlwZSI6InNlcnZpY2VfYWNjb3VudCIsCiAgICAgICAgICJwcm9qZWN0X2lkIjoicHJvamVjdC1jbGF1ZGllIiwKICAgICAgICAgInByaXZhdGVfa2V5X2lkIjoiYnNrZGxvODc1czkwODczOTQ3NjNlYjg0ZTQwNzkwM2xza2RpbXA0MzkiLAogICAgICAgICAicHJpdmF0ZV9rZXkiOiItLS0tLUJFR0lOIFBSSVZBVEUgS0VZLS0tLS1cblNLTE9vc0tKVVNEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkXG5NSUlFdlFJQkFEQU5CZ2txaGtpXG4tLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tXG4iLAogICAgICAgICAiY2xpZW50X2VtYWlsIjoiY2xhdWRpZUBwcm9qZWN0LWNsYXVkaWUtMTIzNDU2LmlhbS5nc2VydmljZWFjY291bnQuY29tIiwKICAgICAgICAgImNsaWVudF9pZCI6IjEwOTg3NjU0MzIxMTIzNDU2Nzg5MCIsCiAgICAgICAgICJhdXRoX3VyaSI6Imh0dHBzOi8vYWNjb3VudHMuZ29vZ2xlLmNvbS9vL29hdXRoMi9hdXRoIiwKICAgICAgICAgInRva2VuX3VyaSI6Imh0dHBzOi8vb2F1dGgyLmdvb2dsZWFwaXMuY29tL3Rva2VuIiwKICAgICAgICAgImF1dGhfcHJvdmlkZXJfeDUwOV9jZXJ0X3VybCI6Imh0dHBzOi8vd3d3Lmdvb2dsZWFwaXMuY29tL29hdXRoMi92MS9jZXJ0cyIsCiAgICAgICAgICJjbGllbnRfeDUwOV9jZXJ0X3VybCI6Imh0dHBzOi8vd3d3Lmdvb2dsZWFwaXMuY29tL3JvYm90L3YxL21ldGFkYXRhL3g1MDkvY2xhdWRpZSU0MGNsYXVkaWUtcHJvamVjdC0xMjM0NTYuaWFtLmdzZXJ2aWNlYWNjb3VudC5jb20iCiAgICAgIH0=\n  gcpproject: cHJvamVjdC1jbGF1ZGll # base64 created from GCP project ID\ntype: Opaque\n
"},{"location":"input-manifest/providers/gcp/#create-gcp-credentials","title":"Create GCP credentials","text":""},{"location":"input-manifest/providers/gcp/#prerequisites","title":"Prerequisites","text":"
  1. Install the gcloud CLI on your machine by following this guide.
  2. Initialize gcloud CLI by following this guide.
  3. Authorize the gcloud CLI by following this guide.
"},{"location":"input-manifest/providers/gcp/#creating-gcp-credentials-for-claudie","title":"Creating GCP credentials for Claudie","text":"
  1. Create a GCP project:

    gcloud projects create claudie-project\n

  2. Set the current project to claudie-project:

    gcloud config set project claudie-project\n

  3. Attach billing account to your project:

    gcloud alpha billing accounts projects link claudie-project (--account-id=ACCOUNT_ID | --billing-account=ACCOUNT_ID)\n

  4. Enable Compute Engine API and Cloud DNS API:

    {\n  gcloud services enable compute.googleapis.com\n  gcloud services enable dns.googleapis.com\n}\n

  5. Create a service account:

    gcloud iam service-accounts create claudie-sa\n

  6. Attach roles to the service account:

    {\n  gcloud projects add-iam-policy-binding claudie-project --member=serviceAccount:claudie-sa@claudie-project.iam.gserviceaccount.com --role=roles/compute.admin\n  gcloud projects add-iam-policy-binding claudie-project --member=serviceAccount:claudie-sa@claudie-project.iam.gserviceaccount.com --role=roles/dns.admin\n}\n

  7. Create service account keys for claudie-sa:

    gcloud iam service-accounts keys create claudie-credentials.json --iam-account=claudie-sa@claudie-project.iam.gserviceaccount.com\n

"},{"location":"input-manifest/providers/gcp/#dns-setup","title":"DNS setup","text":"

If you wish to use GCP as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public DNS zone by following this guide.
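For example, a public managed zone can also be created with the gcloud CLI (the zone name and domain are placeholders):

gcloud dns managed-zones create claudie-zone --dns-name='example.com.' --description='Claudie managed DNS zone'\n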

GCP is not my domain registrar

If you haven't acquired a domain via GCP and wish to utilize GCP for hosting your zone, you can refer to this guide on GCP nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to GCP.

"},{"location":"input-manifest/providers/gcp/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/gcp/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":""},{"location":"input-manifest/providers/gcp/#create-a-secret-for-cloudflare-and-gcp-providers","title":"Create a secret for Cloudflare and GCP providers","text":"

The secret for a GCP provider must include the following mandatory fields: gcpproject and credentials.

# The ./claudie-credentials.json file is the file created in #Creating GCP credentials for Claudie step 7.\nkubectl create secret generic gcp-secret-1 --namespace=mynamespace --from-literal=gcpproject='project-claudie' --from-file=credentials=./claudie-credentials.json\n

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: GCPExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: gcp-1\n      providerType: gcp\n      secretRef:\n        name: gcp-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-gcp\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-1\n          # Region of the nodepool.\n          region: europe-west1\n          # Zone of the nodepool.\n          zone: europe-west1-c\n        count: 1\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n\n      - name: compute-1-gcp\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-1\n          # Region of the nodepool.\n          region: europe-west3\n          # Zone of the nodepool.\n          zone: europe-west3-a\n        count: 2\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n        storageDiskSize: 50\n\n      - name: compute-2-gcp\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-1\n          # Region of the nodepool.\n          region: europe-west2\n          # Zone of the nodepool.\n          zone: europe-west2-a\n        count: 2\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: gcp-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-gcp\n          compute:\n            - compute-1-gcp\n            - compute-2-gcp\n
"},{"location":"input-manifest/providers/gcp/#multi-provider-multi-region-clusters-example","title":"Multi provider, multi region clusters example","text":""},{"location":"input-manifest/providers/gcp/#create-a-secret-for-cloudflare-and-gcp-providers_1","title":"Create a secret for Cloudflare and GCP providers","text":"

The secret for a GCP provider must include the following mandatory fields: gcpproject and credentials.

# The ./claudie-credentials.json file is the file created in #Creating GCP credentials for Claudie step 7.\nkubectl create secret generic gcp-secret-1 --namespace=mynamespace --from-literal=gcpproject='project-claudie' --from-file=credentials=./claudie-credentials.json\nkubectl create secret generic gcp-secret-2 --namespace=mynamespace --from-literal=gcpproject='project-claudie' --from-file=credentials=./claudie-credentials-2.json\n

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: GCPExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: gcp-1\n      providerType: gcp\n      secretRef:\n        name: gcp-secret-1\n        namespace: mynamespace\n    - name: gcp-2\n      providerType: gcp\n      secretRef:\n        name: gcp-secret-2\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-gcp-1\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-1\n          # Region of the nodepool.\n          region: europe-west1\n          # Zone of the nodepool.\n          zone: europe-west1-c\n        count: 1\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n\n      - name: control-gcp-2\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-2\n          # Region of the nodepool.\n          region: europe-west1\n          # Zone of the nodepool.\n          zone: europe-west1-a\n        count: 2\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n\n      - name: compute-gcp-1\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-1\n          # Region of the nodepool.\n          region: europe-west3\n          # Zone of the nodepool.\n          zone: europe-west3-a\n        count: 2\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n        storageDiskSize: 50\n\n      - name: compute-gcp-2\n        providerSpec:\n          # Name of the provider instance.\n          name: gcp-2\n          # Region of the nodepool.\n          region: europe-west1\n          # Zone of the nodepool.\n          zone: europe-west1-c\n        count: 2\n        # Machine type name.\n        serverType: e2-medium\n        # OS image name.\n        image: ubuntu-os-cloud/ubuntu-2204-jammy-v20221206\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: gcp-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-gcp-1\n            - control-gcp-2\n          compute:\n            - compute-gcp-1\n            - compute-gcp-2\n
"},{"location":"input-manifest/providers/genesiscloud/","title":"Genesis Cloud","text":"

The Genesis Cloud provider requires an apitoken field in string format.

"},{"location":"input-manifest/providers/genesiscloud/#compute-example","title":"Compute example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: genesiscloud-secret\nstringData:\n  apitoken: GCAAAZZZZnnnnNNNNxXXX123BBcc123qqcva\ntype: Opaque\n
"},{"location":"input-manifest/providers/genesiscloud/#create-genesis-cloud-api-token","title":"Create Genesis Cloud API token","text":"

You can create a Genesis Cloud API token by following this guide. The token must have access to the following compute resources:

Instances, Network, Volumes\n
"},{"location":"input-manifest/providers/genesiscloud/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/genesiscloud/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":""},{"location":"input-manifest/providers/genesiscloud/#create-a-secret-for-genesis-cloud-provider","title":"Create a secret for Genesis cloud provider","text":"
kubectl create secret generic genesiscloud-secret --namespace=mynamespace --from-literal=apitoken='GCAAAZZZZnnnnNNNNxXXX123BBcc123qqcva'\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: genesis-example\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: genesiscloud\n      providerType: genesiscloud\n      secretRef:\n        name: genesiscloud-secret\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control\n        providerSpec:\n          name: genesiscloud\n          region: ARC-IS-HAF-1\n        count: 1\n        serverType: vcpu-2_memory-4g_disk-80g\n        image: \"Ubuntu 22.04\"\n        storageDiskSize: 50\n\n      - name: compute\n        providerSpec:\n          name: genesiscloud\n          region: ARC-IS-HAF-1\n        count: 3\n        serverType: vcpu-2_memory-4g_disk-80g\n        image: \"Ubuntu 22.04\"\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: genesiscloud-cluster\n        version: v1.27.0\n        network: 172.16.2.0/24\n        pools:\n          control:\n            - control\n          compute:\n            - compute\n
"},{"location":"input-manifest/providers/hetzner/","title":"Hetzner","text":"

The Hetzner provider requires a credentials field in string format, and the Hetzner DNS provider requires an apitoken field in string format.

"},{"location":"input-manifest/providers/hetzner/#compute-example","title":"Compute example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: hetzner-secret\ndata:\n  credentials: a3NsSVNBODc4YTZldFlBZlhZY2c1aVl5ckZHTmxDeGNJQ28wNjBIVkV5Z2pGczIxbnNrZTc2a3NqS2tvMjFscA==\ntype: Opaque\n
"},{"location":"input-manifest/providers/hetzner/#dns-example","title":"DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: hetznerdns-secret\ndata:\n  apitoken: a1V0UmcxcGdqQ1JhYXBQbWQ3cEFJalZnaHVyWG8xY24=\ntype: Opaque\n
"},{"location":"input-manifest/providers/hetzner/#create-hetzner-api-credentials","title":"Create Hetzner API credentials","text":"

You can create Hetzner API credentials by following this guide. The required permissions for the project you want to use are:

Read & Write\n
"},{"location":"input-manifest/providers/hetzner/#create-hetzner-dns-credentials","title":"Create Hetzner DNS credentials","text":"

You can create Hetzner DNS credentials by following this guide.

DNS provider specification

The provider for DNS is different from the one for the Cloud.

"},{"location":"input-manifest/providers/hetzner/#dns-setup","title":"DNS setup","text":"

If you wish to use Hetzner as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public DNS zone by following this guide.

Hetzner is not my domain registrar

If you haven't acquired a domain via Hetzner and wish to utilize Hetzner for hosting your zone, you can refer to this guide on Hetzner nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to Hetzner.

"},{"location":"input-manifest/providers/hetzner/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/hetzner/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":""},{"location":"input-manifest/providers/hetzner/#create-a-secret-for-hetzner-provider","title":"Create a secret for Hetzner provider","text":"

The secret for a Hetzner provider must include the following mandatory field: credentials.

kubectl create secret generic hetzner-secret-1 --namespace=mynamespace --from-literal=credentials='kslISA878a6etYAfXYcg5iYyrFGNlCxcICo060HVEygjFs21nske76ksjKko21lp'\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: HetznerExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: hetzner-1\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: hel1\n          # Datacenter of the nodepool.\n          zone: hel1-dc2\n        count: 1\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n\n      - name: compute-1-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: fsn1\n          # Datacenter of the nodepool.\n          zone: fsn1-dc14\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n      - name: compute-2-htz\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: nbg1\n          # Datacenter of the nodepool.\n          zone: nbg1-dc3\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: hetzner-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz\n          compute:\n            - compute-1-htz\n            - compute-2-htz\n
"},{"location":"input-manifest/providers/hetzner/#multi-provider-multi-region-clusters-example","title":"Multi provider, multi region clusters example","text":""},{"location":"input-manifest/providers/hetzner/#create-a-secret-for-hetzner-provider_1","title":"Create a secret for Hetzner provider","text":"

The secret for a Hetzner provider must include the following mandatory field: credentials.

kubectl create secret generic hetzner-secret-1 --namespace=mynamespace --from-literal=credentials='kslISA878a6etYAfXYcg5iYyrFGNlCxcICo060HVEygjFs21nske76ksjKko21lp'\nkubectl create secret generic hetzner-secret-2 --namespace=mynamespace --from-literal=credentials='kslIIOUYBiuui7iGBYIUiuybpiUB87bgPyuCo060HVEygjFs21nske76ksjKko21l'\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: HetznerExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: hetzner-1\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-1\n        namespace: mynamespace\n    - name: hetzner-2\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-2\n        namespace: mynamespace        \n\n  nodePools:\n    dynamic:\n      - name: control-htz-1\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: hel1\n          # Datacenter of the nodepool.\n          zone: hel1-dc2\n        count: 1\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n\n      - name: control-htz-2\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-2\n          # Region of the nodepool.\n          region: fsn1\n          # Datacenter of the nodepool.\n          zone: fsn1-dc14\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n\n      - name: compute-htz-1\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-1\n          # Region of the nodepool.\n          region: fsn1\n          # Datacenter of the nodepool.\n          zone: fsn1-dc14\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n      - name: compute-htz-2\n        providerSpec:\n          # Name of the provider instance.\n          name: hetzner-2\n          # Region of the nodepool.\n          region: nbg1\n          # Datacenter of the nodepool.\n          zone: nbg1-dc3\n        count: 2\n        # Machine type name.\n        serverType: cpx11\n        # OS image name.\n        image: ubuntu-22.04\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: hetzner-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz-1\n            - control-htz-2\n          compute:\n            - compute-htz-1\n            - compute-htz-2\n
"},{"location":"input-manifest/providers/oci/","title":"OCI","text":"

The OCI provider requires you to input the privatekey, keyfingerprint, tenancyocid, userocid, and compartmentocid fields.

"},{"location":"input-manifest/providers/oci/#compute-and-dns-example","title":"Compute and DNS example","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: oci-secret\ndata:\n  compartmentocid: b2NpZDIuY29tcGFydG1lbnQub2MyLi5hYWFhYWFhYWEycnNmdmx2eGMzNG8wNjBrZmR5Z3NkczIxbnNrZTc2a3Nqa2tvMjFscHNkZnNm    \n  keyfingerprint: YWI6Y2Q6M2Y6MzQ6MzM6MjI6MzI6MzQ6NTQ6NTQ6NDU6NzY6NzY6Nzg6OTg6YWE=\n  privatekey: >-\n    LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyL2Fza0pTTG9zYWQKICAgICAgICBNSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNqMi9hc2tKU0xvc2FkCiAgICAgICAgTUlJRXZRSUJBREFOQmdrcWhraUc5dzBCQVFFRkFBU0NCS2N3Z2dTakFnRUFBb0lCQVFDajIvYXNrSlNMb3NhZAogICAgICAgIE1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2oyLz09CiAgICAgICAgLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0=\n  tenancyocid: b2NpZDIudGVuYW5jeS5vYzIuLmFhYWFhYWFheXJzZnZsdnhjMzRvMDYwa2ZkeWdzZHMyMW5za2U3NmtzamtrbzIxbHBzZGZzZnNnYnJ0Z2hz\n  userocid: b2NpZDIudXNlci5vYzIuLmFhYWFhYWFhYWFueXJzZnZsdnhjMzRvMDYwa2ZkeWdzZHMyMW5za2U3NmtzamtrbzIxbHBzZGZzZg==\ntype: Opaque\n
"},{"location":"input-manifest/providers/oci/#create-oci-credentials","title":"Create OCI credentials","text":""},{"location":"input-manifest/providers/oci/#prerequisites","title":"Prerequisites","text":"
  1. Install OCI CLI by following this guide.
  2. Configure OCI CLI by following this guide.
"},{"location":"input-manifest/providers/oci/#creating-oci-credentials-for-claudie","title":"Creating OCI credentials for Claudie","text":"
  1. Export your tenant id:

    export tenancy_ocid=\"ocid\"\n

    Find your tenant id

    You can find it under the Identity & Security tab, in the Compartments option.

  2. Create an OCI compartment where Claudie will deploy its resources:

    {\n  oci iam compartment create --name claudie-compartment --description claudie-compartment --compartment-id $tenancy_ocid\n}\n

  3. Create the claudie user:

    oci iam user create --name claudie-user --compartment-id $tenancy_ocid --description claudie-user --email <email address>\n

  4. Create a group that will hold permissions for the user:

    oci iam group create --name claudie-group --compartment-id $tenancy_ocid --description claudie-group\n

  5. Generate a policy file with the necessary permissions:

    {\ncat > policy.txt <<EOF\n[\n  \"Allow group claudie-group to manage instance-family in compartment claudie-compartment\",\n  \"Allow group claudie-group to manage volume-family in compartment claudie-compartment\",\n  \"Allow group claudie-group to manage virtual-network-family in tenancy\",\n  \"Allow group claudie-group to manage dns-zones in compartment claudie-compartment\",\n  \"Allow group claudie-group to manage dns-records in compartment claudie-compartment\"\n]\nEOF\n}\n

  6. Create a policy with required permissions:

    oci iam policy create --name claudie-policy --statements file://policy.txt --compartment-id $tenancy_ocid --description claudie-policy\n

  7. Declare user_ocid and group_ocid:

    {\n  group_ocid=$(oci iam group list | jq -r '.data[] | select(.name == \"claudie-group\") | .id')\n  user_ocid=$(oci iam user list | jq -r '.data[] | select(.name == \"claudie-user\") | .id')\n}\n

  8. Attach claudie-user to claudie-group:

    oci iam group add-user --group-id $group_ocid --user-id $user_ocid\n

  9. Generate key pair for claudie-user and enter N/A for no passphrase:

    oci setup keys --key-name claudie-user --output-dir .\n

  10. Upload the public key for the claudie-user:

    oci iam user api-key upload --user-id $user_ocid --key-file claudie-user_public.pem\n

  11. Export compartment_ocid and fingerprint to use them when creating the provider secret:

      compartment_ocid=$(oci iam compartment list | jq -r '.data[] | select(.name == \"claudie-compartment\") | .id')\n  fingerprint=$(oci iam user api-key list --user-id $user_ocid | jq -r '.data[0].fingerprint')\n

"},{"location":"input-manifest/providers/oci/#dns-setup","title":"DNS setup","text":"

If you wish to use OCI as your DNS provider where Claudie creates DNS records pointing to Claudie managed clusters, you will need to create a public DNS zone by following this guide.

OCI is not my domain registrar

You cannot buy a domain from Oracle at this time, so you will need to point your domain's nameservers to your OCI hosted zone by following this guide on changing nameservers. However, if you prefer not to use the entire domain, an alternative option is to delegate a subdomain to OCI.

"},{"location":"input-manifest/providers/oci/#iam-policies-required-by-claudie","title":"IAM policies required by Claudie","text":"
\"Allow group <GROUP_NAME> to manage instance-family in compartment <COMPARTMENT_NAME>\"\n\"Allow group <GROUP_NAME> to manage volume-family in compartment <COMPARTMENT_NAME>\"\n\"Allow group <GROUP_NAME> to manage virtual-network-family in tenancy\"\n\"Allow group <GROUP_NAME> to manage dns-zones in compartment <COMPARTMENT_NAME>\",\n\"Allow group <GROUP_NAME> to manage dns-records in compartment <COMPARTMENT_NAME>\",\n
"},{"location":"input-manifest/providers/oci/#input-manifest-examples","title":"Input manifest examples","text":""},{"location":"input-manifest/providers/oci/#single-provider-multi-region-cluster-example","title":"Single provider, multi region cluster example","text":""},{"location":"input-manifest/providers/oci/#create-a-secret-for-oci-provider","title":"Create a secret for OCI provider","text":"

The secret for an OCI provider must include the following mandatory fields: compartmentocid, userocid, tenancyocid, keyfingerprint and privatekey.

# Refer to values exported in \"Creating OCI credentials for Claudie\" section\nkubectl create secret generic oci-secret-1 --namespace=mynamespace --from-literal=compartmentocid=$compartment_ocid --from-literal=userocid=$user_ocid --from-literal=tenancyocid=$tenancy_ocid --from-literal=keyfingerprint=$fingerprint --from-file=privatekey=./claudie-user.pem\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: OCIExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: oci-1\n      providerType: oci\n      secretRef:\n        name: oci-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-oci\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-milan-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-MILAN-1-AD-1\n        count: 1\n        # VM shape name.\n        serverType: VM.Standard2.2\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n\n      - name: compute-1-oci\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-frankfurt-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-FRANKFURT-1-AD-1\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard2.1\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n      - name: compute-2-oci\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-frankfurt-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-FRANKFURT-1-AD-2\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard2.1\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: oci-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-oci\n          compute:\n            - compute-1-oci\n            - compute-2-oci\n
"},{"location":"input-manifest/providers/oci/#multi-provider-multi-region-clusters-example","title":"Multi provider, multi region clusters example","text":""},{"location":"input-manifest/providers/oci/#create-a-secret-for-oci-provider_1","title":"Create a secret for OCI provider","text":"

The secret for an OCI provider must include the following mandatory fields: compartmentocid, userocid, tenancyocid, keyfingerprint and privatekey.

# Refer to values exported in \"Creating OCI credentials for Claudie\" section\nkubectl create secret generic oci-secret-1 --namespace=mynamespace --from-literal=compartmentocid=$compartment_ocid --from-literal=userocid=$user_ocid --from-literal=tenancyocid=$tenancy_ocid --from-literal=keyfingerprint=$fingerprint --from-file=privatekey=./claudie-user.pem\n\nkubectl create secret generic oci-secret-2 --namespace=mynamespace --from-literal=compartmentocid=$compartment_ocid2 --from-literal=userocid=$user_ocid2 --from-literal=tenancyocid=$tenancy_ocid2 --from-literal=keyfingerprint=$fingerprint2 --from-file=privatekey=./claudie-user2.pem\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: OCIExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: oci-1\n      providerType: oci\n      secretRef:\n        name: oci-secret-1\n        namespace: mynamespace\n    - name: oci-2\n      providerType: oci\n      secretRef:\n        name: oci-secret-2\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-oci-1\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-milan-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-MILAN-1-AD-1\n        count: 1\n        # VM shape name.\n        serverType: VM.Standard2.2\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n\n      - name: control-oci-2\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-2\n          # Region of the nodepool.\n          region: eu-frankfurt-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-FRANKFURT-1-AD-3\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard2.1\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n\n      - name: compute-oci-1\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-frankfurt-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-FRANKFURT-1-AD-1\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard2.1\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n      - name: compute-oci-2\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-2\n          # Region of the nodepool.\n          region: eu-milan-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-MILAN-1-AD-1\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard2.1\n        # OCID of the image.\n        # Make sure to update it according to the region..\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: oci-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-oci-1\n            - control-oci-2\n          compute:\n            - compute-oci-1\n            - compute-oci-2\n
"},{"location":"input-manifest/providers/oci/#flex-instances-example","title":"Flex instances example","text":""},{"location":"input-manifest/providers/oci/#create-a-secret-for-oci-provider_2","title":"Create a secret for OCI provider","text":"

The secret for an OCI provider must include the following mandatory fields: compartmentocid, userocid, tenancyocid, keyfingerprint and privatekey.

# Refer to values exported in \"Creating OCI credentials for Claudie\" section\nkubectl create secret generic oci-secret-1 --namespace=mynamespace --from-literal=compartmentocid=$compartment_ocid --from-literal=userocid=$user_ocid --from-literal=tenancyocid=$tenancy_ocid --from-literal=keyfingerprint=$fingerprint --from-file=privatekey=./claudie-user.pem\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: OCIExampleManifest\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: oci-1\n      providerType: oci\n      secretRef:\n        name: oci-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: oci\n        providerSpec:\n          # Name of the provider instance.\n          name: oci-1\n          # Region of the nodepool.\n          region: eu-frankfurt-1\n          # Availability domain of the nodepool.\n          zone: hsVQ:EU-FRANKFURT-1-AD-1\n        count: 2\n        # VM shape name.\n        serverType: VM.Standard.E4.Flex\n        # further describes the selected server type.\n        machineSpec:\n          # use 2 ocpus.\n          cpuCount: 2\n          # use 8 gb of memory.\n          memory: 8\n        # OCID of the image.\n        # Make sure to update it according to the region.\n        image: ocid1.image.oc1.eu-frankfurt-1.aaaaaaaavvsjwcjstxt4sb25na65yx6i34bzdy5oess3pkgwyfa4hxmzpqeq\n        storageDiskSize: 50\n\n  kubernetes:\n    clusters:\n      - name: oci-cluster\n        version: v1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - oci\n          compute:\n            - oci\n
"},{"location":"input-manifest/providers/on-prem/","title":"On premise nodes","text":"

Claudie is designed to leverage your existing infrastructure and utilise it for building Kubernetes clusters together with supported cloud providers. However, Claudie operates under a few assumptions:

  1. Accessibility of Machines: Claudie requires access to the machines specified by the provided endpoint. It needs the ability to connect to these machines in order to perform necessary operations.

  2. Connectivity between Static Nodes: Static nodes within the infrastructure should be able to communicate with each other using the specified endpoints. This connectivity is important for proper functioning of the Kubernetes cluster.

  3. SSH Access with Root Privileges: Claudie relies on SSH access to the nodes using the SSH key provided in the input manifest. The SSH key should grant root privileges to enable Claudie to perform required operations on the nodes.

  4. Meeting the Kubernetes nodes requirements: Learn more.

By ensuring that these assumptions are met, Claudie can effectively utilise your infrastructure and build Kubernetes clusters while collaborating with the supported cloud providers.

"},{"location":"input-manifest/providers/on-prem/#private-key-example-secret","title":"Private key example secret","text":"
apiVersion: v1\nkind: Secret\nmetadata:\n  name: static-node-key\ndata:\n  privatekey: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBbzJEOGNYb0Uxb3VDblBYcXFpVW5qbHh0c1A4YXlKQW4zeFhYdmxLOTMwcDZBUzZMCncvVW03THFnbUhpOW9GL3pWVnB0TDhZNmE2NWUvWjk0dE9SQ0lHY0VJendpQXF3M3M4NGVNcnoyQXlrSWhsWE0KVEpSS3J3SHJrbDRtVlBvdE9paDFtZkVTenFMZ25TMWdmQWZxSUVNVFdOZlRkQmhtUXpBNVJFT2NpQ1Q1dFRnMApraDI1SmVHeU9qR3pzaFhkKzdaVi9PUXVQUk5Mb2lrQzFDVFdtM0FSVFFDeUpZaXR5bURVeEgwa09wa2VyODVoCmpFRTRkUnUxVzQ2WDZkdEUrSlBZNkNKRlR2c1VUcGlqT3QzQmNTSTYyY2ZyYmFRYXhvQXk2bEJLVlB1cm1xYm0Kb09JNHVRUWJWRGt5Q3V4MzcwSTFjTUVzWkszYVNBa0ZZSUlMRndJREFRQUJBb0lCQUVLUzFhc2p6bTdpSUZIMwpQeTBmd0xPWTVEVzRiZUNHSlVrWkxIVm9YK2hwLzdjVmtXeERMQjVRbWZvblVSWFZvMkVIWFBDWHROeUdERDBLCnkzUGlnek9TNXJPNDRCNzRzQ1g3ZW9Dd1VRck9vS09rdUlBSCtUckE3STRUQVVtbE8rS3o4OS9MeFI4Z2JhaCsKZ2c5b1pqWEpQMHYzZmptVGE3QTdLVXF3eGtzUEpORFhyN0J2MkhGc3ZueHROTkhWV3JBcjA3NUpSU2U3akJIRgpyQnpIRGFOUUhjYWwybTJWbDAvbGM4SVgyOEIwSXBYOEM5ajNqVGUwRS9XOVYyaURvM0ZvbmZzVU1BSm9KeW1nCkRzRXFxb25Cc0ZFeE9iY1BUNlh4SHRLVHVXMkRDRHF3c20xTVM2L0xUZzRtMFZ0alBRbGE5cnd0Z1lQcEtVSWYKbkRya3ZBRUNnWUVBOC9EUTRtNWF4UE0xL2d4UmVFNVZJSEMzRjVNK0s0S0dsdUNTVUNHcmtlNnpyVmhOZXllMwplbWpUV21lUmQ4L0szYzVxeGhJeGkvWE8vc0ZvREthSjdHaVl4L2RiOEl6dlJZYkw2ZHJiOVh0aHVObmhJWTlkCmJPd0VhbWxXZGxZbzlhUTBoYTFpSHpoUHVhMjN0TUNiM2xpZzE3MVZuUURhTXlhS3plaVMxUmNDZ1lFQXEzU2YKVEozcDRucmh4VjJiMEJKUStEdjkrRHNiZFBCY0pPbHpYVVVodHB6d3JyT3VKdzRUUXFXeG1pZTlhK1lpSzd0cAplY2YyOEltdHY0dy9aazg1TUdmQm9hTkpwdUNmNWxoMElseDB3ZXROQXlmb3dTNHZ3dUlZNG1zVFlvcE1WV20yClV5QzlqQ1M4Q0Y2Y1FrUVdjaVVlc2dVWHFocE50bXNLTG9LWU9nRUNnWUVBNWVwZVpsd09qenlQOGY4WU5tVFcKRlBwSGh4L1BZK0RsQzRWa1FjUktXZ1A2TTNKYnJLelZZTGsySXlva1VDRjRHakI0TUhGclkzZnRmZTA2TFZvMQorcXptK3Vub0xNUVlySllNMFQvbk91cnNRdmFRR3pwdG1zQ2t0TXJOcEVFMjM3YkJqaERKdjVVcWgxMzFISmJCCkVnTEVyaklVWkNNdWhURlplQk14ZVVjQ2dZRUFqZkZPc0M5TG9hUDVwVnVKMHdoVzRDdEtabWNJcEJjWk1iWFQKUERRdlpPOG9rbmxPaENheTYwb2hibTNYODZ2aVBqSTVjQWlMOXpjRUVNQWEvS2c1d0VrbGxKdUtMZzFvVTFxSApTcXNnUGlwKzUwM3k4M3M1THkzZlRCTTVTU3NWWnVETmdLUnFSOHRobjh3enNPaU5iSkl1aDFLUDlOTXg0d05hCnVvYURZQUVDZ1lFQW5xNzJJUEU1MlFwekpjSDU5RmRpbS8zOU1KYU1HZlhZZkJBNXJoenZnMmc5TW9URXpWKysKSVZ2SDFTSjdNTTB1SVBCa1FpbC91V083bU9DR2hHVHV3TGt3Uy9JU1FjTmRhSHlTRDNiZzdndzc5aG1UTVhiMgozVFpCTjdtb3FWM0VhRUhWVU1nT1N3dHUySTlQN1RJNGJJV0RQUWxuWE53Q0tCWWNKanRraWNRPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=\ntype: Opaque\n
"},{"location":"input-manifest/providers/on-prem/#input-manifest-example","title":"Input manifest example","text":""},{"location":"input-manifest/providers/on-prem/#private-cluster-example","title":"Private cluster example","text":"
kubectl create secret generic static-node-key --namespace=mynamespace --from-file=privatekey=private.pem\n
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: PrivateClusterExample\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  nodePools:\n    static:\n        - name: control\n          nodes:\n            - endpoint: \"192.168.10.1\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n\n        - name: compute\n          nodes:\n            - endpoint: \"192.168.10.2\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n            - endpoint: \"192.168.10.3\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n\n  kubernetes:\n    clusters:\n      - name: private-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control\n          compute:\n            - compute\n
"},{"location":"input-manifest/providers/on-prem/#hybrid-cloud-example","title":"Hybrid cloud example","text":""},{"location":"input-manifest/providers/on-prem/#create-secret-for-private-key","title":"Create secret for private key","text":"
kubectl create secret generic static-node-key --namespace=mynamespace --from-file=privatekey=private.pem\n

To see how to configure Hetzner or any other credentials for hybrid cloud, refer to their docs.

apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: HybridCloudExample\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n  providers:\n    - name: hetzner-1\n      providerType: hetzner\n      secretRef:\n        name: hetzner-secret-1\n        namespace: mynamespace\n\n  nodePools:\n    dynamic:\n      - name: control-htz\n        providerSpec:\n          name: hetzner-1\n          region: fsn1\n          zone: fsn1-dc14\n        count: 3\n        serverType: cpx11\n        image: ubuntu-22.04\n\n    static:\n        - name: datacenter-1\n          nodes:\n            - endpoint: \"192.168.10.1\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n            - endpoint: \"192.168.10.2\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n            - endpoint: \"192.168.10.3\"\n              secretRef:\n                name: static-node-key\n                namespace: mynamespace\n\n  kubernetes:\n    clusters:\n      - name: hybrid-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control-htz\n          compute:\n            - datacenter-1\n
"},{"location":"latency-limitations/latency-limitations/","title":"Latency-imposed limitations","text":"

The general rule of thumb is that every 100 km of distance adds roughly ~1 ms of latency; for example, two nodes about 600 km apart will see an extra ~6 ms between them. In the following subsections, we describe the problems that are likely to arise when etcd and Longhorn have to work over such high-latency links.

"},{"location":"latency-limitations/latency-limitations/#etcd-limitations","title":"etcd limitations","text":"

A distance of more than 600 km between etcd nodes in a multi-cloud environment can be detrimental to cluster health. In a scenario like this, the average deployment time can double compared to a scenario with etcd nodes in different availability zones within the same cloud provider. Besides this, the total number of etcd Slow Applies increases rapidly, and the round-trip time varies from ~0.05s to ~0.2s, whereas in a single-cloud scenario with etcd nodes in different AZs the range is from ~0.003s to ~0.025s.

In multi-cloud clusters, a request to the kube API server generally takes from ~0.025s to ~0.25s. In a single-cloud scenario, on the other hand, requests take from ~0.005s to ~0.025s.

You can read more about this topic here, and for distances above 600 km we recommend customizing the etcd deployment further (see).
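As an illustration of what such customization can involve, etcd exposes the heartbeat-interval and election-timeout flags; a common guideline is to set the heartbeat close to the measured round-trip time between members and the election timeout to roughly 10x the heartbeat. The sketch below expresses this as a kubeadm ClusterConfiguration snippet, which is only one possible way to pass these flags and is not necessarily how Claudie manages etcd; the values are assumptions to be replaced with your own measurements.

apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\netcd:\n  local:\n    extraArgs:\n      # Assumption: ~100 ms RTT between etcd members; tune to your measurements (values in ms).\n      heartbeat-interval: \"100\"\n      # Election timeout is commonly ~10x the heartbeat interval.\n      election-timeout: \"1000\"\n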

"},{"location":"latency-limitations/latency-limitations/#longhorn-limitations","title":"Longhorn limitations","text":"

There are three main problems when dealing with high latency in Longhorn:

Generally, a single volume with 3 replicas can tolerate a maximum network latency of around 100ms. In a multiple-volume scenario, the maximum network latency can be no more than 20ms. The network latency has a significant impact on IO performance and total network bandwidth. See more about CPU and network requirements here.

"},{"location":"latency-limitations/latency-limitations/#how-to-avoid-high-latency-problems","title":"How to avoid high latency problems","text":"

When dealing with RWO volumes, you can avoid mount failures caused by high latency by setting Longhorn to only use storage on specific nodes (follow this tutorial) and by using nodeAffinity or nodeSelector to schedule your workload pods only to the nodes that have replicas of the volumes or are close to them, as sketched below.
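A minimal sketch of pinning a workload to the storage nodes with a nodeSelector; the claudie.io/nodepool label and the claim name used here are assumptions for illustration, so substitute whatever label identifies the nodes that hold your volume replicas:

apiVersion: v1\nkind: Pod\nmetadata:\n  name: latency-sensitive-workload\nspec:\n  # Assumption: the storage nodes carry this label; replace it with your own node label.\n  nodeSelector:\n    claudie.io/nodepool: datastore\n  containers:\n    - name: app\n      image: nginx:1.25\n      volumeMounts:\n        - name: data\n          mountPath: /data\n  volumes:\n    - name: data\n      persistentVolumeClaim:\n        # Assumption: an existing RWO claim backed by Longhorn.\n        claimName: my-rwo-claim\n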

"},{"location":"latency-limitations/latency-limitations/#how-to-mitigate-high-latency-problems-with-rwx-volumes","title":"How to mitigate high latency problems with RWX volumes","text":"

To mitigate high latency issues with RWX volumes you can maximize these Longhorn settings:

By maximizing these settings, you should be able to mount an RWX volume even when the latency between the node with the share-manager pod and the node with the workload pod + replica is ~200ms. However, the mount will take 7 to 10 minutes. There are also resource requirements on the nodes and limits on the maximum size of the RWX volumes. For example, you will not succeed in mounting even a 1Gi RWX volume with ~200ms latency between the share-manager node and the workload + replica node if the nodes have only 2 shared vCPUs and 4GB RAM, even when there are no other workloads in the cluster. Your nodes need at least 2 vCPUs and 8GB RAM. Generally, the more CPU you assign to the Longhorn manager, the better you can mitigate the high-latency issue with RWX volumes.

Keep in mind that using machines with more resources and maximizing these Longhorn settings doesn't necessarily guarantee a successful mount of the RWX volumes; it also depends on the size of the volumes. For example, even after maximizing these settings and using nodes with 2 vCPUs and 8GB RAM with ~200ms latency between them, you will fail to mount a 10Gi volume to the workload pod if you try to mount multiple volumes at once. If you mount them one by one, you should be fine.

To conclude, maximizing these Longhorn settings can help mitigate the high-latency issue when mounting RWX volumes, but it is resource-hungry, and the outcome also depends on the size of the RWX volumes and the total number of RWX volumes attaching at once.

"},{"location":"loadbalancing/loadbalancing-solution/","title":"Claudie load balancing solution","text":""},{"location":"loadbalancing/loadbalancing-solution/#loadbalancer","title":"Loadbalancer","text":"

To create a highly available Kubernetes cluster, Claudie creates load balancers for the kube API server. These load balancers use Nginx to balance the traffic among the cluster nodes. Claudie also supports the definition of custom load balancers for the applications running inside the cluster.

"},{"location":"loadbalancing/loadbalancing-solution/#concept","title":"Concept","text":""},{"location":"loadbalancing/loadbalancing-solution/#example-diagram","title":"Example diagram","text":""},{"location":"loadbalancing/loadbalancing-solution/#definitions","title":"Definitions","text":""},{"location":"loadbalancing/loadbalancing-solution/#role","title":"Role","text":"

Claudie uses the concept of roles while configuring the load balancers from the input manifest. Each role represents a loadbalancer configuration for a particular use. Roles are then assigned to the load balancer cluster. A single load balancer cluster can have multiple roles assigned.
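For illustration, a minimal role definition could look like the sketch below; the apiserver role mirrors the storage example later in this documentation, while the https role is a hypothetical application role:

loadBalancers:\n  roles:\n    # Role exposing the kube API server.\n    - name: apiserver\n      protocol: tcp\n      port: 6443\n      targetPort: 6443\n      targetPools:\n        - control\n    # Hypothetical role for application traffic handled by an ingress controller.\n    - name: https\n      protocol: tcp\n      port: 443\n      targetPort: 30443\n      targetPools:\n        - compute\n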

"},{"location":"loadbalancing/loadbalancing-solution/#targeted-kubernetes-cluster","title":"Targeted kubernetes cluster","text":"

A load balancer is assigned to a Kubernetes cluster via the targetedK8s field, which takes the name of the Kubernetes cluster as its value. Currently, a single load balancer can only be assigned to a single Kubernetes cluster.

Among multiple load balancers targeting the same Kubernetes cluster, only one can have the API server role (i.e. the role with target port 6443) attached to it.
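A sketch of assigning a load balancer cluster to a Kubernetes cluster via targetedK8s; the names are illustrative and the field layout mirrors the storage example later in this documentation:

loadBalancers:\n  clusters:\n    - name: apiserver-lb\n      roles:\n        - apiserver\n      dns:\n        # DNS zone and provider instance defined elsewhere in the manifest.\n        dnsZone: dns-zone\n        provider: dns-provider\n      # Name of the Kubernetes cluster this load balancer serves.\n      targetedK8s: my-awesome-claudie-cluster\n      pools:\n        - loadbalancer\n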

"},{"location":"loadbalancing/loadbalancing-solution/#dns","title":"DNS","text":"

Claudie creates and manages the DNS for the load balancer. When the user adds a load balancer to their infrastructure via Claudie, Claudie creates a DNS A record pointing to the public IPs of the load balancer machines behind it. Whenever the load balancer configuration changes, e.g. a node is added or removed, or the hostname or target changes, Claudie reconfigures the DNS record on the fly. This removes the need for the user to manage DNS themselves.

"},{"location":"loadbalancing/loadbalancing-solution/#nodepools","title":"Nodepools","text":"

Load balancers are built from user-defined nodepools in the pools field, similar to how Kubernetes clusters are defined. These nodepools allow the user to change/scale the load balancers according to their needs without any fuss. See the nodepool definition for more information.

"},{"location":"loadbalancing/loadbalancing-solution/#an-example-of-load-balancer-definition","title":"An example of load balancer definition","text":"

See an example load balancer definition in our reference example input manifest.

"},{"location":"loadbalancing/loadbalancing-solution/#notes","title":"Notes","text":""},{"location":"loadbalancing/loadbalancing-solution/#cluster-ingress-controller","title":"Cluster ingress controller","text":"

You still need to deploy your own ingress controller to use the load balancer. It needs to be set up to use a NodePort service with the ports configured under roles in the load balancer definition.
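As a rough sketch, assuming a hypothetical load balancer role with targetPort 30443, the ingress controller's Service could be exposed as a NodePort like this; the names and selector are illustrative and must match your ingress controller's actual deployment:

apiVersion: v1\nkind: Service\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\nspec:\n  type: NodePort\n  selector:\n    app.kubernetes.io/name: ingress-nginx\n  ports:\n    - name: https\n      port: 443\n      targetPort: 443\n      # Must match the targetPort configured under the corresponding role\n      # in the load balancer definition (hypothetical value).\n      nodePort: 30443\n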

"},{"location":"monitoring/grafana/","title":"Prometheus Monitoring","text":"

In our environment, we rely on Claudie to export Prometheus metrics, providing valuable insights into the state of our infrastructure and applications. To utilize Claudie's monitoring capabilities, it's essential to have Prometheus installed. With this setup, you can gain visibility into various metrics such as:

You can find the Claudie dashboard here.

"},{"location":"monitoring/grafana/#configure-scraping-metrics","title":"Configure scraping metrics","text":"

We recommend using the Prometheus Operator for managing Prometheus deployments efficiently.

  1. Create RBAC that allows Prometheus to scrape metrics from Claudie\u2019s pods:

    apiVersion: rbac.authorization.k8s.io/v1\nkind: Role\nmetadata:\n  name: claudie-pod-reader\n  namespace: claudie\nrules:\n- apiGroups: [\"\"]\n  resources: [\"pods\"]\n  verbs: [\"get\", \"list\"]\n---\napiVersion: rbac.authorization.k8s.io/v1\nkind: RoleBinding\nmetadata:\n  name: claudie-pod-reader-binding\n  namespace: claudie\nsubjects:\n# this SA is created by https://github.com/prometheus-operator/kube-prometheus\n# in your case you might need to bind this Role to a different SA\n- kind: ServiceAccount\n  name: prometheus-k8s\n  namespace: monitoring\nroleRef:\n  kind: Role\n  name: claudie-pod-reader\n  apiGroup: rbac.authorization.k8s.io\n

  2. Create a Prometheus PodMonitor to scrape metrics from Claudie\u2019s pods:

    apiVersion: monitoring.coreos.com/v1\nkind: PodMonitor\nmetadata:\n  name: claudie-metrics\n  namespace: monitoring\n  labels:\n    name: claudie-metrics\nspec:\n  namespaceSelector:\n    matchNames:\n      - claudie\n  selector:\n    matchLabels:\n      app.kubernetes.io/part-of: claudie\n  podMetricsEndpoints:\n  - port: metrics\n

  3. Import our dashboard into your Grafana instance:

That's it! Now you have set up RBAC for Prometheus, configured a PodMonitor to scrape metrics from Claudie's pods, and imported a Grafana dashboard to visualize the metrics.

"},{"location":"node-local-dns/node-local-dns/","title":"Deploying Node-Local-DNS","text":"

Claudie doesn't deploy node-local-dns by default. In this section, we'll walk through an example of how to deploy node-local-dns to a Claudie-created cluster.

"},{"location":"node-local-dns/node-local-dns/#1-download-nodelocaldnsyaml","title":"1. Download nodelocaldns.yaml","text":"

Based on the Kubernetes version you are using in your cluster, download nodelocaldns.yaml from the Kubernetes repository.

Make sure to download the YAML for the right Kubernetes version, e.g. for Kubernetes 1.27 you would use:

wget https://raw.githubusercontent.com/kubernetes/kubernetes/release-1.27/cluster/addons/dns/nodelocaldns/nodelocaldns.yaml\n
"},{"location":"node-local-dns/node-local-dns/#2-modify-downloaded-nodelocaldnsyaml","title":"2. Modify downloaded nodelocaldns.yaml","text":"

We'll need to replace the references to __PILLAR__DNS__DOMAIN__ and some of the references to __PILLAR__LOCAL__DNS__.

To replace __PILLAR__DNS__DOMAIN__ execute:

sed -i \"s/__PILLAR__DNS__DOMAIN__/cluster.local/g\" nodelocaldns.yaml\n

To replace __PILLAR__LOCAL__DNS__, find the references and change them to 169.254.20.10 as shown below:

    ...\n      containers:\n      - name: node-cache\n        image: registry.k8s.io/dns/k8s-dns-node-cache:1.22.20\n        resources:\n          requests:\n            cpu: 25m\n            memory: 5Mi\n-       args: [ \"-localip\", \"__PILLAR__LOCAL__DNS__,__PILLAR__DNS__SERVER__\", \"-conf\", \"/etc/Corefile\", \"-upstreamsvc\", \"kube-dns-upstream\" ]\n+       args: [ \"-localip\", \"169.254.20.10\", \"-conf\", \"/etc/Corefile\", \"-upstreamsvc\", \"kube-dns-upstream\" ]\n        securityContext:\n          capabilities:\n            add:\n            - NET_ADMIN\n        ports:\n        - containerPort: 53\n          name: dns\n          protocol: UDP\n        - containerPort: 53\n          name: dns-tcp\n          protocol: TCP\n        - containerPort: 9253\n          name: metrics\n          protocol: TCP\n        livenessProbe:\n          httpGet:\n-           host: __PILLAR__LOCAL__DNS__\n+           host: 169.254.20.10\n            path: /health\n            port: 8080\n          initialDelaySeconds: 60\n          timeoutSeconds: 5\n    ...\n
"},{"location":"node-local-dns/node-local-dns/#3-apply-the-modified-manifest","title":"3. Apply the modified manifest.","text":"

kubectl apply -f ./nodelocaldns.yaml

"},{"location":"roadmap/roadmap/","title":"Roadmap for Claudie","text":"

v0.8.1 (all items completed): support for more cloud providers (OCI, AWS, Azure, Cloudflare, GenesisCloud), hybrid-cloud support (on-premises), arm64 support for the nodepools, app-level metrics, and the autoscaler.

"},{"location":"sitemap/sitemap/","title":"Sitemap","text":"

This section contains brief descriptions of the main parts of Claudie's documentation.

"},{"location":"sitemap/sitemap/#getting-started","title":"Getting Started","text":"

The \"Getting Started\" section is where you'll learn how to begin using Claudie. We'll guide you through the initial steps and show you how to set things up, so you can start using the software right away.

You'll also find helpful information on how to customize Claudie to suit your needs, including specifications for the settings you can adjust, and examples of how to use configuration files to get started.

By following the steps in this section, you'll have everything you need to start using Claudie with confidence!

"},{"location":"sitemap/sitemap/#input-manifest","title":"Input manifest","text":"

This section contains examples of YAML files of the InputManifest CRD that tell Claudie what an infrastructure should look like. Besides these files, you can also find an API reference for the InputManifest CRD there.

"},{"location":"sitemap/sitemap/#how-claudie-works","title":"How Claudie works","text":"

In this section, we'll show you how Claudie works and guide you through our workflow. We'll explain how we store and manage data, balance the workload across different parts of the system, and automatically adjust resources to handle changes in demand.

By following our explanations, you'll gain a better understanding of how Claudie operates and be better equipped to use it effectively.

"},{"location":"sitemap/sitemap/#claudie-use-cases","title":"Claudie Use Cases","text":"

The \"Claudie Use Cases\" section includes examples of different ways you can use Claudie to solve various problems. We've included these examples to help you understand the full range of capabilities Claudie offers and to show you how it can be applied in different scenarios.

By exploring these use cases, you'll get a better sense of how Claudie can be a valuable tool for your work.

"},{"location":"sitemap/sitemap/#faq","title":"FAQ","text":"

You may find helpful answers in our FAQ section.

"},{"location":"sitemap/sitemap/#roadmap-for-claudie","title":"Roadmap for Claudie","text":"

In this section, you'll find a roadmap for Claudie that outlines the features we've already added and those we plan to add in the future.

By checking out the roadmap, you'll be able to stay informed about the latest updates and see how Claudie is evolving to meet the needs of its users.

"},{"location":"sitemap/sitemap/#contributing","title":"Contributing","text":"

In this section, we've gathered all the information you'll need if you want to help contribute to the Claudie project or release a new version of the software.

By checking out this section, you'll get a better sense of what's involved in contributing and how you can be part of making Claudie even better.

"},{"location":"sitemap/sitemap/#changelog","title":"Changelog","text":"

The \"changelog\" section is where you can find information about all the changes, updates, and issues related to each version of Claudie.

"},{"location":"sitemap/sitemap/#latency-limitations","title":"Latency limitations","text":"

In this section, we describe latency limitations, which you should take into account when designing your infrastructure.

"},{"location":"sitemap/sitemap/#troubleshooting","title":"Troubleshooting","text":"

In case you run into issues, we recommend following some of the troubleshooting guides in this section.

"},{"location":"sitemap/sitemap/#creating-claudie-backup","title":"Creating Claudie Backup","text":"

This section describes the steps to back up Claudie and its dependencies.

"},{"location":"sitemap/sitemap/#claudie-hardening","title":"Claudie Hardening","text":"

This section describes how to further configure the default Claudie deployment. It is highly recommended that you read this section.

"},{"location":"sitemap/sitemap/#prometheus-monitoring","title":"Prometheus Monitoring","text":"

In this section we walk you through the setup of Claudie's Prometheus metrics to gain visibility into various metrics that Claudie exposes.

"},{"location":"sitemap/sitemap/#updating-claudie","title":"Updating Claudie","text":"

This section describes how to execute updates, such as OS or Kubernetes version upgrades, in Claudie.

"},{"location":"sitemap/sitemap/#deploying-node-local-dns","title":"Deploying Node-Local-DNS","text":"

Claudie doesn't deploy Node-Local-DNS by default, so you have to install it yourself. This section provides a step-by-step guide on how to do it.

"},{"location":"sitemap/sitemap/#command-cheat-sheet","title":"Command Cheat Sheet","text":"

The \"Command Cheat Sheet\" section contains a useful kubectl commands to interact with Claudie.

"},{"location":"sitemap/sitemap/#version-matrix","title":"Version matrix","text":"

In this section, you can find supported Kubernetes and OS versions for the latest Claudie versions.

"},{"location":"storage/storage-solution/","title":"Claudie storage solution","text":""},{"location":"storage/storage-solution/#concept","title":"Concept","text":"

Running stateful workloads is a complex task, even more so when considering the multi-cloud environment. Claudie therefore needs to be able to accommodate stateful workloads, regardless of the underlying infrastructure providers.

Claudie orchestrates storage on the kubernetes cluster nodes by creating one \"storage cluster\" across multiple providers. This \"storage cluster\" has a series of zones, one for each cloud provider instance. Each zone then stores its own persistent volume data.

This concept is translated into the Longhorn implementation, where each zone is represented by a Storage Class backed by the nodes defined under the same cloud provider instance. Furthermore, each node uses a disk separate from the one where the OS is installed, to ensure clear data separation. The size of the storage disk can be configured via the storageDiskSize field of the nodepool specification.

"},{"location":"storage/storage-solution/#longhorn","title":"Longhorn","text":"

A Claudie-created cluster comes with the longhorn deployment preinstalled and ready to be used. By default, only worker nodes are used to store data.

Longhorn installed in the cluster is set up in a way that it provides one default StorageClass called longhorn, which, if used, creates a volume that is then replicated across random nodes in the cluster.

Besides the default storage class, Claudie can also create custom storage classes, which force persistent volumes to be created on specific nodes based on the provider instance they have. In other words, you can use a specific provider instance to provision nodes for your storage needs, while using another provider instance for computing tasks.

"},{"location":"storage/storage-solution/#example","title":"Example","text":"

To follow along, have a look at the example of InputManifest below.

storage-classes-example.yaml
apiVersion: claudie.io/v1beta1\nkind: InputManifest\nmetadata:\n  name: ExampleManifestForStorageClasses\n  labels:\n    app.kubernetes.io/part-of: claudie\nspec:\n\n  providers:\n    - name: storage-provider\n      providerType: hetzner\n      secretRef:\n        name: storage-provider-secrets\n        namespace: claudie-secrets\n\n    - name: compute-provider\n      providerType: hetzner\n      secretRef:\n        name: storage-provider-secrets\n        namespace: claudie-secrets\n\n    - name: dns-provider\n      providerType: cloudflare\n      secretRef:\n        name: dns-provider-secret\n        namespace: claudie-secrets\n\n  nodePools:\n    dynamic:\n        - name: control\n          providerSpec:\n            name: compute-provider\n            region: hel1\n            zone: hel1-dc2\n          count: 3\n          serverType: cpx21\n          image: ubuntu-22.04\n\n        - name: datastore\n          providerSpec:\n            name: storage-provider\n            region: hel1\n            zone: hel1-dc2\n          count: 5\n          serverType: cpx21\n          image: ubuntu-22.04\n          storageDiskSize: 800\n          taints:\n            - key: node-type\n              value: datastore\n              effect: NoSchedule\n\n        - name: compute\n          providerSpec:\n            name: compute-provider\n            region: hel1\n            zone: hel1-dc2\n          count: 10\n          serverType: cpx41\n          image: ubuntu-22.04\n          taints:\n            - key: node-type\n              value: compute\n              effect: NoSchedule\n\n        - name: loadbalancer\n          providerSpec:\n            name: compute-provider\n            region: hel1\n            zone: hel1-dc2\n          count: 1\n          serverType: cpx21\n          image: ubuntu-22.04\n\n  kubernetes:\n    clusters:\n      - name: my-awesome-claudie-cluster\n        version: 1.27.0\n        network: 192.168.2.0/24\n        pools:\n          control:\n            - control\n          compute:\n            - datastore\n            - compute\n\n  loadBalancers:\n    roles:\n      - name: apiserver\n        protocol: tcp\n        port: 6443\n        targetPort: 6443\n        targetPools: \n          - control\n\n    clusters:\n      - name: apiserver-lb\n        roles:\n          - apiserver\n        dns:\n          dnsZone: dns-zone\n          provider: dns-provider\n        targetedK8s: my-awesome-claudie-cluster\n        pools:\n          - loadbalancer\n

When Claudie applies this input manifest, the following storage classes are installed:

Now all you have to do is specify the correct storage class when defining your PVCs, as sketched below.
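A minimal sketch of such a PVC; the class name longhorn-storage-provider-zone is an assumption, so check the storage classes actually installed in your cluster (kubectl get storageclass) for the exact names:

apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n  name: datastore-claim\nspec:\n  accessModes:\n    - ReadWriteOnce\n  # Assumption: replace with the zone-specific storage class installed by Claudie.\n  storageClassName: longhorn-storage-provider-zone\n  resources:\n    requests:\n      storage: 10Gi\n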

In case you are interested in using a different cloud provider for the datastore or compute nodepool of this InputManifest example, see the list of supported provider instances.

For more information on how Longhorn works you can check out Longhorn's official documentation.

"},{"location":"troubleshooting/troubleshooting/","title":"Troubleshooting guide","text":"

In progress

As we continue expanding our troubleshooting guide, we understand that issues may arise during your usage of Claudie. Although the guide is not yet complete, we encourage you to create a GitHub issue if you encounter any problems. Your feedback and reports are highly valuable to us in improving our platform and addressing any issues you may face.

"},{"location":"troubleshooting/troubleshooting/#claudie-cluster-not-starting","title":"Claudie cluster not starting","text":"

Claudie relies on all services to be interconnected. If any of these services fail to create due to node unavailability or resource constraints, Claudie will be unable to provision your cluster.

  1. Check if all Claudie services are running:

    kubectl get pods -n claudie\n
    NAME                                   READY   STATUS      RESTARTS        AGE\nansibler-5c6c776b75-82c2q              1/1     Running     0               8m10s\nbuilder-59f9d44596-n2qzm               1/1     Running     0               8m10s\nmanager-5d76c89b4d-tb6h4               1/1     Running     1 (6m37s ago)   8m10s\ncreate-table-job-jvs9n                 0/1     Completed   1               8m10s\ndynamodb-68777f9787-8wjhs              1/1     Running     0               8m10s\nclaudie-operator-5755b7bc69-5l84h      1/1     Running     0               8m10s\nkube-eleven-64468cd5bd-qp4d4           1/1     Running     0               8m10s\nkuber-698c4564c-dhsvg                  1/1     Running     0               8m10s\nmake-bucket-job-fb5sp                  0/1     Completed   0               8m10s\nminio-0                                1/1     Running     0               8m10s\nminio-1                                1/1     Running     0               8m10s\nminio-2                                1/1     Running     0               8m10s\nminio-3                                1/1     Running     0               8m10s\nmongodb-67bf769957-9ct5z               1/1     Running     0               8m10s\nterraformer-fd664b7ff-dd2h7            1/1     Running     0               8m9s\n
  2. Check the InputManifest resource status to find out the actual cluster state.

    kubectl get inputmanifests.claudie.io resourceName -o jsonpath={.status}\n
      {\n    \"clusters\": {\n      \"one-of-my-cluster\": {\n        \"message\": \" installing VPN\",\n        \"phase\": \"ANSIBLER\",\n        \"state\": \"IN_PROGRESS\"\n      }\n    },\n    \"state\": \"IN_PROGRESS\"\n  }    \n
  3. Examine the claudie-operator service logs. The claudie-operator service logs will provide insights into any issues during cluster bootstrap and identify the problematic service. If cluster creation fails despite all Claudie pods being scheduled, it may suggest a lack of permissions for the Claudie providers' credentials. In this case, the operator logs will point to the Terraformer service, and the Terraformer service logs will provide detailed error output.

    kubectl -n claudie logs -l app.kubernetes.io/name=claudie-operator\n
    6:04AM INF Using log with the level \"info\" module=claudie-operator\n6:04AM INF Claudie-operator is ready to process input manifests module=claudie-operator\n6:04AM INF Claudie-operator is ready to watch input manifest statuses module=claudie-operator\n

    Debug log level

    Using the debug log level will help identify the issue more closely. This guide shows how you can set it up in step 5.

    Claudie benefit!

    The great thing about Claudie is that it utilizes open source tools to set up and configure infrastructure based on your preferences. As a result, the majority of errors can be easily found and resolved through online resources.

"},{"location":"troubleshooting/troubleshooting/#terraformer-service-not-starting","title":"Terraformer service not starting","text":"

Terraformer relies on MinIO and DynamoDB datastores to be configured via jobs make-bucket-job and create-table-job respectively. If these jobs fail to configure the datastores, or the datastores themselves fail to start, Terraformer will also fail to start.

"},{"location":"troubleshooting/troubleshooting/#datastore-initialization-jobs","title":"Datastore initialization jobs","text":"

The create-table-job is responsible for creating necessary tables in the DynamoDB datastore, while the make-bucket-job creates a bucket in the MinIO datastore. If these jobs encounter scheduling problems or experience slow autoscaling, they may fail to complete within the designated time frame. To handle this, we have set the backoffLimit of both jobs to fail after approximately 42 minutes. If you encounter any issues with these jobs or believe the backoffLimit should be adjusted, please create an issue.

"},{"location":"troubleshooting/troubleshooting/#networking-issues","title":"Networking issues","text":""},{"location":"troubleshooting/troubleshooting/#wireguard-mtu","title":"Wireguard MTU","text":"

We use WireGuard for secure node-to-node connectivity, which requires setting the MTU value to match WireGuard's. While the host system interface MTU is adjusted accordingly, networking issues may arise for services hosted on Claudie-managed Kubernetes clusters. For example, we observed that the GitHub Actions runner Docker container had to be configured with an MTU value of 1380 to avoid network errors during the docker build process.
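One way to apply such an MTU override for containers on Docker's default bridge is through /etc/docker/daemon.json, as sketched below; restart the Docker daemon afterwards, and note that user-defined Docker networks need their MTU set separately:

{\n  \"mtu\": 1380\n}\n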

"},{"location":"troubleshooting/troubleshooting/#hetzner-and-oci-node-pools","title":"Hetzner and OCI node pools","text":"

We're experiencing networking issues caused by the blacklisting of public IPs owned by Hetzner and OCI. This problem affects the Ansibler and Kube-eleven services, which fail when attempting to add GPG keys to access the Google repository for package downloads. Unfortunately, there's no straightforward solution to bypass this issue. The recommended approach is to allow the services to fail, remove the failed cluster, and attempt to provision a new cluster with newly allocated IP addresses that are not blocked by Google.
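
A rough sketch of that approach, assuming the failed cluster is the only one defined in an InputManifest named my-manifest (both the resource name and the file name are placeholders):

kubectl delete inputmanifests.claudie.io my-manifest\n# once the deletion completes, re-apply the manifest so the cluster is re-provisioned with newly allocated IPs\nkubectl apply -f my-manifest.yaml\n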

"},{"location":"troubleshooting/troubleshooting/#resolving-issues-with-terraform-state-lock","title":"Resolving issues with Terraform state lock","text":"

During normal operation, the content of this section should not be required. If you ended up here, it means there was likely a bug somewhere in Claudie. Please open a bug report in that case and use the content of this section to troubleshoot your way out of it.

First of all, you have to get into the directory in the Terraformer pod where all the Terraform files are located. In order to do that, follow the steps below:
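
A minimal sketch of how to do that, assuming the Terraformer deployment is named terraformer and runs in the claudie namespace; the exact directory holding your cluster's Terraform files is not fixed here:

kubectl -n claudie exec -it deploy/terraformer -- sh\ncd <directory-with-the-cluster-terraform-files>   # placeholder -- pick the directory matching your cluster\n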

"},{"location":"troubleshooting/troubleshooting/#locked-state","title":"Locked state","text":"

Once you are in the directory with all TF files, run the following command:

terraform force-unlock <lock-id>\n

The lock-id is generally shown in the error message.

"},{"location":"update/update/","title":"Updating Claudie","text":"

In this section we'll describe how you can update the resources that Claudie creates, based on changes in the input manifest.

"},{"location":"update/update/#updating-kubernetes-version","title":"Updating Kubernetes Version","text":"

Updating the Kubernetes version is as easy as incrementing the version in the InputManifest of the already built cluster.

# old version\n...\nkubernetes:\n  clusters:\n    - name: claudie-cluster\n      version: v1.27.0\n      network: 192.168.2.0/24\n      pools:\n        ...\n
# new version\n...\nkubernetes:\n  clusters:\n    - name: claudie-cluster\n      version: v1.28.0\n      network: 192.168.2.0/24\n      pools:\n        ...\n

When re-applied, this will trigger a new workflow for the cluster, resulting in the updated Kubernetes version.
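
Re-applying simply means applying the updated InputManifest again; a sketch, assuming it is stored in a file called input-manifest.yaml (placeholder name):

kubectl apply -f input-manifest.yaml\n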

Downgrading a version is not supported once you've upgraded a cluster to a newer version.

"},{"location":"update/update/#updating-dynamic-nodepool","title":"Updating Dynamic Nodepool","text":"

Nodepools specified in the InputManifest are immutable; once created, they cannot be changed. This decision was made to force the user to perform a rolling update by replacing the nodepool with a new one that carries the desired state. A couple of examples are listed below.

"},{"location":"update/update/#updating-the-os-image","title":"Updating the OS image","text":"
# old version\n...\n- name: hetzner\n  providerSpec:\n    name: hetzner-1\n    region: fsn1\n    zone: fsn1-dc14\n  count: 1\n  serverType: cpx11\n  image: ubuntu-22.04\n...\n
# new version\n...\n- name: hetzner-1 # NOTE the different name.\n  providerSpec:\n    name: hetzner-1\n    region: fsn1\n    zone: fsn1-dc14\n  count: 1\n  serverType: cpx11\n  image: ubuntu-24.04\n...\n

When re-applied, this will trigger a new workflow for the cluster: the new nodepool is added first, and the old nodepool is then deleted.

"},{"location":"update/update/#changing-the-server-type-of-a-dynamic-nodepool","title":"Changing the Server Type of a Dynamic Nodepool","text":"

The same concept applies to changing the server type of a dynamic nodepool.

# old version\n...\n- name: hetzner\n  providerSpec:\n    name: hetzner-1\n    region: fsn1\n    zone: fsn1-dc14\n  count: 1\n  serverType: cpx11\n  image: ubuntu-22.04\n...\n
# new version\n...\n- name: hetzner-1 # NOTE the different name.\n  providerSpec:\n    name: hetzner-1\n    region: fsn1\n    zone: fsn1-dc14\n  count: 1\n  serverType: cpx21\n  image: ubuntu-22.04\n...\n

When re-applied, this will trigger a new workflow for the cluster, resulting in the updated server type of the nodepool.

"},{"location":"use-cases/use-cases/","title":"Use-cases and customers","text":"

We foresee the following use-cases for the Claudie platform:

"},{"location":"use-cases/use-cases/#1-cloud-bursting","title":"1. Cloud-bursting","text":"

A company uses advanced cloud features in one of the hyper-scale providers (e.g. serverless Lambda and API Gateway functionality in AWS). They run a machine-learning application that they need to train on a dataset, and the learning phase requires significant compute resources. Claudie allows them to extend the cluster in AWS (needed in order to access the AWS functionality) to Hetzner, saving infrastructure costs for the machine-learning workload.

Typical client profiles:

"},{"location":"use-cases/use-cases/#2-cost-saving","title":"2. Cost-saving","text":"

A company would like to utilize the on-premise or leased resources they have already invested in, but would also like to:

  1. extend the capacity
  2. access managed features of a hyper-scale provider (AWS, GCP, ...)
  3. get the workload physically closer to a client (e.g. to South America)

Typical client profile:

"},{"location":"use-cases/use-cases/#3-smart-layer-as-a-service-on-top-of-simple-cloud-providers","title":"3. Smart-layer-as-a-Service on top of simple cloud-providers","text":"

An existing customer of a medium-size provider (e.g. Exoscale) would like to utilize features that are typical for hyper-scale providers. Their current provider neither offers nor plans to offer such advanced functionality.

Typical client profile:

"},{"location":"use-cases/use-cases/#4-service-interconnect","title":"4. Service interconnect","text":"

A company would like to access on-premise-hosted services and cloud-managed services from within the same cluster. For on-premise services, the on-premise cluster nodes would egress the traffic, while the cloud-hosted cluster nodes would handle the egress traffic to the cloud-managed services.

Typical client profile:

"},{"location":"version-matrix/version-matrix/","title":"Version matrix","text":"

In the following table, you can find the supported Kubernetes and OS versions for the latest Claudie versions.

Claudie Version Kubernetes versions OS versions v0.6.x 1.24.x, 1.25.x, 1.26.x Ubuntu 22.04 v0.7.0 1.24.x, 1.25.x, 1.26.x Ubuntu 22.04 v0.7.1-x 1.25.x, 1.26.x, 1.27.x Ubuntu 22.04 v0.8.0 1.25.x, 1.26.x, 1.27.x Ubuntu 22.04 v0.8.1 1.27.x, 1.28.x, 1.29.x Ubuntu 22.04 v0.9.0 1.27.x, 1.28.x, 1.29.x, 1.30.x Ubuntu 22.04 (Ubuntu 24.04 on Hetzner and Azure)"}]} \ No newline at end of file diff --git a/v0.9.0/sitemap.xml b/v0.9.0/sitemap.xml index 3364b53ee..e015734c0 100644 --- a/v0.9.0/sitemap.xml +++ b/v0.9.0/sitemap.xml @@ -2,242 +2,242 @@ https://docs.claudie.io/v0.9.0/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/CHANGELOG/changelog-0.1.x/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/CHANGELOG/changelog-0.2.x/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/CHANGELOG/changelog-0.3.x/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/CHANGELOG/changelog-0.4.x/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/CHANGELOG/changelog-0.5.x/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/CHANGELOG/changelog-0.6.x/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/CHANGELOG/changelog-0.7.x/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/CHANGELOG/changelog-0.8.x/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/CHANGELOG/changelog-0.9.x/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/autoscaling/autoscaling/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/claudie-workflow/claudie-workflow/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/commands/commands/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/contributing/contributing/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/contributing/local-testing/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/contributing/release/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/creating-claudie-backup/creating-claudie-backup/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/docs-guides/deployment-workflow/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/docs-guides/development/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/faq/FAQ/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/feedback/feedback-form/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/getting-started/detailed-guide/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/getting-started/get-started-using-claudie/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/hardening/hardening/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/http-proxy/http-proxy/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/input-manifest/api-reference/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/input-manifest/example/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/input-manifest/external-templates/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/input-manifest/gpu-example/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/input-manifest/providers/aws/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/input-manifest/providers/azure/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/input-manifest/providers/cloudflare/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/input-manifest/providers/gcp/ - 2024-10-18 + 2024-10-31 daily 
https://docs.claudie.io/v0.9.0/input-manifest/providers/genesiscloud/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/input-manifest/providers/hetzner/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/input-manifest/providers/oci/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/input-manifest/providers/on-prem/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/latency-limitations/latency-limitations/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/loadbalancing/loadbalancing-solution/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/monitoring/grafana/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/node-local-dns/node-local-dns/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/roadmap/roadmap/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/sitemap/sitemap/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/storage/storage-solution/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/troubleshooting/troubleshooting/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/update/update/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/use-cases/use-cases/ - 2024-10-18 + 2024-10-31 daily https://docs.claudie.io/v0.9.0/version-matrix/version-matrix/ - 2024-10-18 + 2024-10-31 daily \ No newline at end of file diff --git a/v0.9.0/sitemap.xml.gz b/v0.9.0/sitemap.xml.gz index 1c78546ff..f681447fc 100644 Binary files a/v0.9.0/sitemap.xml.gz and b/v0.9.0/sitemap.xml.gz differ diff --git a/v0.9.0/version-matrix/version-matrix/index.html b/v0.9.0/version-matrix/version-matrix/index.html index 66eeaf035..1d233358f 100644 --- a/v0.9.0/version-matrix/version-matrix/index.html +++ b/v0.9.0/version-matrix/version-matrix/index.html @@ -1502,6 +1502,11 @@

Version matrix