From 8c0cbbbb310bfaff01509585c2b0036631d8a065 Mon Sep 17 00:00:00 2001 From: Mohammed Naser Date: Fri, 4 Aug 2023 10:30:43 -0400 Subject: [PATCH] docs: refactor into tabs --- docs/user/getting-started.md | 274 ++++++++++++++++++----------------- mkdocs.yml | 4 + 2 files changed, 143 insertions(+), 135 deletions(-) diff --git a/docs/user/getting-started.md b/docs/user/getting-started.md index 38cfcb49..0b0fa8cb 100644 --- a/docs/user/getting-started.md +++ b/docs/user/getting-started.md @@ -36,172 +36,173 @@ You can create clusters using several different methods which all end up using the Magnum API. You can either use the OpenStack CLI, OpenStack Horizon dashboard, Terraform, Ansible or the Magnum API directly. -#### OpenStack CLI - -The OpenStack CLI is the easiest way to create a Kubernetes cluster from your -terminal directly. You can use the `openstack coe cluster create` command to -create a Kubernetes cluster with the Cluster API driver for Magnum. - -Before you get started, you'll have to make sure that you have the cluster -templates you want to use available in your environment. 
You can create them -using the OpenStack CLI: - -```bash -for version in v1.23.17 v1.24.15 v1.25.11 v1.26.6 v1.27.3; do \ - curl -LO https://object-storage.public.mtl1.vexxhost.net/swift/v1/a91f106f55e64246babde7402c21b87a/magnum-capi/ubuntu-2204-kube-${version}.qcow2; \ - openstack image create ubuntu-2204-kube-${version} --disk-format=qcow2 --container-format=bare --property os_distro=ubuntu --file=ubuntu-2204-kube-${version}.qcow2; \ - openstack coe cluster template create \ - --image $(openstack image show ubuntu-2204-kube-${version} -c id -f value) \ - --external-network public \ - --dns-nameserver 8.8.8.8 \ - --master-lb-enabled \ - --master-flavor m1.medium \ - --flavor m1.medium \ - --network-driver calico \ - --docker-storage-driver overlay2 \ - --coe kubernetes \ - --label kube_tag=${version} \ - k8s-${version}; -done; -``` +=== "OpenStack CLI" -Once you've got a cluster template, you can create a cluster using the OpenStack -CLI: + The OpenStack CLI is the easiest way to create a Kubernetes cluster from + your terminal directly. You can use the `openstack coe cluster create` + command to create a Kubernetes cluster with the Cluster API driver for Magnum. -```console -$ openstack coe cluster create --cluster-template -``` + Before you get started, you'll have to make sure that you have the cluster + templates you want to use available in your environment. 
+    You can create
+    them using the OpenStack CLI:

-You'll be able to view teh status of the deployment using the OpenStack CLI:
+    ```bash
+    for version in v1.23.17 v1.24.15 v1.25.11 v1.26.6 v1.27.3; do \
+        curl -LO https://object-storage.public.mtl1.vexxhost.net/swift/v1/a91f106f55e64246babde7402c21b87a/magnum-capi/ubuntu-2204-kube-${version}.qcow2; \
+        openstack image create ubuntu-2204-kube-${version} --disk-format=qcow2 --container-format=bare --property os_distro=ubuntu --file=ubuntu-2204-kube-${version}.qcow2; \
+        openstack coe cluster template create \
+            --image $(openstack image show ubuntu-2204-kube-${version} -c id -f value) \
+            --external-network public \
+            --dns-nameserver 8.8.8.8 \
+            --master-lb-enabled \
+            --master-flavor m1.medium \
+            --flavor m1.medium \
+            --network-driver calico \
+            --docker-storage-driver overlay2 \
+            --coe kubernetes \
+            --label kube_tag=${version} \
+            k8s-${version};
+    done;
+    ```

-```console
-$ openstack coe cluster show
-```
+    Once you've got a cluster template, you can create a cluster using the
+    OpenStack CLI:
+
+    ```console
+    $ openstack coe cluster create --cluster-template
+    ```
+
+    You'll be able to view the status of the deployment using the OpenStack CLI:

-#### OpenStack Horizon
+    ```console
+    $ openstack coe cluster show
+    ```

-The OpenStack Horizon dashboard is the easiest way to create a Kubernetes using
-a simple web interface. In order to get started, you can review the list of
-current cluster templates in your environment by navigating using the left
-sidebar to *Project* > *Container Infra* > *Cluster Templates*.
+=== "OpenStack Horizon"

-![Cluster template list](../static/user/getting-started/cluster-template-list.png)
+    The OpenStack Horizon dashboard is the easiest way to create a Kubernetes
+    cluster using a simple web interface.
+    In order to get started, you can review the
+    list of current cluster templates in your environment by navigating using
+    the left sidebar to *Project* > *Container Infra* > *Cluster Templates*.

-In order to launch an new cluster, you will need to navigate to *Project* >
-*Container Infra* > *Clusters* and click on the *Launch Cluster* button.
+    ![Cluster template list](../static/user/getting-started/cluster-template-list.png)

-![Cluster list with create button](../static/user/getting-started/cluster-list-create.png)
+    In order to launch a new cluster, you will need to navigate to *Project* >
+    *Container Infra* > *Clusters* and click on the *Launch Cluster* button.

-There is a set of required fields that you will need to fill out in order to
-launch a cluster, the first of which are related to it's basic configuration,
-the required fields are:
+    ![Cluster list with create button](../static/user/getting-started/cluster-list-create.png)

-* **Cluster Name**
-  The name of the cluster that will be created.
+    There is a set of required fields that you will need to fill out in order
+    to launch a cluster. The first group covers its basic configuration:

-* **Cluster Template**
-  The cluster template that will be used to create the cluster.
+    * **Cluster Name**
+      The name of the cluster that will be created.

-* **Keypair**
-  The SSH key pair that will be used to access the cluster.
+    * **Cluster Template**
+      The cluster template that will be used to create the cluster.

-In this example, we're going to create a cluster with the name of `test-cluster`,
-running Kuberentes 1.27.3 so using the `k8s-v1.27.3` cluster template, and using
-the `admin_key` SSH key pair.
+    * **Keypair**
+      The SSH key pair that will be used to access the cluster.
-![Cluster create information](../static/user/getting-started/cluster-create-info.png)
+    In this example, we're going to create a cluster with the name of
+    `test-cluster`, running Kubernetes 1.27.3 using the `k8s-v1.27.3`
+    cluster template, and using the `admin_key` SSH key pair.

-The next step is deciding on the size of the cluster and selecting if auto scaling
-will be enabled for the cluster. The required fields are:
+    ![Cluster create information](../static/user/getting-started/cluster-create-info.png)

-* **Number of Master Nodes**
-  The number of master nodes that will be created in the cluster.
+    The next step is deciding on the size of the cluster and selecting whether
+    auto scaling will be enabled for the cluster. The required fields are:

-* **Flavor of Master Nodes**
-  The flavor of the master nodes that will be created in the cluster.
+    * **Number of Master Nodes**
+      The number of master nodes that will be created in the cluster.

-* **Number of Worker Nodes**
-  The number of worker nodes that will be created in the cluster.
+    * **Flavor of Master Nodes**
+      The flavor of the master nodes that will be created in the cluster.

-* **Flavor of Worker Nodes**
-  The flavor of the worker nodes that will be created in the cluster.
+    * **Number of Worker Nodes**
+      The number of worker nodes that will be created in the cluster.

-In addition, if you want to enable auto scaling, you will need to provide the
-following information:
+    * **Flavor of Worker Nodes**
+      The flavor of the worker nodes that will be created in the cluster.

-* **Auto-scale Worker Nodes**
-  Whether or not to enable auto scaling for the worker nodes.
+    In addition, if you want to enable auto scaling, you will need to provide
+    the following information:

-* **Minimum Number of Worker Nodes**
-  The minimum number of worker nodes that will be created in the cluster, the
-  auto scaler will not scale below this number even if the cluster is under
-  utilized.
+    * **Auto-scale Worker Nodes**
+      Whether or not to enable auto scaling for the worker nodes.

-* **Maximum Number of Worker Nodes**
-  The maximum number of worker nodes that will be created in the cluster, the
-  auto scaler will not scale above this number even if the cluster is over
-  utilized.
+    * **Minimum Number of Worker Nodes**
+      The minimum number of worker nodes that will be created in the cluster;
+      the auto scaler will not scale below this number even if the cluster is
+      underutilized.

-In this example, we're going to create a cluster with 3 master node and 4 worker
-nodes, using the `m1.medium` flavor for both the master and worker nodes, and we
-will enable auto scaling with a minimum of 2 worker nodes and a maximum of 10
-worker nodes.
+    * **Maximum Number of Worker Nodes**
+      The maximum number of worker nodes that will be created in the cluster;
+      the auto scaler will not scale above this number even if the cluster is
+      overutilized.

-![Cluster create size](../static/user/getting-started/cluster-create-size.png)
+    In this example, we're going to create a cluster with 3 master nodes and
+    4 worker nodes, using the `m1.medium` flavor for both the master and worker
+    nodes, and we will enable auto scaling with a minimum of 2 worker nodes and
+    a maximum of 10 worker nodes.

-The next step is managing the network configuration of the cluster. The required
-fields are:
+    ![Cluster create size](../static/user/getting-started/cluster-create-size.png)

-* **Enable Load Balancer for Master Nodes**
-  This is required to be **enabled** for the Cluster API driver for Magnum to
-  work properly.
+    The next step is managing the network configuration of the cluster. The
+    required fields are:

-* **Create New Network**
-  This will determine if a new network will be created for the cluster or if an
-  existing network will be used. It's useful to use an existing network if you
-  want to attach the cluster to an existing network with other resources.
+ * **Enable Load Balancer for Master Nodes** + This is required to be **enabled** for the Cluster API driver for Magnum + to work properly. -* **Cluster API** - This setting controls if the API will get a floating IP address assigned to - it. You can set this to _Accessible on private network only_ if you are using - an existing network and don't want to expose the API to the public internet. - Otherwise, you should set it to _Accessible on the public internet_ to allow - access to the API from the external network. + * **Create New Network** + This will determine if a new network will be created for the cluster or if + an existing network will be used. It's useful to use an existing network + if you want to attach the cluster to an existing network with other + resources. -In this example, we're going to make sure we have the load balancer enabled for -the master nodes, we're going to create a new network for the cluster, and we're -going to make sure that the API is accessible on the public internet. + * **Cluster API** + This setting controls if the API will get a floating IP address assigned + to it. You can set this to _Accessible on private network only_ if you + are using an existing network and don't want to expose the API to the + public internet. Otherwise, you should set it to _Accessible on the public + internet_ to allow access to the API from the external network. -![Cluster create network](../static/user/getting-started/cluster-create-network.png) + In this example, we're going to make sure we have the load balancer enabled + for the master nodes, we're going to create a new network for the cluster, + and we're going to make sure that the API is accessible on the public internet. -For the next step, we need to decide if we want to enable auto-healing for the -cluster which automatically detects nodes that are unhealthy and replaces them -with new nodes. 
-The required fields are:

-* **Automatically Repair Unhealthy Nodes**
-  Whether or not to enable auto-healing for the cluster.
+    ![Cluster create network](../static/user/getting-started/cluster-create-network.png)

-In this example, we're going to enable auto-healing for the cluster since it
-will help keep the cluster healthy.
+    For the next step, we need to decide whether to enable auto-healing for
+    the cluster, which automatically detects nodes that are unhealthy and
+    replaces them with new nodes. The required fields are:

-![Cluster create management](../static/user/getting-started/cluster-create-mgmt.png)
+    * **Automatically Repair Unhealthy Nodes**
+      Whether or not to enable auto-healing for the cluster.

-Finally, you can override labels for the cluster in the _Advanced_ section, we
-do not recommend changing these unless you know what you're doing. Once you're
-ready, you can click on the _Submit_ button to create the cluster. The page
-will show your cluster being created.
+    In this example, we're going to enable auto-healing for the cluster since
+    it will help keep the cluster healthy.

-![Cluster list after creation](../static/user/getting-started/cluster-postcreate-list.png)
+    ![Cluster create management](../static/user/getting-started/cluster-create-mgmt.png)

-If you click on the cluster, you'll be able to track the progress of the cluster
-creation, more specifically in the _Status Reason_ field, seen below:
+    Finally, you can override labels for the cluster in the _Advanced_ section;
+    we do not recommend changing these unless you know what you're doing. Once
+    you're ready, you can click on the _Submit_ button to create the cluster.
+    The page will show your cluster being created.
-If you click on the cluster, you'll be able to track the progress of the cluster -creation, more specifically in the _Status Reason_ field, seen below: + ![Cluster list after creation](../static/user/getting-started/cluster-postcreate-list.png) -![Cluster show after creation](../static/user/getting-started/cluster-postcreate-show.png) + If you click on the cluster, you'll be able to track the progress of the + cluster creation, more specifically in the _Status Reason_ field, seen below: -Once the cluster is created, you'll be able to see the cluster details, -including the health status as well: + ![Cluster show after creation](../static/user/getting-started/cluster-postcreate-show.png) -![Cluster show after creation](../static/user/getting-started/cluster-created-show.png) + Once the cluster is created, you'll be able to see the cluster details, + including the health status as well: + + ![Cluster show after creation](../static/user/getting-started/cluster-created-show.png) At this point, you should have a ready cluster and you can proceed to the [Accessing](#accessing) section to learn how to access the cluster. @@ -212,14 +213,15 @@ In order to access the Kubernetes cluster, you will have to request for a `KUBECONFIG` file generated by the Cluster API driver for Magnum. You can do this using a few several ways, we cover a few of them in this section. -#### OpenStack CLI +=== "OpenStack CLI" -You can use the OpenStack CLI to request a `KUBECONFIG` file for a Kubernetes -cluster. You can do this using the `openstack coe cluster config` command: + You can use the OpenStack CLI to request a `KUBECONFIG` file for a + Kubernetes cluster. 
You can do this using the `openstack coe cluster config` + command: -```console -$ openstack coe cluster config -``` + ```console + $ openstack coe cluster config + ``` ### Upgrading @@ -238,12 +240,14 @@ In order to upgrade a cluster, you must have a cluster template pointing at the image for the new Kubernetes version and the `kube_tag` label must be updated to point at the new Kubernetes version. -Once you have this cluster template, you can trigger an upgrade by using the -OpenStack CLI: +=== "OpenStack CLI" -```console -$ openstack coe cluster upgrade -``` + Once you have this cluster template, you can trigger an upgrade by using the + OpenStack CLI: + + ```console + $ openstack coe cluster upgrade + ``` ### Node group role Roles can be used to show the purpose of a node group, and multiple node groups can be given the same role if they share a common purpose. diff --git a/mkdocs.yml b/mkdocs.yml index 64f6d2d1..5b7e47a7 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -2,12 +2,16 @@ site_name: Cluster API driver for Magnum theme: name: material features: + - content.tabs.link - navigation.tracking markdown_extensions: - admonition - def_list - pymdownx.details - pymdownx.superfences + - pymdownx.tabbed: + alternate_style: true plugins: - literate-nav: nav_file: SUMMARY.md +copyright: Cluster API driver for Magnum is a community effort led by VEXXHOST, Inc.
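Reviewer note (not part of the patch): the per-version image-and-template loop in the tabbed CLI section is dense to eyeball, so here is a small Python sketch that generates the same commands for inspection. The version list, image URL, and flags mirror the bash block in the diff; the `commands_for` helper name is ours, and for brevity the sketch passes the image name directly rather than resolving its ID with `openstack image show` as the bash loop does.

```python
# Generate the download/import/template commands from the tabbed CLI docs.
# URL, versions, and flags come from the patch; the helper itself is illustrative.
VERSIONS = ["v1.23.17", "v1.24.15", "v1.25.11", "v1.26.6", "v1.27.3"]
BASE_URL = (
    "https://object-storage.public.mtl1.vexxhost.net/swift/v1/"
    "a91f106f55e64246babde7402c21b87a/magnum-capi"
)


def commands_for(version: str) -> list[str]:
    """Return the three commands run for one Kubernetes version."""
    image = f"ubuntu-2204-kube-{version}"
    return [
        # 1. Download the prebuilt qcow2 image.
        f"curl -LO {BASE_URL}/{image}.qcow2",
        # 2. Import it into Glance.
        (
            f"openstack image create {image} --disk-format=qcow2 "
            f"--container-format=bare --property os_distro=ubuntu "
            f"--file={image}.qcow2"
        ),
        # 3. Create the matching cluster template.
        (
            "openstack coe cluster template create "
            f"--image {image} --external-network public "
            "--dns-nameserver 8.8.8.8 --master-lb-enabled "
            "--master-flavor m1.medium --flavor m1.medium "
            "--network-driver calico --docker-storage-driver overlay2 "
            f"--coe kubernetes --label kube_tag={version} k8s-{version}"
        ),
    ]


if __name__ == "__main__":
    for v in VERSIONS:
        print("\n".join(commands_for(v)))
```

Printing the commands instead of executing them makes it easy to diff the generated list against the bash loop when reviewing the tab conversion.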