diff --git a/doc/source/deployment/configure.rst b/doc/source/deployment/configure.rst
index aa2ff1990..1744ad946 100644
--- a/doc/source/deployment/configure.rst
+++ b/doc/source/deployment/configure.rst
@@ -76,7 +76,7 @@ MariaDB, RabbitMQ, and PostgreSQL.
 The group `airship-openstack-control-workers` specifies the list of CaaS
 Platform worker nodes that make up the OpenStack control plane. The
 OpenStack control plane includes Keystone, Glance, Cinder, Nova, Neutron,
-Horizon, Heat, MariaDB, RabbitMQ and so on.
+Horizon, Heat, MariaDB, and RabbitMQ.
 
 The group `airship-openstack-compute-workers` defines the CaaS Platform worker
 nodes used as OpenStack Compute Nodes. Nova Compute, Libvirt, Open vSwitch (OVS)
@@ -93,7 +93,7 @@ See also
 .. tip::
 
   Do not add `localhost` as a host in your inventory.
-  It is a host specially considered by Ansible.
+  It is a host with special meaning to Ansible.
   If you want to create an inventory node for your local machine, add your
   machine's hostname inside your inventory, and specify this host variable:
   **ansible_connection: local**
@@ -128,10 +128,10 @@ For example:
 Configure for Kubernetes
 ------------------------
 
-socok8s relies on kubectl and Helm commands to configure your OpenStack
-deployment. You need to provide a `kubeconfig` file on the `deployer` node,
-in your workspace. You can fetch this file from the Velum UI on your
-SUSE CaaS Platform cluster.
+SUSE Containerized OpenStack relies on kubectl and Helm commands to configure
+your OpenStack deployment. You need to provide a `kubeconfig` file on the
+`deployer` node, in your workspace. You can fetch this file from the Velum UI
+on your SUSE CaaS Platform cluster.
 
 Configure the VIP that will be used for OpenStack service public endpoints
 --------------------------------------------------------------------------
@@ -162,6 +162,7 @@ For example:
 
   socok8s_dcm_vip: "192.168.51.35"
 
+.. _configurecloudscaleprofile:
 
 Configure Cloud Scale Profile
 -----------------------------
@@ -207,8 +208,8 @@ group vars.
 
 extravars can be used to override any deployment code. Use it at your own
 risk.
 
-socok8s is flexible, and allows you to override the value of any upstream Helm
-chart value with the appropriate overrides.
+SUSE Containerized OpenStack is flexible, and allows you to override the value
+of any upstream Helm chart value with the appropriate overrides.
 
 .. note ::
diff --git a/doc/source/deployment/deploy.rst b/doc/source/deployment/deploy.rst
index d22e1c3e7..0ea356370 100644
--- a/doc/source/deployment/deploy.rst
+++ b/doc/source/deployment/deploy.rst
@@ -136,7 +136,7 @@ Here is a sample output of the Shipyard `describe` command:
 Logs
 ++++
 
-To check Airship logs, run the Shipyard logs CLI command, for example,
+To check Airship logs, run the Shipyard logs CLI command, for example:
 
 .. code-block:: console
 
@@ -152,8 +152,8 @@ For example, to retrieve the test output from the Keystone Rally test, run:
 Run Developer Mode
 ------------------
 
-If you want to patch upstream Helm charts and/or build your own container
-images, you need to set the following environment variables before deployment:
+If you want to patch upstream Helm charts or build your own container images,
+you need to set the following environment variables before deployment:
 
 .. code-block:: console
diff --git a/doc/source/deployment/requirements.rst b/doc/source/deployment/requirements.rst
index 67d0b0c00..23cab7b53 100644
--- a/doc/source/deployment/requirements.rst
+++ b/doc/source/deployment/requirements.rst
@@ -10,7 +10,7 @@ requirements.
 Infrastructure
 --------------
 
-* The `Deployer` must run openSUSE Leap 15 or SUSE Linux Enterprise 15. See the page
+* The `Deployer` must run openSUSE Leap 15 or SUSE Linux Enterprise 15. See
   :ref:`setupdeployer` for required deployment tools and packages.
 
 .. note::
@@ -71,18 +71,19 @@ run and compute nodes where customer workloads are hosted.
 For a minimal cloud, you should plan one worker node for the control plane,
 and one or more worker nodes as OpenStack compute nodes.
 
-To ensure high availability, we recommend three worker nodes designated for
-the Airship and OpenStack control plane, and additional number of worker nodes
-allocated for compute.
+For a high availability (HA) cloud, we recommend three worker nodes designated
+for the Airship and OpenStack control plane, and additional worker nodes
+allocated for compute. For detailed information about scale profiles, see
+:ref:`configurecloudscaleprofile`.
 
 Network Requirements
 --------------------
 
-* CaaS Platform networking and spec
+* CaaS Platform networking
 
   Create necessary CaaS Platform networks before deploying SUSE Containerized
   OpenStack. Separating traffic by function is recommended but not required.
 
-* Storage Network and spec
+* Storage Network
 
   A separate storage network can be created to isolate storage traffic. This
   separate network should be present on the Caas Platform and ses_config.yml
   mon_host: section.
diff --git a/doc/source/deployment/ses-integration.rst b/doc/source/deployment/ses-integration.rst
index 9c2141f19..c73b22d26 100644
--- a/doc/source/deployment/ses-integration.rst
+++ b/doc/source/deployment/ses-integration.rst
@@ -37,20 +37,18 @@ SUSE Enterprise Storage Integration
     }
 
-For SES deployments that have version 5.5 and higher, there is a Salt runner
-that can create all the users and pools OpenStack services require. It also
-generates a yaml configuration that is needed to integrate with SUSE
-Containerized OpenStack Cloud. The integration runner creates separate users
-for Cinder, Cinder backup, and Glance. Both the Cinder and Nova services
-will have the same user, as Cinder needs access to create objects that Nova
-uses.
+For SES deployments using version 5.5 and higher, a Salt runner can create all
+the users and pools OpenStack services require. It also generates the YAML
+configuration needed to integrate with SUSE Containerized OpenStack.
+The integration runner creates separate users for Cinder, Cinder backup, and
+Glance. Both the Cinder and Nova services will have the same user, as Cinder
+needs access to create objects that Nova uses.
 
 Log in as root to run the SES 5.5 Salt runner on the salt admin host.
 
-root #
 .. code-block:: bash
 
-   salt-run --out=yaml openstack.integrate prefix=mycloud
+   # salt-run --out=yaml openstack.integrate prefix=mycloud
 
 The prefix parameter allows pools to be created with the specified prefix. In
 this way, multiple cloud deployments can use different users and pools on
diff --git a/doc/source/deployment/setup-deployer.rst b/doc/source/deployment/setup-deployer.rst
index 21c9df194..eb5997207 100644
--- a/doc/source/deployment/setup-deployer.rst
+++ b/doc/source/deployment/setup-deployer.rst
@@ -68,11 +68,12 @@ Deployer.
 Set up your workspace with the following steps:
 
   export SOCOK8S_WORKSPACE_BASEDIR=~/socok8s-workspace
 
-Cloning repository
------------------------
+Cloning the repository
+----------------------
 
-To get started, clone the socok8s GitHub repository. This repository uses
-submodules, so you need to get all the code to make sure the playbooks work.
+To get started, clone the `socok8s GitHub repository `_.
+This repository uses submodules, so you need to get all the code to make sure
+the playbooks work.
 
 ::
@@ -93,7 +94,7 @@ Platform worker node.
 
 .. note ::
 
-  1. To generate the key, you can use ssh-keygen -t rsa
+  1. To generate the key, use ssh-keygen -t rsa
 
   2. To copy the ssh key to each node, use the ssh-copy-id command, for
      example: ssh-copy-id root@192.168.122.1
diff --git a/doc/source/glossary.rst b/doc/source/glossary.rst
index c0b138acc..822cf1aad 100644
--- a/doc/source/glossary.rst
+++ b/doc/source/glossary.rst
@@ -17,6 +17,10 @@ Glossary
 
   Deployer
     openSUSE Leap 15 host used to deploy CCP.
 
+  LOCI
+    The official OpenStack project for building Lightweight Open Container
+    Initiative (OCI) compliant images of OpenStack projects.
+
   SES
     SUSE Enterprise Storage
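The inventory tip patched in configure.rst (do not use `localhost`; register your machine under its own hostname with `ansible_connection: local`) can be sketched as follows. The workspace path, file name, and the hostname `my-deployer` are illustrative assumptions, not part of the patch:

```shell
# Sketch: register the local machine under its own hostname with a local
# connection, instead of using `localhost`, which Ansible treats specially.
WORKSPACE=/tmp/socok8s-workspace          # hypothetical workspace path
mkdir -p "$WORKSPACE"

cat > "$WORKSPACE/hosts.yml" <<'EOF'
all:
  hosts:
    my-deployer:                # replace with your machine's hostname
      ansible_connection: local
EOF

# Count occurrences of the host variable we just wrote.
grep -c "ansible_connection: local" "$WORKSPACE/hosts.yml"   # prints 1
```

With this entry, plays targeting `my-deployer` run locally without SSH, which is exactly what the tip's `ansible_connection: local` variable is for.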
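Similarly, the override mechanism configure.rst describes (extravars in the workspace overriding deployment defaults, at your own risk) can be sketched like this. Only the `socok8s_dcm_vip` variable and its example value appear in the documentation above; the `env/extravars` location is an assumed convention borrowed from ansible-runner-style workspaces:

```shell
# Hedged sketch: place user overrides, such as the documented
# socok8s_dcm_vip variable, in the workspace so the deployment playbooks
# can pick them up. The env/extravars path is an assumption.
WORKSPACE=/tmp/socok8s-workspace
mkdir -p "$WORKSPACE/env"

cat > "$WORKSPACE/env/extravars" <<'EOF'
---
socok8s_dcm_vip: "192.168.51.35"   # example value from configure.rst
EOF

# Show the override we just recorded.
grep socok8s_dcm_vip "$WORKSPACE/env/extravars"
```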