RSP Deployment instructions on OpenStack with Magnum
Assumptions:
- We are running on an (admin) node with Docker installed
- The floating IP that our configuration uses as the LoadBalancer is available (192.41.122.16 for our production service)
The Phalanx configuration for the latest version used for the RSP:UK can be found here:
https://github.com/lsst-uk/phalanx
user@admin-node
Place your OpenStack clouds.yaml under ${HOME:?}/clouds-iris.yaml, or elsewhere (but note that the ansibler container volume path below should then change)
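For reference, a minimal clouds-iris.yaml might look like the following sketch, assuming application credentials are used; the cloud name iris and every other value shown are placeholders to substitute with your own:
cat > "${HOME:?}/clouds-iris.yaml" << 'EOF'
clouds:
  iris:
    auth:
      auth_url: https://keystone.example.org:5000/v3
      application_credential_id: "replace-me"
      application_credential_secret: "replace-me"
    auth_type: v3applicationcredential
    region_name: RegionOne
EOF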
user@admin-node
Place your OpenStack RC file under ${HOME:?}/RSP-openrc.sh, or elsewhere (but note that the ansibler container volume path below should then change)
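For reference, the RC file simply exports the standard OpenStack auth variables; a minimal sketch with placeholder values:
export OS_AUTH_URL=https://keystone.example.org:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_NAME=replace-me
export OS_USERNAME=replace-me
export OS_PASSWORD=replace-me
export OS_REGION_NAME=RegionOne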
user@admin-node
clientname=ansibler-iris
sudo docker run \
--rm \
--tty \
--interactive \
--name "${clientname:?}" \
--hostname "${clientname:?}" \
--volume "${HOME:?}/clouds-iris.yaml:/etc/openstack/clouds.yaml:ro,z" \
--volume "${HOME:?}/RSP-openrc.sh:/etc/openstack/RSP-openrc.sh:ro,z" \
ghcr.io/wfau/atolmis/ansible-client:2022.07.25 \
bash
root@ansibler-iris
source /etc/openstack/RSP-openrc.sh
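To check that the credentials work before going further, any read-only command will do, for example:
openstack coe cluster list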
root@ansibler-iris
git clone https://github.com/stvoutsin/phlx-installer
root@ansibler-iris
If you haven't already, optionally create the cluster template:
pushd phlx-installer/scripts/openstack/
./create-magnum-template.sh stv-template-large
popd
The template we've set up and are using has the following attributes (a CLI sketch of the equivalent create call follows the list):
> Master Flavor ID: qserv-utility
> Volume Driver: cinder
> Image ID: fedora-coreos-35.20211203.3.0
> Network Driver: calico
> boot_volume_size: 50
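For reference, a roughly equivalent template could be created directly with the openstack client. This is a sketch only; deployment-specific flags such as the external network, keypair, and worker flavor are omitted:
openstack coe cluster template create stv-template-large \
    --coe kubernetes \
    --image fedora-coreos-35.20211203.3.0 \
    --master-flavor qserv-utility \
    --network-driver calico \
    --volume-driver cinder \
    --labels boot_volume_size=50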
Note: You may need to modify the parameters in create-magnum-cluster.sh (e.g. keypair, template name, number of worker nodes)
root@ansibler-iris
pushd phlx-installer/scripts/openstack/
./create-magnum-cluster.sh stv-rsp-prod-blue
popd
Wait until the cluster has been created; its status can be polled as shown below.
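The cluster is ready once its status reaches CREATE_COMPLETE:
openstack coe cluster show stv-rsp-prod-blue -c status -f value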
root@ansibler-iris
pushd phlx-installer/scripts/openstack/
./open-ports.sh stv-rsp-prod-blue
popd
Note: We need to ensure that the IP address that we will point the Load Balancer to is open; the rules can be checked as shown below.
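To verify, the rules on the cluster's node security group can be listed with the openstack client; the exact group name varies, so grep for the cluster name first (${secgroup_name:?} below is a placeholder):
openstack security group list | grep stv-rsp-prod-blue
openstack security group rule list "${secgroup_name:?}"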
root@ansibler-iris
openstack coe cluster config "${cluster_name:?}"
This will create a kubeconfig file named config under the current directory. Grab a copy of it.
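To confirm the kubeconfig works, assuming kubectl is available (it may not be in every image):
export KUBECONFIG="$(pwd)/config"
kubectl get nodes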
Exit the Ansibler container
exit
First, make sure we have exited the ansibler container. Then clone the phlx-installer repository on the admin node:
user@admin-node
git clone https://github.com/stvoutsin/phlx-installer
user@admin-node
Copy the kubeconfig grabbed earlier into: phlx-installer/kube/config (see the example below)
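For example, assuming the config grabbed from the ansibler container was saved to the home directory:
cp "${HOME:?}/config" phlx-installer/kube/config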
user@admin-node
sudo docker build phlx-installer/ --tag installer
user@admin-node
export VAULT_ROLE_ID=
export VAULT_SECRET_ID=
export VAULT_ADDR=
export REPO=
export BRANCH=
export ENVIRONMENT=
export CUR_DIRECTORY=/home/ubuntu # Or whichever directory you have cloned to
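For illustration only, the values take this general shape; everything below is a placeholder except REPO, which here points at the RSP:UK Phalanx fork linked above:
export VAULT_ROLE_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export VAULT_SECRET_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
export VAULT_ADDR=https://vault.example.org:8200
export REPO=https://github.com/lsst-uk/phalanx
export BRANCH=master
export ENVIRONMENT=rsp-uk-prod
export CUR_DIRECTORY=/home/ubuntu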
user@admin-node
sudo docker run \
-it \
--hostname installer \
--env REPO=${REPO:?} \
--env VAULT_ADDR=${VAULT_ADDR:?} \
--env VAULT_ROLE_ID=${VAULT_ROLE_ID:?} \
--env VAULT_SECRET_ID=${VAULT_SECRET_ID:?} \
--env BRANCH=${BRANCH:?} \
--env ENVIRONMENT=${ENVIRONMENT:?} \
--volume "${CUR_DIRECTORY:?}/phlx-installer/certs:/etc/kubernetes/certs" \
--volume "${CUR_DIRECTORY:?}/phlx-installer/kube/config:/root/.kube/config" \
--volume "${CUR_DIRECTORY:?}/phlx-installer/scripts/install.sh:/root/install.sh" \
--volume "${CUR_DIRECTORY:?}/phlx-installer/scripts/helper.sh:/root/helper.sh" \
installer
The load balancer pools can be found in the Horizon dashboard, or using the openstack client (see the CLI sketch after the health monitor settings below).
Navigate to the dynamically created load balancer (named something like: kube_service_065a6815-cea1-42c9-b825-bee91c6b3591_ingress-nginx_ingress-nginx-controller).
For pool TCP_443_pool, create a Health Monitor with:
- Delay: 5
- Timeout: 5
- Max Retries: 3
- Max Retries Down: 3
- HTTP Method: GET
- URL Path: /
- Expected Codes: 400
For pool TCP_80_pool, create a Health Monitor with:
- Delay: 5
- Timeout: 5
- Max Retries: 3
- Max Retries Down: 3
- HTTP Method: GET
- URL Path: /
- Expected Codes: 404
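Equivalently, the health monitors can be created with the openstack loadbalancer client. A sketch; the pool IDs are placeholders and must be looked up first (e.g. with openstack loadbalancer pool list):
openstack loadbalancer healthmonitor create \
    --delay 5 \
    --timeout 5 \
    --max-retries 3 \
    --max-retries-down 3 \
    --type HTTP \
    --http-method GET \
    --url-path / \
    --expected-codes 400 \
    "${tcp_443_pool_id:?}"

openstack loadbalancer healthmonitor create \
    --delay 5 \
    --timeout 5 \
    --max-retries 3 \
    --max-retries-down 3 \
    --type HTTP \
    --http-method GET \
    --url-path / \
    --expected-codes 404 \
    "${tcp_80_pool_id:?}"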