RSP Deployment instructions on OpenStack with Magnum
Assumptions:
- We are running on an (admin) node with Docker installed
- The floating IP that our configuration uses for the load balancer is available (192.41.122.16 for our production service); a quick check is sketched below
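To confirm the floating IP is available before starting, it can be looked up with the openstack client; a minimal sketch, run from wherever the openstack client and project credentials are already configured:
openstack floating ip show 192.41.122.16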
The Phalanx configuration for the latest version used for the RSP:UK can be found here:
https://github.com/lsst-uk/phalanx
user@admin-node
Place your OpenStack clouds.yaml under ${HOME:?}/clouds-iris.yaml, or elsewhere (but note that the ansibler container volume path below must then be changed accordingly)
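For reference, a minimal clouds-iris.yaml can be written like this. This is a sketch only: the cloud name iris, the Keystone URL, the region, and the credential values are hypothetical placeholders to be replaced with the values for your project:
user@admin-node
cat > "${HOME:?}/clouds-iris.yaml" <<'EOF'
clouds:
  iris:
    auth:
      auth_url: https://keystone.example.org:5000/v3
      username: "REPLACE_ME"
      password: "REPLACE_ME"
      project_name: "REPLACE_ME"
      user_domain_name: "Default"
      project_domain_name: "Default"
    identity_api_version: 3
    interface: public
    region_name: RegionOne
EOF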
user@admin-node
Place your OpenStack RC file under ${HOME:?}/RSP-openrc.sh, or elsewhere (but note that the ansibler container volume path below must then be changed accordingly)
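RSP-openrc.sh is the standard OpenStack RC file that can be downloaded from the Horizon dashboard; a trimmed sketch, with all values as hypothetical placeholders:
user@admin-node
cat > "${HOME:?}/RSP-openrc.sh" <<'EOF'
export OS_AUTH_URL=https://keystone.example.org:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_NAME="REPLACE_ME"
export OS_PROJECT_DOMAIN_ID="default"
export OS_USER_DOMAIN_NAME="Default"
export OS_USERNAME="REPLACE_ME"
export OS_PASSWORD="REPLACE_ME"
export OS_REGION_NAME="RegionOne"
export OS_INTERFACE=public
EOF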
user@admin-node
clientname=ansibler-iris
sudo docker run \
--rm \
--tty \
--interactive \
--name "${clientname:?}" \
--hostname "${clientname:?}" \
--volume "${HOME:?}/clouds-iris.yaml:/etc/openstack/clouds.yaml:ro,z" \
--volume "${HOME:?}/RSP-openrc.sh:/etc/openstack/RSP-openrc.sh:ro,z" \
ghcr.io/wfau/atolmis/ansible-client:2022.07.25 \
bash
root@ansibler-iris
source /etc/openstack/RSP-openrc.sh
root@ansibler-iris
git clone https://github.com/stvoutsin/phlx-installer
root@ansibler-iris
If you haven't already, optionally create a cluster template:
pushd phlx-installer/scripts/openstack/
./create-magnum-template.sh stv-template-large
popd
The template we've set up and are using has the following attributes (see the CLI sketch after the list):
> Master Flavor ID: qserv-utility
> Volume Driver: cinder
> Image ID: fedora-coreos-35.20211203.3.0
> Network Driver: calico
> boot_volume_size: 50
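For reference, a template with roughly these attributes could also be created directly with the openstack client. This is a sketch rather than the exact contents of create-magnum-template.sh; the worker flavor, keypair, and external network names are hypothetical placeholders:
root@ansibler-iris
# Flavor, keypair and external-network values below are placeholders:
openstack coe cluster template create stv-template-large \
--coe kubernetes \
--image fedora-coreos-35.20211203.3.0 \
--master-flavor qserv-utility \
--flavor qserv-worker \
--keypair mykeypair \
--external-network external \
--volume-driver cinder \
--network-driver calico \
--labels boot_volume_size=50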
Note: You may need to modify the parameters in create-magnum-cluster.sh (e.g. keypair, template name, number of worker nodes)
root@ansibler-iris
pushd phlx-installer/scripts/openstack/
./create-magnum-cluster.sh stv-rsp-prod-blue
popd
Wait until the cluster has been created
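The creation progress can be followed with the openstack client (a quick sketch; the status should move from CREATE_IN_PROGRESS to CREATE_COMPLETE):
root@ansibler-iris
openstack coe cluster list
openstack coe cluster show stv-rsp-prod-blue -c status -c health_status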
root@ansibler-iris
pushd phlx-installer/scripts/openstack/
./open-ports.sh stv-rsp-prod-blue
popd
Note: We need to ensure that the address the load balancer will be pointed at is reachable, i.e. that the relevant ports are open
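open-ports.sh takes care of opening these, but the security group rules can also be inspected or added manually. A hedged sketch, assuming the cluster nodes share a Magnum-created security group whose name below is a hypothetical placeholder:
root@ansibler-iris
openstack security group list
openstack security group rule list stv-rsp-prod-blue-secgroup
# Open HTTP/HTTPS if they are not already allowed (group name is a placeholder):
openstack security group rule create --protocol tcp --dst-port 80 stv-rsp-prod-blue-secgroup
openstack security group rule create --protocol tcp --dst-port 443 stv-rsp-prod-blue-secgroup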
root@ansibler-iris
openstack coe cluster config stv-rsp-prod-blue
This will create a kubeconfig file named config in the current directory. Grab a copy of that file.
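Because the ansibler container was started with --rm it disappears on exit, so copy the config out while it is still running. One way, from a second terminal on the admin node (a sketch; /root is an assumption about where the command above was run, and rsp-kube-config is a hypothetical local name):
user@admin-node
sudo docker cp ansibler-iris:/root/config "${HOME:?}/rsp-kube-config"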
Exit the Ansibler container
exit
First make sure we have exited the ansibler container. Then clone the phlx-installer repository on the admin node:
user@admin-node
git clone https://github.com/stvoutsin/phlx-installer
user@admin-node
> Copy the kubeconfig obtained earlier into: phlx-installer/kube/config
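For example, reusing the hypothetical rsp-kube-config copy from the earlier step:
mkdir -p phlx-installer/kube
cp "${HOME:?}/rsp-kube-config" phlx-installer/kube/config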
user@admin-node
sudo docker build phlx-installer/ --tag installer
user@admin-node
export VAULT_TOKEN=YOUR_VAULT_TOKEN
export REPO=YOUR_PHALANX_REPO
export BRANCH=YOUR_PHALANX_BRANCH
export ENVIRONMENT=YOUR_PHALANX_ENVIRONMENT
export CUR_DIRECTORY=/home/ubuntu # Or whichever directory you have cloned to
export VAULT_TOKEN_LEASE_DURATION=2592000 # 30 days, in seconds
user@admin-node
sudo docker run \
-it \
--hostname installer \
--env REPO=${REPO:?} \
--env VAULT_TOKEN=${VAULT_TOKEN:?} \
--env BRANCH=${BRANCH:?} \
--env ENVIRONMENT=${ENVIRONMENT:?} \
--env VAULT_TOKEN_LEASE_DURATION=${VAULT_TOKEN_LEASE_DURATION:?} \
--volume "${CUR_DIRECTORY:?}/phlx-installer/certs:/etc/kubernetes/certs" \
--volume "${CUR_DIRECTORY:?}/phlx-installer/kube/config:/root/.kube/config" \
--volume "${CUR_DIRECTORY:?}/phlx-installer/scripts/install.sh:/root/install.sh" \
--volume "${CUR_DIRECTORY:?}/phlx-installer/scripts/helper.sh:/root/helper.sh" \
installer
Note: Ensure that the nodes listed in the OpenStack load balancer pool that was created each have a replica of the NGINX Ingress Controller running.
The load balancer pool can be found in the Horizon dashboard, or using the openstack client.
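With the openstack client (the Octavia load-balancer plugin is needed), the pool and its members can be listed like this; the pool identifier is a placeholder taken from the list output:
openstack loadbalancer list
openstack loadbalancer pool list
openstack loadbalancer member list <pool-id>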
To get the list of NGINX ingress controller pods currently running:
kubectl get pods -o wide -n ingress-nginx