Docker in production
The VM provided to us by the VIP IT department is a member of a Docker Swarm, meaning that we can connect to the swarm remotely with any of our machines and inspect the services inside. This allows anyone on our team to interact with the live production stack and communicate with the services deployed to it directly. In the future, we can add new nodes (new physical hosts running Docker) to our Swarm, to have high-availability deployments over multiple machines.
First, have someone on the team add you to our Docker Hub organization. You'll need a Docker ID to do so; create one at hub.docker.com if you don't have one, and tell whoever is in charge what your Docker ID is. This will be needed for any deployments, as you'll be pushing images to our private repositories as part of the deployment process.
If you're using a version of Docker on Windows or macOS, you can sign into your Docker ID with the Docker GUI. On macOS, this lives in your menu bar, whereas on Windows, it (probably) lives in your system tray. You'll then be able to switch to the gatechswapr organization in the same UI, allowing you to open a connection to the Docker swarm.
If you want to configure your terminal session to use the Docker swarm, there is a convenient script called prod-docker.sh in the root of the provisioning repository. Run:
source prod-docker.sh
This script sets some Docker configuration environment variables to communicate with our production swarm locally. Note that you may not be able to connect off-campus without the use of a VPN.
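The script itself lives in the provisioning repo; as a rough sketch, a script like this typically exports the standard Docker client variables (the manager address and certificate path below are placeholders, not the real values):
export DOCKER_HOST=tcp://swapr.example.edu:2376   # placeholder address of the swarm manager
export DOCKER_TLS_VERIFY=1                        # require TLS when talking to the remote daemon
export DOCKER_CERT_PATH=$HOME/.docker/swapr       # placeholder path to the client certificates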
You'll then be able to use regular docker commands to communicate with our swarm. This connection only persists as long as your terminal session is active. To use your local Docker host, you can either start a new terminal session, or unset any environment variables set in prod-docker.sh.
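For example, assuming the script sets the standard client variables sketched above:
unset DOCKER_HOST DOCKER_TLS_VERIFY DOCKER_CERT_PATH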
To test your connectivity to the swarm, run the following command after connecting, and verify the output. You should see at least one node available in our swarm.
$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
xxxxxxxxxxxxxxxxxxxxxxxxx *   swapr      Ready    Active         Leader
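You can also list the services currently running on the swarm as a quick sanity check that the stack is up:
docker service ls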
The first step of any deployment is to test, of course! Make sure that all existing tests pass with the new code, and that new tests have been written to cover it.
Container images should be built from the verified-working release code for all services with updates, and then pushed to their respective repositories in our Docker Hub organization.
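For reference, the underlying flow is an ordinary Docker build and push; the repository name and build context below are purely illustrative:
docker build -t gatechswapr/backend:latest ./backend   # illustrative image name and build directory
docker push gatechswapr/backend:latest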
For most purposes, you can use swaprcli to build and push images without having to memorize Docker commands. Make sure to read the guide on swaprcli to learn how to install it before using it.
From the provisioning directory:
python -m swaprcli images build
This command will build images for all containers that are not pulled from public Docker Hub repositories. Make sure to read the output and look for any errors that may have occurred when building container images. Troubleshoot any errors before pushing the images.
To push images:
python -m swaprcli images push
This command will push the latest local builds of container images to our Docker Hub repositories.
Make sure you're connected to our Docker Swarm as shown at the beginning of this page.
To deploy our stack, run from the provisioning directory:
python -m swaprcli production deploy
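Under the hood, a swarm deployment of this kind comes down to a docker stack deploy; a rough equivalent, with an illustrative compose file name and stack name:
docker stack deploy --with-registry-auth -c docker-compose.prod.yml swapr   # file and stack names are illustrative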
Make sure that any database migrations are run immediately after deploying. To do so:
python -m swaprcli production migrate
Make sure everything is working. If something is broken, use docker commands to view logs and troubleshoot what's going on, for example:
docker service logs swapr_www
docker service logs swapr_backend
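If the logs don't explain the problem, docker service ps shows the state and placement of a service's tasks, including any error that kept a task from starting (using the same example service name as above):
docker service ps --no-trunc swapr_backend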
Good luck!