wall_e PROD infrastructure
When there is a push to the master branch of the WALL_E repo, or a PR is merged, that commit triggers the Jenkins wall_e job, which validates the code's formatting before packaging and deploying the WALL_E code into a docker container that sets up and runs the bot. As you can see from our Jenkinsfile, we first Validate Formatting and only then Deploy to PROD Guild.
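The two stages can be sketched as a declarative Jenkinsfile (a minimal sketch only: the stage names come from this page, but the step bodies, script name, and image/container names are assumptions, not the repo's actual Jenkinsfile):

```groovy
// Hypothetical sketch of the two-stage pipeline described above.
pipeline {
    agent any
    stages {
        stage('Validate Formatting') {
            steps {
                // e.g. run the formatting/lint checks against the pushed commit
                sh './run_formatting_checks.sh'  // assumed script name
            }
        }
        stage('Deploy to PROD Guild') {
            steps {
                // build the bot's image and (re)start its container
                sh 'docker build -t production_master_wall_e .'
                sh 'docker run -d --name wall_e production_master_wall_e'
            }
        }
    }
}
```

If formatting validation fails, the pipeline stops before the deploy stage ever runs.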
When I added the .frequency command, it brought some ugly dependencies with it: numpy and matplotlib.
Those two added significantly to the build time, which went from about 5-10 minutes to 40+ minutes.
And obviously, having a developer wait 40+ minutes after each push to see whether the code on their PR was successfully deployed to the staging guild [back when we had a staging guild] is a horrible developer experience.
So my solution was to create an additional docker image between python:3.9.20-alpine and the production_master_wall_e image that is built whenever there is a push to https://github.com/csss/wall_e. Thus, sfucsssorg/wall_e_python was born.
The image is rebuilt on every push to the master branch of https://github.com/CSSS/wall_e_python_base: that job builds the base image and pushes it to the docker ECR repository sfucsssorg/wall_e_python, which wall-e-validate-and-deploy later uses to build the latest push to the master branch of https://github.com/csss/wall_e.
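The image layering described above can be sketched as two Dockerfiles (hypothetical contents; only the image names come from this page, and the package/build-dependency lists are assumptions):

```dockerfile
# Sketch of the base image built from CSSS/wall_e_python_base:
# it starts from the pinned alpine python image and pre-installs the
# slow-to-build dependencies, so downstream builds stay fast.
FROM python:3.9.20-alpine
# alpine has no prebuilt wheels for some packages, so build tools
# are typically needed (assumed list)
RUN apk add --no-cache build-base
RUN pip install numpy matplotlib
```

```dockerfile
# In the wall_e repo, the production image then builds on that base
# instead of on python:3.9.20-alpine directly (paths are assumptions):
FROM sfucsssorg/wall_e_python
COPY . /usr/src/wall_e
```

Because numpy and matplotlib are already baked into sfucsssorg/wall_e_python, a push to wall_e only has to install whatever changed on top of it.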
Circa 2018-2019, we had more active developers, and some of them were trying to run WALL_E on their own computers; there seemed to be quite a bit of frustration caused by wall_e's dependencies.
In an effort to make it easier for others to run and develop wall_e on their computer even when the dependencies don't work well, I decided to add another docker image to the stack; this image contains any other requirements that WALL_E needs that were not already installed in the sfucsssorg/wall_e_python docker image [with the exception of the wall_e_models python module].
This also means that when WALL_E is run locally using the dockerized version, a docker container runs with those pip modules installed in it [instead of on the developer's home computer], and the WALL_E code is then mounted into that docker container.
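That local setup can be sketched as a compose file (a hedged sketch: the service name, image tag, mount paths, and entrypoint here are all assumptions, not the repo's actual configuration):

```yaml
# Hypothetical docker-compose service: the image carries the pip
# dependencies, while the developer's local checkout of the WALL_E
# code is bind-mounted into the container instead of baked into it.
services:
  wall_e:
    image: sfucsssorg/wall_e        # assumed name for the dependency image
    volumes:
      - ./wall_e:/usr/src/wall_e    # mount local code into the container
    working_dir: /usr/src/wall_e
    command: python main.py         # assumed entrypoint
```

With a bind mount like this, code edits on the host are visible inside the container immediately, so the developer never installs the pip modules on their own machine.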
The API that provides the data for https://walle.sfucsss.org/.
Background for PRODUCTION_MASTER_member_update_listener
In order for WALL_E to be able to store reminders and other things in the database, it needed a python-based ORM to interact with that database. I chose the Django ORM simply because it was the one I was most familiar and comfortable with [courtesy of the csss-site].
With Django, any particular piece of business logic is encapsulated in something called a "django app", and hence the wall_e_models django app was born.
Originally, the wall_e_models django app lived in the WALL_E repo. This was because, prior to the creation of either PRODUCTION_MASTER_member_update_listener or PRODUCTION_MASTER_leveling_api, no other CSSS repo needed access to WALL_E's database.
However, the creation of the above-mentioned services required one of 2 approaches:

- either place the code for PRODUCTION_MASTER_member_update_listener or PRODUCTION_MASTER_leveling_api in the WALL_E repo, so that sharing the django wall_e_models app can be done very easily
- or stick to the idea that each repo should house the code for a specific service, and create some way for the different repos to share that wall_e_models django app.
I am a fan of single-purpose repos, so I went with option 2, which created a need to decide how to share that django app across 3 services.
This is done via git submodules.
This makes the process of updating wall_e_models different in the following ways:

- when cloning wall_e or wall_e_member_update_listener or wall_e_api, you need to use `git clone --recurse-submodules` instead of just `git clone`, to ensure that on checkout the submodule's code is also placed where it needs to be.
- Updates
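The clone behaviour in the first bullet can be demonstrated end to end with throwaway local repos (a self-contained sketch: `models` and the file contents are stand-ins for wall_e_models, and `wall_e` is a stand-in for any of the three consumer repos; `protocol.file.allow=always` is only needed because the "remote" here is a local path):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Stand-in for the shared wall_e_models repo
git init -q models && cd models
echo "shared django app" > models.py
git add models.py
git -c user.email=t@t -c user.name=t commit -qm "add models"
cd ..

# Stand-in for a consumer repo (like wall_e) that vendors it as a submodule
git init -q wall_e && cd wall_e
git -c protocol.file.allow=always submodule add "$tmp/models" wall_e_models
git -c user.email=t@t -c user.name=t commit -qm "add wall_e_models submodule"
cd ..

# Cloning WITH --recurse-submodules checks the shared code out
# into wall_e_models; a plain clone would leave that directory empty.
git -c protocol.file.allow=always clone -q --recurse-submodules \
    "$tmp/wall_e" wall_e_clone
cat wall_e_clone/wall_e_models/models.py
```

The `cat` at the end only succeeds because the submodule was recursively cloned; after a plain `git clone`, `wall_e_models` exists but is an empty directory until `git submodule update --init` is run.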