The project uses a Python 3.11 runtime and the Flask-RESTPlus framework for the API.
The application uses SQLAlchemy as its ORM to interact with the database.
For the application directory structure, see the Flask-RESTPlus quickstart guide.
The application directory is structured as follows:
```
|-- app
|   |-- api
|   |   |-- NAMESPACE_MODULE_NAME
|   |   |   |-- Namespace (contains the API namespace)
|   |   |   |-- MODULE_NAME
|   |   |   |   |-- Models (contains all the database model definitions used by SQLAlchemy)
|   |   |   |   |-- Resources (contains all the routes and views that handle incoming requests)
|   |-- utils (contains utilities shared across modules)
|-- tests (unit/integration tests for the application)
|-- app.sh (shell script used by the Python OpenShift s2i image to run the application)
|-- Dockerfile (Dockerfile for running the application locally using Docker)
|-- requirements.txt (libraries required by the project)
```
If running on your host machine, the application assumes you already have a working PostgreSQL database with the required schema and tables, and that the connection details are in the `.env` file. Follow the `.env-example` template to create an `.env` file with valid values before running the application.
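As an illustration only — the authoritative variable names come from `.env-example`, and the names below are hypothetical — a `.env` for a local PostgreSQL setup might look like:

```
# Hypothetical values; copy .env-example and adjust.
# These variable names are illustrative, not the project's actual ones.
FLASK_APP=app
DB_HOST=localhost
DB_PORT=5432
DB_USER=mds
DB_PASS=changeme
DB_NAME=mds
```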
A. OS Level Installation
- Create a virtual environment with Python 3.11 and activate it:

```
virtualenv -p python3.11 .venv
source .venv/bin/activate
```

- Install the requirements:

```
pip install -r requirements.txt
```

NOTE: On Ubuntu/Debian-based systems, you may have to install `libpq-dev` first (see https://stackoverflow.com/questions/11618898/pg-config-executable-not-found):

```
sudo apt install libpq-dev
```

- Run the application:

```
flask run
```
B. Using a Docker container
- Switch the current directory to the project root:

```
cd ../
```

- Issue the Makefile command that runs the backend:

```
make be
```
Flask supports click commands, which let you run one-off commands from the command line without having to run the complete app. To see the list of all click commands, check out the `register_commands` method in the `__init__.py` file.
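As a minimal sketch of how such commands are typically registered via Flask's click integration (the command name and function below are illustrative, not the actual MDS code):

```python
import click
from flask import Flask


def register_commands(app: Flask):
    # Registers a one-off CLI command runnable as `flask untransferred-files`.
    @app.cli.command("untransferred-files")
    def untransferred_files():
        """List documents that have not yet been transferred to the object store."""
        # Hypothetical body; the real command would query the database.
        click.echo("listing untransferred files...")
```

Commands registered this way run inside the application context, so they can use the app's configuration and database session without starting the web server.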
The MDS document generator service makes use of the Common Services team's Document Generator Service.
*Pre-requisite to running the transfer-files, verify-files, and reorganize-files commands: ensure that a Celery worker is running. You can use the command `ps aux` to check.
I want to transfer all documents that exist on the filesystem to the object store
- Get a list of untransferred files:

```
flask untransferred-files
```

- Transfer those files to the object store:

```
flask transfer-files
```

You can view the output of the task in the Celery logfile.

- Check the result of the transfer in the logfile or in the result backend, and act accordingly depending on the status of the job.
- Get a list of untransferred files again:

```
flask untransferred-files
```

If the previous transfer task was successful, this list should be empty. If it contains documents, the job results should show why.

- Double-check that the transfer task was successful and that all locally stored files match their corresponding files on the object store:

```
flask verify-files
```

If you want, log into Cyberduck or another tool to view the files that were transferred to the object store.
See here for a more detailed instructional workflow.
The application uses pytest to run the tests and coverage to report the results. The testing structure is based on the Flask testing documentation. To run the tests, use the following command:

```
pytest --cov=app tests/ --cov-report xml
```
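A minimal sketch of a test in the style of the Flask testing documentation — the `create_app` factory and `/health` route here are hypothetical stand-ins, not the actual MDS application:

```python
import pytest
from flask import Flask


def create_app():
    # Hypothetical app factory standing in for the real application.
    app = Flask(__name__)

    @app.route("/health")
    def health():
        return {"status": "ok"}

    return app


@pytest.fixture
def client():
    # Flask's test_client() sends requests without running a server.
    app = create_app()
    app.config["TESTING"] = True
    return app.test_client()


def test_health(client):
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.get_json()["status"] == "ok"
```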
V1:
MDS originally used a custom Keycloak instance (hosted on OpenShift?) and a local Keycloak container for development. This was hard to work with locally and to maintain (patching, upgrades, etc.).
V2:
The project then moved to a shared Keycloak instance (Silver SSO) hosted by the platform team; the shared instance had multiple realms provisioned, one for each ministry division / program.
Benefits:
- Not having to maintain the keycloak instance (the platform team does it for us)
- With a full realm and admin access, the team could configure and control the parameters as required
There were several issues with this approach for the platform team (it was great for teams that had full realms though!)
Cons:
- Having a large number of realms on Keycloak created a severe performance penalty as the number of projects grew.
- Most teams required the same features (IDIR, BCeID login) but had different implementations.
- Some teams required specialized features that needed instance-level changes (hard coupling).
Since most projects need the standard setup anyway, it was decided to move to an offering that does the basic things very well; specialized requirements will be handled by the teams themselves with custom Keycloak implementations.
Read more about the standard service here.
V3:
MDS requirements fall under the standard service category, so we have to migrate to Gold SSO.
Gold SSO fundamentally differs from Silver by offering Clients instead of an entire Realm; this way the implementation is standardized.
Notable implementation details of Gold SSO:
- All projects on Gold SSO get clients per project / webapp or integration.
- Each client has role management within the context of the client.
Cons:
- Roles cannot be shared across clients
- BC Service Card login is not supported
- Service accounts do not have the ability to get roles (future backlog item)
We need to support multiple SSO providers using the JWT manager (the existing Silver and the new Gold SSO) for a few reasons:
Technical reasons:
- Each integration client has a different `audience` attribute. Earlier, in the `mds` realm, each client and service account had the same `audience` attribute.
- The property where the client scope is seeded is defaulted to
Other reasons:
- We need to provide sufficient time for our integration partners to move to a new client credential and sso provider.
- Gold SSO is not yet feature complete and cannot support roles for service accounts.
In order to handle the above cases, we have a jwt-manager implementation that works with multiple OIDC audiences and configurations.
core-api currently works with the following SSO providers:
- Gold SSO - All Environments
Gold SSO is based on Keycloak IDM. The SSO login is used for authentication and role assignments for all MDS users.
To add a new OIDC provider:
- Create a new integration in Gold SSO or a new client in any OIDC authentication provider.
- Add the audience and well-known config for the OIDC provider as environment variables for `core-api`.
- Create an instance of the JwtManager for every OIDC provider in `extensions.py` of `core-api`.
- Initialize the provider in `__init__.py` of `core-api`.
- Update `getJwtManager` to switch to the correct provider based on the JWT token.
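The switching step above can be sketched as follows. This is a hypothetical illustration, not the actual `core-api` code: the audience values, the manager stand-ins, and the `get_jwt_manager` helper are all assumptions. The idea is to read the token's `aud` claim (without verifying the signature, since verification is the selected manager's job) and pick the manager configured for that audience:

```python
import base64
import json

# One manager per OIDC provider, keyed by its configured audience.
# Strings stand in for real JwtManager instances; the audience values
# are made up for illustration.
MANAGERS = {
    "mds-core-api-gold": "gold_jwt_manager",
    "mds-silver": "silver_jwt_manager",
}


def _unverified_claims(token: str) -> dict:
    """Decode the JWT payload without verifying the signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))


def get_jwt_manager(token: str):
    """Return the manager whose configured audience matches the token's `aud`."""
    aud = _unverified_claims(token).get("aud")
    if isinstance(aud, list):  # `aud` may be a single string or a list
        for a in aud:
            if a in MANAGERS:
                return MANAGERS[a]
        return None
    return MANAGERS.get(aud)
```

Dispatching on the unverified `aud` claim is safe here because the selected manager still performs full signature and audience verification against its own OIDC configuration.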