Welcome to Squarefactory's official face pixelizer repository. Here you will find all the code necessary to train and deploy a custom RetinaFace model using Squarefactory's MLOps platform, Isquare.
This repository is an implementation of the model from RetinaFace: Single-stage Dense Face Localisation in the Wild.
You can follow the tutorial, together with the material in this repository, to learn how to train and deploy your machine learning models using isquare.ai. At the end of the tutorial, you'll know how to:
- Train a model using isquare.ai (no changes to your code required).
- Write a deployment script and an environment file for your trained model.
- Deploy the model, send it a video stream (your webcam), and perform real-time inference with a deep learning model.
The following setup instructions are for use on a local machine.
The versions of torch, torchtext and torchvision pinned in the requirements correspond to the ones that come pre-installed on the NVIDIA image we use for training on Isquare. As these versions are not available on PyPI, a small edit to the requirements is needed before setting up locally: simply comment out the first three requirements and un-comment the commented alternatives, as explained in the requirements file.
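The edit follows the pattern below (the version strings here are placeholders for illustration, not the repository's actual pins):

```text
# pre-installed on the Isquare training image -- comment these out for a local setup:
torch==<image-specific version>
torchtext==<image-specific version>
torchvision==<image-specific version>
# local alternatives from PyPI -- un-comment these:
# torch==<PyPI version>
# torchtext==<PyPI version>
# torchvision==<PyPI version>
```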
Before using an MLOps platform to monitor and scale your trainings and deployments, you can test your models locally. Since your machine may not have sufficient computing power to train the model, we will only test the deployment script locally. To do this, first configure your environment and install the relevant Python packages:
```shell
conda create -n face-pixelizer python=3.8
conda activate face-pixelizer
pip install retinaface_pytorch==0.0.8 --no-deps
pip install -e .
pip install opencv-python==4.5.3.36
```
retinaface-pytorch needs to be installed separately (with `--no-deps`) because its pinned requirements are outdated, but this does not cause any compatibility issues.
The WIDERFACE dataset can be downloaded using the script below. The wget package is needed for this:

```shell
pip install wget==3.2
python train/data_download.py -az
```
You can then launch a training:

```shell
python train/train.py --epochs <N_EPOCHS>
```
Additional arguments:
- `--no-landmarks`: trains a model that performs face detection only, without facial landmark detection. This may reduce performance.
- `--weights_path`: path to pretrained weights if you want to start from a pretrained model. (The backbone weights are pretrained by default; the path to the backbone weights can be changed in the config.)
- `--config`: path to the config file (default: `./retina/config.yml`).
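As background, single-stage detectors like RetinaFace assign anchor boxes to ground-truth faces by intersection-over-union (IoU). The snippet below is a minimal, self-contained sketch of that IoU computation for intuition only; it is not the repository's training code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    # width/height of the intersection rectangle (clamped at zero when disjoint)
    iw = max(0.0, min(xa2, xb2) - max(xa1, xb1))
    ih = max(0.0, min(ya2, yb2) - max(ya1, yb1))
    inter = iw * ih
    union = (xa2 - xa1) * (ya2 - ya1) + (xb2 - xb1) * (yb2 - yb1) - inter
    return inter / union if union > 0 else 0.0
```

During anchor matching, anchors whose IoU with a ground-truth box exceeds a threshold are treated as positives for that face.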
After this, you're all set to test your model locally. If you have a CUDA-compatible GPU it will be used, but the script also runs in reasonable time on a CPU. We provide a picture taken at the Street Parade in Zurich as an example image, but you can use any image with this script; to see an effect, the picture should contain at least one face. Let's try it out:
```shell
python face_pixelizer.py --image_path imgs/example_01.jpg
```
If you used our provided image, you should see the following output:
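Under the hood, pixelizing a detected face typically amounts to a block-mosaic over the detected bounding box. The sketch below illustrates the idea with plain NumPy; it is an assumption-laden illustration, not the repository's actual implementation (the `pixelate_region` name and `block` parameter are hypothetical):

```python
import numpy as np

def pixelate_region(img: np.ndarray, box: tuple, block: int = 8) -> np.ndarray:
    """Return a copy of img with the (x1, y1, x2, y2) region replaced by a block mosaic."""
    x1, y1, x2, y2 = box
    out = img.copy()
    face = out[y1:y2, x1:x2]  # view into the copy, so edits land in `out`
    h, w = face.shape[:2]
    # replace each block x block tile with its mean colour
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = face[y:y + block, x:x + block]
            face[y:y + block, x:x + block] = tile.mean(axis=(0, 1), keepdims=True)
    return out
```

In the real pipeline, the boxes would come from the RetinaFace detector rather than being supplied by hand.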
Some of the code and utilities used in this repository are taken from the retinaface-pytorch package, in places with adaptations.