A tutorial on how to deploy a simple facial expression recognition model on a local machine, on Heroku, and on AWS.
Set up a machine you sit in front of or can SSH into easily, then clone the repo:

```
git clone git@github.com:Chien10/facial-expression-recognition.git
cd facial-expression-recognition
```
- I use `conda` to manage the Python version and a `Makefile` to make set-up straightforward. You can read more about `Makefile`s here.
- Conda is an open-source package and environment management system that runs on Linux, macOS and Windows. To install `conda`, follow the instructions from the official website. Close and re-open your terminal after installing and check that the `conda` command works.
- The `Makefile` lets you run the commands defined within it with `make <command name>`. I encourage you to take a look at the file. Run the following command to create an environment named `fer` (you can guess what it's short for!):

```
make conda-update
```

If you edit `environment.yml`, just run the above command again to get the latest changes.
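For orientation, a `conda-update` target in a Makefile like this one is typically just a thin wrapper around `conda env update`. This is a minimal sketch of what such a target might look like, not the actual contents of the repo's `Makefile`:

```make
# Sketch only; the repo's Makefile may define this target differently.
# Creates the environment on first run, updates it on subsequent runs.
conda-update:
	conda env update --prune -f environment.yml
```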
- Next, activate the environment:

```
conda activate fer
```

Every time you work in this directory, remember to start your session with this command.
- Remember to add `export PYTHONPATH=.:$PYTHONPATH` to your `~/.bashrc` so that Python can import the packages defined in this repo.
- Lastly, download the facial expression recognition model and put it inside `fer/fer_models`:

```
mkdir fer/fer_models
cd fer/fer_models
wget https://assignonec1practicalds.s3.ap-southeast-1.amazonaws.com/ovo_hog_4x4_svm.joblib
cd ../..
```
- It's easy to run the application on your local machine.
- After finishing the Setting up section, set `FLASK_APP=app.py` in your shell (`export FLASK_APP=app.py` on Linux/macOS, `set FLASK_APP=app.py` on Windows).
- Then set `FLASK_ENV=development` the same way.
- Move to `api_server` and launch the app with `flask run`; enjoy it at `http://127.0.0.1:5000/` or `http://localhost:5000/`.
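Once `flask run` is up, you can sanity-check the endpoint from Python. This is a minimal sketch using only the standard library; the URL matches the defaults above:

```python
import urllib.error
import urllib.request

def server_is_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if an HTTP server answers at `url`, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        # The server responded, just not with 200 -- it is still up.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, DNS failure, etc.
        return False

# Prints True if the Flask server is running locally, False otherwise.
print(server_is_up("http://127.0.0.1:5000/"))
```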
- Now we'll move to the next level: deploying your app to a service from your local machine.
- Follow the subsequent steps to deploy your app to Heroku:
  1. Make sure your project is tracked by Git.
  2. Install the Heroku CLI.
  3. Log in to Heroku via this command: `heroku login`.
  4. Create a new empty application on Heroku with `heroku create`.
  5. You can use `git remote -v` after the fourth step to confirm that a remote named `heroku` has been set for your app.
  6. To deploy the app, push the repo to the remote we just made: `git push heroku master`.
  7. Check that the dynos are running with `heroku ps`.
  8. If all the previous steps finished successfully, you can enjoy your app at the URL provided by Heroku. (If you have a problem finding the URL, look for a line on the shell saying something like: `https://vast-harbor-73788.herokuapp.com/ deployed to Heroku`.)
  9. (Optional) To prevent traffic from coming to the app: `heroku maintenance:on`.
  10. (Optional) To completely stop the app: `heroku ps:scale web=0`. Make sure to turn off the other process types defined in `Procfile` as well. If you just turned the app off for error fixing, remember to turn it back on later with `heroku ps:scale web=1`.
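Heroku decides what to run from the `Procfile` mentioned above. For a Flask app it is typically a single `web` process; the sketch below is illustrative only, and the `gunicorn` server and `app:app` entry point are assumptions, not necessarily what this repo's `Procfile` contains:

```
web: gunicorn app:app
```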
- You can skip the set-up with `conda` by using Docker, which is another way to ensure that the Python version is correct, install dependencies, check out the whole repo, pin the CUDA version, etc. A virtual environment is not enough when it comes to GPU builds, and even though this tutorial does not require CUDA, Docker is convenient to use.
- Install Docker following the instructions from Docker's website.
- Staying in the current directory, run:

```
docker build -t fer:1.0 -f api_server/Dockerfile .
```

- Inspect all the images and their attributes with `docker images`.
- You can run the server with:

```
docker run -p 5000:5000 --name fer fer:1.0
```

- You can inspect running containers with `docker ps`, and all containers (including stopped ones) with `docker ps -a`.
- Your app is running on port 5000; make sure the service is active with `sudo lsof -i -P -n | grep LISTEN`.
- When you're done with the app, stop the running container with `docker stop <CONTAINER_ID>`. If you want to remove it: `docker rm <CONTAINER_ID>`.
- You can remove a Docker image with `docker image rm <IMAGE_NAME>`.
- You can now deploy the container to multiple platforms.
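For orientation, a Dockerfile for a Flask service like this one usually follows the pattern below. This is a minimal sketch, not the actual `api_server/Dockerfile` from the repo; the base image, file layout, and `requirements.txt` name are assumptions:

```dockerfile
# Sketch only; the repo's api_server/Dockerfile may differ.
FROM python:3.8-slim
WORKDIR /app
# Install dependencies first so Docker can cache this layer.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV FLASK_APP=app.py
EXPOSE 5000
# Bind to 0.0.0.0 so the app is reachable from outside the container.
CMD ["flask", "run", "--host=0.0.0.0", "--port=5000"]
```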
- It's straightforward to deploy the application to AWS EC2.
- Log into your AWS account and launch an EC2 instance (Ubuntu Server 20.04 LTS (HVM) with `t2.micro` is enough).
- Add a Security Group whose inbound rules have the following configuration:
  - Port range: `5000`
  - Protocol: `TCP`
  - Source: `0.0.0.0/0`
- Connect to your instance via `ssh`, then clone the repo and install the requirements as you did on your local machine.
- Launch the application with:

```
flask run --host=0.0.0.0 --port=5000
```