Merge pull request #122 from cosanlab/0.4.0
0.4.0 Release
ejolly authored Jun 10, 2022
2 parents 2359982 + 2edb894 commit fe878b9
Showing 32 changed files with 4,857 additions and 904 deletions.
7 changes: 3 additions & 4 deletions .gitignore
@@ -17,10 +17,6 @@ docs/_build

#Ignore resources folder except config files and viz model
resources/
-!resources/pyfeat_aus_to_landmarks.h5
-!resources/pyfeat_aus_to_landmarks.joblib
-!resources/ResMaskNet_fer2013_config.json
-!resources/reference_3d_68_points_trans.npy
!resources/model_list.json

# PytestCache
@@ -97,3 +93,6 @@ _Notebooks/
# Ignore data files for AU viz model building
data/*
!data/.gitkeep

+# Ignore development notebooks and code
+dev/
12 changes: 11 additions & 1 deletion .vscode/settings.json
@@ -3,5 +3,15 @@
"python.testing.pytestEnabled": true,
"python.testing.unittestEnabled": false,
"python.testing.autoTestDiscoverOnSaveEnabled": true,
"editor.insertSpaces": true
"editor.insertSpaces": true,
"files.exclude": {
"**/.git": true,
"**/.svn": true,
"**/.hg": true,
"**/CVS": true,
"**/.DS_Store": true,
"**/Thumbs.db": true,
"docs/**/*.csv": true,
"docs/**/*.mp4": true
}
}
74 changes: 5 additions & 69 deletions README.md
@@ -7,7 +7,7 @@

Py-FEAT is a suite for facial expressions (FEX) research written in Python. This package includes tools to detect faces, extract emotional facial expressions (e.g., happiness, sadness, anger), facial muscle movements (e.g., action units), and facial landmarks, from videos and images of faces, as well as methods to preprocess, analyze, and visualize FEX data.

-For detailed examples, tutorials, and API please refer to the [Py-FEAT website](https://cosanlab.github.io/py-feat/).
+For detailed examples, tutorials, contribution guidelines, and API please refer to the [Py-FEAT website](https://cosanlab.github.io/py-feat/).

## Installation
Option 1: Easy installation for quick use
@@ -20,78 +20,14 @@ git clone https://github.com/cosanlab/feat.git
cd feat && python setup.py install -e .
```

## Usage examples
### 1. Detect FEX data from images or videos
Py-FEAT is intended for use in a Jupyter Notebook or Jupyter Lab environment. In a notebook cell, you can run the following to detect faces, facial landmarks, action units, and emotional expressions from images or videos. On first execution, the default model files are downloaded automatically. You can also swap in other detection models from the [list of supported models](https://cosanlab.github.io/feat/content/intro.html#available-models).

```python
from feat.detector import Detector
detector = Detector()
# Detect FEX from video
out = detector.detect_video("input.mp4")
# Detect FEX from image
out = detector.detect_image("input.png")
```

### 2. Visualize FEX data
Visualize FEX detection results.
```python
from feat.detector import Detector
detector = Detector()
out = detector.detect_image("input.png")
out.plot_detections()
```
### 3. Preprocessing & analyzing FEX data
We provide a number of preprocessing and analysis functionalities, including baselining, feature extraction (e.g., timeseries descriptors and wavelet decompositions), predictions, regressions, and intersubject correlations; a minimal sketch follows below. See fuller examples in our [tutorial](https://cosanlab.github.io/py-feat/content/analysis.html#).
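
A minimal sketch of that workflow, assuming the `read_feat` loader and the `Fex` methods (`baseline`, `extract_mean`) described in the tutorial; exact import paths and method names may differ across py-feat versions:

```python
from feat.utils import read_feat  # import location may vary by py-feat version

# Load detection results previously saved to CSV by a Detector
fex = read_feat("output.csv")

# Subtract each feature's median to baseline the timeseries
baselined = fex.baseline(baseline="median")

# Summarize each AU/emotion feature by its mean over time
summary = baselined.extract_mean()
```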

## Supported Models
Please respect the usage licenses for each model.

Face detection models
- [FaceBoxes](https://github.com/zisianw/FaceBoxes.PyTorch)
- [MTCNN](https://github.com/ipazc/mtcnn)
- [RetinaFace](https://github.com/deepinsight/insightface/)
- [img2pose](https://github.com/vitoralbiero/img2pose)

Facial landmark detection models
- [MobileNet](https://github.com/cunjian/pytorch_face_landmark)
- [MobileFaceNet](https://github.com/foamliu/MobileFaceNet)
- [PFLD: Practical Facial Landmark Detector](https://github.com/polarisZhao/PFLD-pytorch)

Action Unit detection models
- FEAT-Random Forest
- FEAT-SVM
- FEAT-Logistic
- [DRML: Deep Region and Multi-Label Learning](https://github.com/AlexHex7/DRML_pytorch)
- [JAANet: Joint AU Detection and Face Alignment via Adaptive Attention](https://github.com/ZhiwenShao/PyTorch-JAANet)

Emotion detection models
- FEAT-Random Forest
- FEAT-Logistic
- [FerNet](https://www.kaggle.com/gauravsharma99/facial-emotion-recognition?select=fer2013)
- [ResMaskNet: Residual Masking Network](https://github.com/phamquiluan/ResidualMaskingNetwork)

Head pose estimation models
- [img2pose](https://github.com/vitoralbiero/img2pose)
- Perspective-n-Point (PnP) algorithm to solve 3D head pose from 2D facial landmarks (via `cv2`; a minimal sketch follows below)
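
As a minimal, self-contained sketch of the Perspective-n-Point idea using OpenCV directly (this is not py-feat's internal implementation, and the landmark and camera values below are placeholder assumptions):

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
object_points = rng.standard_normal((68, 3))     # stand-in 3D reference landmarks
image_points = rng.uniform(0.0, 640.0, (68, 2))  # stand-in 2D detected landmarks

h, w = 480, 640  # assumed image size
camera_matrix = np.array(
    [
        [w, 0.0, w / 2],  # naive pinhole intrinsics: focal length
        [0.0, w, h / 2],  # approximated by the image width
        [0.0, 0.0, 1.0],
    ]
)

# Solve for head rotation/translation; lens distortion assumed negligible (None)
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, None)
rotation_matrix, _ = cv2.Rodrigues(rvec)  # axis-angle -> 3x3 rotation matrix
```

With real detections, `object_points` would be a canonical 3D face template and `image_points` the 68 landmarks returned by the landmark model.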

## Contributing
1. Fork the repository on GitHub.
2. Run the tests with `pytest tests/` to confirm that all tests pass on your system. If some tests fail, try to find out why; common causes are missing model files or missing dependencies.
3. Create your feature AND add tests to make sure it works.
4. Run the tests again with `pytest tests/` to make sure everything still passes, including your new feature. If you broke something, edit your feature so that it doesn't break existing code.
5. Create a pull request to the main repository's `master` branch.

### Adding new notebook examples
*Note*: You should execute all notebook example cells *locally* before committing changes to GitHub, as our CI workflow **does not** execute notebooks; it just renders them into pages. For the same reason, there's no need to commit `notebooks/_build` to git, as GitHub Actions will auto-generate that folder and deploy it to the `gh-pages` branch of the repo.
**Note:** If you forked or cloned this repo prior to 04/26/2022, you'll want to create a new fork or clone, as we've used `git-filter-repo` to clean up large files in the history. If you prefer to keep working on that old version, you can find an [archival repo here](https://github.com/cosanlab/py-feat-archive).

1. Make sure to install the development requirements: `pip install -r requirements-dev.txt`
2. Add notebooks or markdown to the `notebooks/content` directory
3. Add images to the `notebooks/content/images` directory
4. Update the TOC as needed in `notebooks/_toc.yml`
5. Build the HTML: `jupyter-book build notebooks`
6. View the rendered HTML by opening the following in your browser: `notebooks/_build/html/index.html`
## Testing

The tests have been relocated to `feat/tests/`.
Please ensure all tests pass before creating a pull request or making larger changes to the code base.

## Continuous Integration

8 changes: 4 additions & 4 deletions docs/_toc.yml
@@ -15,10 +15,10 @@ parts:
- file: basic_tutorials/05_fex_analysis
- caption: Advanced Tutorials
chapters:
-  - file: extra_tutorials/loadingOtherFiles
-  - file: extra_tutorials/train_hogs
-  - file: extra_tutorials/trainAUvisModel
-  - file: extra_tutorials/extract_labels_and_landmarks
+  - file: extra_tutorials/06_trainAUvisModel
+  - file: extra_tutorials/07_extract_labels_and_landmarks
+  - file: extra_tutorials/08_loadingOtherFiles
+  - file: extra_tutorials/09_train_hogs
- caption: API
chapters:
- file: pages/api
37 changes: 20 additions & 17 deletions docs/basic_tutorials/01_basics.ipynb
@@ -15,7 +15,7 @@
},
{
"cell_type": "code",
"execution_count": null,
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 5,
"metadata": {},
"outputs": [
{
@@ -42,17 +42,22 @@
"text": [
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/mobilenet0.25_Final.pth\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/mobilenet_224_model_best_gdconv_external.pth.tar\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/DRMLNetParams.pth\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/hog_pca_all_emotio.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/hog_pca_all_emotio.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/hog_scalar_aus.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/svm_568.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/hog_pca_all_emotio.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/hog_scalar_aus.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/ResMaskNet_Z_resmasking_dropout1_rot30.pth\n"
]
},
{
"data": {
"text/plain": [
"feat.detector.Detector(face_model=retinaface, landmark_model=mobilenet, au_model=drml, emotion_model=resmasknet, facepose_model=pnp)"
"feat.detector.Detector(face_model=retinaface, landmark_model=mobilenet, au_model=svm, emotion_model=resmasknet, facepose_model=img2pose)"
]
},
"execution_count": 1,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -74,12 +79,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"After initializing a detector you can easily swap one or more underlying models using the `.change_model` method. You can also disable any models by setting them to `None`:"
"After initializing a detector you can easily swap one or more underlying models using the `.change_model` method."
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 6,
"metadata": {},
"outputs": [
{
@@ -89,32 +94,30 @@
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/onet.pt\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/pnet.pt\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/rnet.pt\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/mobilenet_224_model_best_gdconv_external.pth.tar\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/hog_pca_all_emotio.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/hog_pca_all_emotio.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/hog_scalar_aus.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/svm_568.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/hog_pca_all_emotio.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/hog_scalar_aus.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/emoSVM38.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/emo_hog_pca.joblib\n",
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/emo_hog_scalar.joblib\n"
"Using downloaded and verified file: /Users/Esh/Documents/pypackages/py-feat/feat/resources/ResMaskNet_Z_resmasking_dropout1_rot30.pth\n",
"Changing face_model from retinaface -> mtcnn\n"
]
},
{
"data": {
"text/plain": [
"feat.detector.Detector(face_model=mtcnn, landmark_model=None, au_model=svm, emotion_model=svm, facepose_model=pnp)"
"feat.detector.Detector(face_model=mtcnn, landmark_model=mobilenet, au_model=svm, emotion_model=resmasknet, facepose_model=img2pose)"
]
},
"execution_count": 2,
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"detector.change_model(\n",
" face_model=\"MTCNN\", emotion_model=\"svm\", au_model=\"svm\", landmark_model=None\n",
")\n",
"detector.change_model(face_model=\"MTCNN\")\n",
"detector\n"
]
},
@@ -137,7 +140,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"outputs": [
{
Expand All @@ -146,7 +149,7 @@
"True"
]
},
"execution_count": 5,
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
368 changes: 205 additions & 163 deletions docs/basic_tutorials/02_detector_imgs.ipynb

Large diffs are not rendered by default.

254 changes: 134 additions & 120 deletions docs/basic_tutorials/03_detector_vids.ipynb

Large diffs are not rendered by default.
