# v0.5.0

## Notes

This is a large overhaul and refactor of some of the core testing and API functionality to make future development, maintenance, and testing easier. Notable highlights include:

- tighter integration with `torch` data loaders
- dropping `opencv` as a dependency
- experimental support for macOS M1 GPUs (see the sketch after this list)
- passing keyword arguments to underlying `torch` models for more control
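The experimental Apple Silicon support is selected through the detector's device setting. A minimal sketch, assuming `Detector` accepts a `device` keyword argument and that PyTorch's `mps` backend is available on your machine:

```python
# Sketch only: the `device` keyword and the "mps" value are assumptions based on
# this release's experimental macOS M1 support; check the current docs to confirm.
from feat import Detector

detector = Detector(device="mps")  # route model inference to the Apple GPU
```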
## `Detector` Changes

### New

- you can now pass keyword arguments directly to the underlying pytorch/sklearn models on `Detector` initialization using dictionaries. For example, you can do `detector = Detector(facepose_model_kwargs={'keep_top_k': 500})` to initialize `img2pose` to only use 500 instead of 750 features (see the sketch after this list)
- all `.detect_*` methods can also pass keyword arguments to the underlying pytorch/sklearn models, although these will be passed to their underlying `__call__` methods
- the SVM AU model has been retrained with the new HOG feature PCA pipeline
- new XGBoost AU model with the new HOG feature PCA pipeline
- `.detect_image` and `.detect_video` now display a `tqdm` progress bar
- new `skip_failed_detections` keyword argument to still generate a `Fex` object when processing multiple images and one or more detections fail
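A minimal sketch combining two of the additions above. The image file names are placeholders, and passing `skip_failed_detections` through `.detect_image` is an assumption about where that keyword is consumed:

```python
from feat import Detector

# Forward model-specific kwargs at initialization (here: img2pose's keep_top_k)
detector = Detector(facepose_model_kwargs={"keep_top_k": 500})

# Keep building a Fex object even if one of the images fails to produce a detection
fex = detector.detect_image(
    ["face_1.jpg", "face_2.jpg"],  # placeholder paths
    skip_failed_detections=True,
)
```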
### Breaking

- the default model for landmark detection was changed from `mobilenet` to `mobilefacenet`
- the default model for AU detection was changed to our new `xgb` model, which gives continuous valued predictions between 0-1
- removed support for the `fer` emotion model
- removed support for the `jaanet` AU model
- removed support for the `pnp` facepose detector
- dropped support for reading and manipulating Affectiva and FACET data
- `.detect_image` will no longer resize images on load, as the new default is `output_size=None`. If you want to process images with `batch_size > 1` and the images differ in size, you will be required to manually set `output_size`; otherwise py-feat will raise a helpful error message (see the sketch after this list)
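A minimal sketch of the new requirement, assuming `output_size` takes a single integer pixel size (file names are placeholders):

```python
from feat import Detector

detector = Detector()

# Images with different native sizes can only be batched if they are resized
# to a common output_size; leaving output_size=None here would raise an error.
fex = detector.detect_image(
    ["small_face.jpg", "large_face.jpg"],
    batch_size=2,
    output_size=512,
)
```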
## `Fex` Changes

### New

- new `.update_sessions()` method that returns a copy of a `Fex` frame with the `.sessions` attribute updated, making it easy to chain operations
- `.predict()` and `.regress()` now support passing attributes to `X` and/or `y` using string names that match the attribute names:
  - `'emotions'` uses all emotion columns (i.e. `fex.emotions`)
  - `'aus'` uses all AU columns (i.e. `fex.aus`)
  - `'poses'` uses all pose columns (i.e. `fex.poses`)
  - `'landmarks'` uses all landmark columns (i.e. `fex.landmarks`)
  - `'faceboxes'` uses all facebox columns (i.e. `fex.faceboxes`)
  - you can also combine feature groups using a comma-separated string, e.g. `fex.regress(X='emotions,poses', y='landmarks')`
- `.extract_*` methods now include `std` and `sem`. These are also included in `.extract_summary()` (see the sketch after this list)
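A minimal sketch of the new `Fex` conveniences, assuming `fex` is a `Fex` frame returned by a detector and that `.update_sessions()` accepts an array-like of per-row session labels:

```python
# `fex` is assumed to be an existing Fex frame from .detect_image or .detect_video

# Chainable copy with sessions set from each row's source file
by_file = fex.update_sessions(fex.inputs)

# Summary statistics now also include std and sem columns
summary = fex.extract_summary()

# Regress landmark columns on emotion and pose columns via the string shortcuts
results = fex.regress(X="emotions,poses", y="landmarks")
```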
### Breaking

- All `Fex` attributes have been pluralized as indicated below. For the time being, old attribute access will continue to work but will show a warning (see the sketch after this list). We plan to formally drop support in a few versions.
  - `.landmark` -> `.landmarks`
  - `.facepose` -> `.poses`
  - `.input` -> `.inputs`
  - `.landmark_x` -> `.landmarks_x`
  - `.landmark_y` -> `.landmarks_y`
  - `.facebox` -> `.faceboxes`
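For example:

```python
landmarks = fex.landmarks   # new, pluralized accessor
landmarks = fex.landmark    # deprecated: still works for now, but shows a warning
poses = fex.poses           # replaces fex.facepose
```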
## Development changes

- `test_pretrained_models.py` is now better organized using `pytest` classes (see the sketch after this list)
- added tests for `img2pose` models
- added more robust testing for the interaction between `batch_size` and `output_size`
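As an illustration of the class-based organization (class, fixture, and test names below are hypothetical, not the actual suite):

```python
import pytest
from feat import Detector


class TestImg2Pose:
    """Group img2pose-related checks so they can share one Detector instance."""

    @pytest.fixture(scope="class")
    def detector(self):
        # One detector built once per class rather than once per test
        return Detector(facepose_model="img2pose")

    def test_detector_initializes(self, detector):
        assert detector is not None
```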