Releases: cosanlab/py-feat
0.6.2
0.6.1
Notes
This version drops support for Python 3.7 and fixes several dependency-related issues.
0.6.0
Notes
This is a large model-update release. Several users noted issues with our AU models due to problematic HOG feature extraction. We have now retrained all of our models that were affected by this issue. This version will automatically download the new model weights and use them without any additional user input.
Detector
Changes
We have decided to make video processing much more memory-efficient at the cost of increased processing time (e.g. #139). Previously, py-feat would load all frames into RAM and then process them, which was problematic for large videos and could cause kernel panics or system freezes. Now, py-feat lazy-loads video frames one at a time, which scales to videos of any length or size, assuming your system has enough RAM to hold a few frames in memory (determined by `batch_size`). However, this also makes processing videos a bit slower and GPU benefits less dramatic. We have made this trade-off in favor of an easier end-user experience, but we are watching torch's VideoReader implementation closely and will likely use it in future versions.
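For example, a minimal sketch of processing a long video in small batches (the file name and `batch_size` value are illustrative):

```python
from feat import Detector

detector = Detector()

# Frames are lazy-loaded batch_size frames at a time, so memory use stays
# roughly flat regardless of video length (at the cost of some speed).
fex = detector.detect_video("long_recording.mp4", batch_size=5)
```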
Fixes
0.5.1
0.5.0
Notes
This is a large overhaul and refactor of some of the core testing and API functionality to make future development, maintenance, and testing easier. Notable highlights include:
- tighter integration with `torch` data loaders
- dropping `opencv` as a dependency
- experimental support for macOS M1 GPUs
- passing keyword arguments to underlying `torch` models for more control
Detector
Changes
New
- you can now pass keyword arguments directly to the underlying pytorch/sklearn models on `Detector` initialization using dictionaries. For example, `detector = Detector(facepose_model_kwargs={'keep_top_k': 500})` initializes `img2pose` to use 500 instead of 750 features
- all `.detect_*` methods can also pass keyword arguments to the underlying pytorch/sklearn models, although these are passed to their underlying `__call__` methods
- the SVM AU model has been retrained with a new HOG feature PCA pipeline
- new XGBoost AU model with a new HOG feature PCA pipeline
- `.detect_image` and `.detect_video` now display a `tqdm` progress bar
- new `skip_failed_detections` keyword argument to still generate a `Fex` object when processing multiple images and one or more detections fail (see the sketch after this list)
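A minimal sketch combining these options; the file names are illustrative, and passing `skip_failed_detections` to `.detect_image` reflects our reading of where the keyword is accepted:

```python
from feat import Detector

# facepose_model_kwargs is forwarded to the underlying img2pose model at init
detector = Detector(facepose_model_kwargs={"keep_top_k": 500})

# A tqdm progress bar is shown while processing; with
# skip_failed_detections=True, a Fex object is still returned even if one
# of the images fails to process.
fex = detector.detect_image(
    ["face1.jpg", "face2.jpg", "corrupted.jpg"],
    skip_failed_detections=True,
)
```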
Breaking
- the default model for landmark detection was changed from `mobilenet` to `mobilefacenet`
- the default model for AU detection was changed to our new `xgb` model, which gives continuous-valued predictions between 0-1
- removed support for the `fer` emotion model
- removed support for the `jaanet` AU model
- removed support for the `pnp` facepose detector
- dropped support for reading and manipulating Affectiva and FACET data
- `.detect_image` will no longer resize images on load, as the new default is `output_size=None`. If you want to process images with `batch_size > 1` and the images differ in size, you will need to set `output_size` manually; otherwise py-feat will raise a helpful error message (see the sketch after this list)
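A minimal sketch of batching differently sized images (the file names and output size are illustrative):

```python
from feat import Detector

detector = Detector()

# With batch_size > 1 and images of different sizes, output_size must be set
# explicitly so frames can be stacked into a single batch; leaving it at the
# default None raises an error instead of silently resizing.
fex = detector.detect_image(
    ["small_face.jpg", "large_face.jpg"],
    batch_size=2,
    output_size=512,
)
```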
Fex
Changes
New
- new `.update_sessions()` method that returns a copy of a `Fex` frame with the `.sessions` attribute updated, making it easy to chain operations
- `.predict()` and `.regress()` now support passing attributes to `X` and/or `y` using string names that match the attribute names:
  - `'emotions'` uses all emotion columns (i.e. `fex.emotions`)
  - `'aus'` uses all AU columns (i.e. `fex.aus`)
  - `'poses'` uses all pose columns (i.e. `fex.poses`)
  - `'landmarks'` uses all landmark columns (i.e. `fex.landmarks`)
  - `'faceboxes'` uses all facebox columns (i.e. `fex.faceboxes`)
  - you can also combine feature groups using a comma-separated string, e.g. `fex.regress(X='emotions,poses', y='landmarks')` (see the sketch after this list)
- `.extract_*` methods now include `std` and `sem`. These are also included in `.extract_summary()`
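A minimal sketch of these additions, assuming `fex` is a `Fex` frame from a prior detection; the plain list of labels passed to `.update_sessions()` and the session names are illustrative assumptions:

```python
# Feature groups are selected by string name; comma-separated names combine
# groups into a single design matrix.
results = fex.regress(X="emotions,poses", y="landmarks")

# update_sessions returns a copy, so it chains with other Fex methods;
# extract_summary now includes std and sem alongside the existing measures.
half = len(fex) // 2
labels = ["baseline"] * half + ["treatment"] * (len(fex) - half)
summary = fex.update_sessions(labels).extract_summary()
```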
Breaking
- All `Fex` attributes have been pluralized as indicated below. For the time being, old attribute access will continue to work but will show a warning. We plan to formally drop support in a few versions.
  - `.landmark` -> `.landmarks`
  - `.facepose` -> `.poses`
  - `.input` -> `.inputs`
  - `.landmark_x` -> `.landmarks_x`
  - `.landmark_y` -> `.landmarks_y`
  - `.facebox` -> `.faceboxes`
Development changes
- `test_pretrained_models.py` is now more organized using `pytest` classes
- added tests for `img2pose` models
- added more robust testing for the interaction between `batch_size` and `output_size`
General Fixes
0.4.0
This is a version-breaking release! See the full changelog here: https://py-feat.org/pages/changelog.html
0.3.7
Fixed an import error due to a missing `__init__.py`.
Deployed 0.3.7 to PyPI.
0.3.6
Trigger Zenodo release
0.2
Testing PyPI upload
Emotion prediction model files
0.1
Update names