
v0.5.0

@ejolly released this 14 Dec 22:22

Notes

This is a large overhaul and refactor of some of the core testing and API functionality to make future development, maintenance, and testing easier. Notable highlights include:

  • tighter integration with torch data loaders
  • dropping opencv as a dependency
  • experimental support for macOS m1 GPUs
  • passing keyword arguments to underlying torch models for more control

Detector Changes

New

  • you can now pass keyword arguments directly to the underlying pytorch/sklearn models when initializing a Detector by supplying dictionaries. For example, detector = Detector(facepose_model_kwargs={'keep_top_k': 500}) initializes img2pose to use only 500 instead of 750 features (see the sketch after this list)
  • all .detect_* methods can also pass keyword arguments to the underlying pytorch/sklearn models, though these are forwarded to the models' __call__ methods
  • SVM AU model has been retrained with new HOG feature PCA pipeline
  • new XGBoost AU model with new HOG feature PCA pipeline
  • .detect_image and .detect_video now display a tqdm progressbar
  • new skip_failed_detections keyword argument to still generate a Fex object when processing multiple images and one or more detections fail
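A minimal sketch of the kwargs pass-through described above; the image path is a placeholder and the facepose_model_kwargs example comes directly from these notes:

```python
from feat import Detector

# Initialize img2pose with only 500 (instead of the default 750) features by
# forwarding keyword arguments to the underlying facepose model.
detector = Detector(facepose_model_kwargs={"keep_top_k": 500})

# "my_image.jpg" is a placeholder path; keyword arguments passed to .detect_*
# methods are likewise forwarded to the underlying models' __call__ methods.
fex = detector.detect_image("my_image.jpg")
```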

Breaking

  • the default model for landmark detection has changed from mobilenet to mobilefacenet
  • the default model for AU detection has changed to our new xgb model, which gives continuous-valued predictions between 0 and 1
  • remove support for fer emotion model
  • remove support for jaanet AU model
  • remove support for pnp facepose detector
  • drop support for reading and manipulating Affectiva and FACET data
  • .detect_image will no longer resize images on load, as the new default is output_size=None. If you want to process images with batch_size > 1 and the images differ in size, you must set output_size manually, otherwise py-feat will raise a helpful error message (see the sketch after this list)
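A minimal sketch of the new resizing behavior, assuming hypothetical file names and an illustrative output_size value:

```python
from feat import Detector

detector = Detector()

# output_size=None (the new default) means images are not resized on load.
# With batch_size > 1 and inputs of different sizes, output_size must be set
# explicitly so every image is resized to a common shape before batching;
# the file names and resize value below are placeholders.
fex = detector.detect_image(
    ["face_640x480.jpg", "face_1024x768.jpg"],
    batch_size=2,
    output_size=512,
)
```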

Fex Changes

New

  • new .update_sessions() method that returns a copy of a Fex frame with the .sessions attribute updated, making it easy to chain operations (see the sketch after this list)
  • .predict() and .regress() now support passing attributes to X and/or y using string names that match the attribute names:
    • 'emotions': use all emotion columns (i.e. fex.emotions)
    • 'aus': use all AU columns (i.e. fex.aus)
    • 'poses': use all pose columns (i.e. fex.poses)
    • 'landmarks': use all landmark columns (i.e. fex.landmarks)
    • 'faceboxes': use all facebox columns (i.e. fex.faceboxes)
    • You can also combine feature groups using a comma-separated string, e.g. fex.regress(X='emotions,poses', y='landmarks')
  • .extract_* methods now include the standard deviation (std) and standard error of the mean (sem); these are also included in .extract_summary()
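A minimal sketch of chaining these new Fex features, assuming fex is an existing Fex frame returned by a detector; the session mapping and the single results variable are illustrative assumptions:

```python
# Assume `fex` is a Fex frame returned by e.g. Detector.detect_video(); the
# mapping from input file names to condition labels below is hypothetical.
fex_by_condition = fex.update_sessions(
    {"video1.mp4": "control", "video2.mp4": "patient"}
)

# regress()/predict() accept string shortcuts for column groups, and groups
# can be combined in a comma-separated string.
results = fex_by_condition.regress(X="emotions,poses", y="landmarks")

# Summary statistics now include std and sem alongside the existing measures.
summary = fex_by_condition.extract_summary()
```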

Breaking

  • All Fex attributes have been pluralized as indicated below (a brief example follows this list). For the time being, old attribute access will continue to work but will show a warning; we plan to formally drop support in a few versions:
    • .landmark -> .landmarks
    • .facepose -> .poses
    • .input -> .inputs
    • .landmark_x -> .landmarks_x
    • .landmark_y -> .landmarks_y
    • .facebox -> .faceboxes
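For example, assuming fex is an existing Fex frame:

```python
# New, pluralized attribute access:
landmarks = fex.landmarks

# Old singular access still works for now, but shows a warning and will be
# removed in a future version:
landmarks_old = fex.landmark
```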

Development changes

  • test_pretrained_models.py is now better organized using pytest classes
  • added tests for img2pose models
  • added more robust testing for the interaction between batch_size and output_size

General Fixes

  • data loading with multiple images of potentially different sizes should be faster and more reliable
  • fixed a bug in resmasknet that gave poor predictions when multiple faces were present and particularly small
  • #150
  • #149
  • #148
  • #147
  • #145
  • #137
  • #134
  • #132
  • #131
  • #130
  • #129
  • #127
  • #121
  • #104