# MAE-SFER

MAE models (ViT-base, ViT-small, ViT-tiny) pre-trained on 270K AffectNet images for static facial expression recognition (SFER).

## ViTs pre-trained on AffectNet
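
Assuming the released weights follow the standard MAE checkpoint layout (encoder weights under a `"model"` key, plus decoder parameters that a fine-tuning backbone does not need), here is a minimal sketch of loading one as a `timm` ViT backbone. The checkpoint file name and the 7-class head are placeholders; substitute the actual released file and your label set:

```python
import torch
import timm

# Hypothetical file name -- replace with the actual checkpoint released in this repo.
CKPT_PATH = "mae_pretrain_vit_base_affectnet.pth"

# Build a plain ViT-base backbone with a fresh classification head
# (7 classes here, assuming the basic-emotion AffectNet setting).
model = timm.create_model("vit_base_patch16_224", num_classes=7)

# MAE checkpoints usually store encoder weights under the "model" key and also
# contain decoder/mask-token parameters absent from the fine-tuning model,
# so load non-strictly and inspect what was skipped.
checkpoint = torch.load(CKPT_PATH, map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)
msg = model.load_state_dict(state_dict, strict=False)
print("missing keys:", msg.missing_keys)        # e.g. the new head's weights
print("unexpected keys:", msg.unexpected_keys)  # e.g. MAE decoder parameters
```

The `missing_keys` list should contain only the newly initialized head, and `unexpected_keys` only MAE-specific parameters; anything else suggests a backbone/checkpoint mismatch.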

## Citation

If you find this repo helpful, please consider citing:

```bibtex
@article{li2024emotion,
  title={Emotion separation and recognition from a facial expression by generating the poker face with vision transformers},
  author={Li, Jia and Nie, Jiantao and Guo, Dan and Hong, Richang and Wang, Meng},
  journal={IEEE Transactions on Computational Social Systems},
  year={2024},
  publisher={IEEE}
}

@article{chen2024static,
  title={From static to dynamic: Adapting landmark-aware image models for facial expression recognition in videos},
  author={Chen, Yin and Li, Jia and Shan, Shiguang and Wang, Meng and Hong, Richang},
  journal={IEEE Transactions on Affective Computing},
  year={2024},
  publisher={IEEE}
}
```
