This is the official implementation of the paper 'Neural Face Rigging for Animating and Retargeting Facial Meshes in the Wild'.
Why try NFR?
NFR can transfer facial animations to any customized face mesh, even one with a different topology, without any manual rigging or data capture. For facial meshes obtained from any source, you can quickly retarget existing animations onto the mesh and see the results in real time.
This release is tested under Ubuntu 20.04 with an RTX 4090 GPU. Other CUDA-capable GPU models should work as well.
The testing module uses vedo for interactive visualization, so a display is required.
Windows is currently not supported unless you manually install the pytorch3d package following their official guide.
- Create an environment called NFR
conda create -n NFR python=3.9
conda activate NFR
- (Recommended) Install mamba to accelerate the installation process
conda install mamba -c conda-forge
- Install necessary packages via mamba
mamba install pytorch=1.12.1 cudatoolkit=11.3 pytorch-sparse=0.6.15 pytorch3d=0.7.1 cupy=11.3 numpy=1.23.5 -c pytorch -c conda-forge -c pyg -c pytorch3d
- Install necessary packages via pip
pip install potpourri3d trimesh open3d transforms3d libigl robust_laplacian vedo
- Download the preprocessed data and the pretrained model here: Google Drive. Place them in the root directory of this repo.
- Run!
python test_user.py -c config/test.yml
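If the script fails to start, it is usually an environment issue. Below is a minimal sanity check, assuming the conda/pip packages above installed cleanly; this snippet is not part of the repo:

```python
# Quick environment check (not part of the repo): verifies that PyTorch,
# CUDA and PyTorch3D from the installation steps above are importable.
import torch
import pytorch3d

print("torch:", torch.__version__)                 # expected ~1.12.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
print("pytorch3d:", pytorch3d.__version__)
```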
- Interactive visualization
Here's the interface you should see when the script runs successfully. You can interact with the sliders and buttons to change the expression of the source mesh and manually adjust the expression via FACS-like codes.
- Zone 0: The source mesh
- Zone 1: The target mesh (with source mesh's expression transferred)
- Zone 2: The source mesh under ICT Blendshape space
- Zone 3: Interactive buttons and sliders
- Buttons:
- code_idx: enter the FACS code index (0-52) in the terminal
- input/next/random: change the source expression index
- iden: change the source identity
- Sliders:
- AU scale: Change the intensity of the FACS code specified by code_idx
- scale: Uniformly scale the target mesh
- x/y/z shift: Shift the target mesh
Currently we provide two pre-processed facial animation sequences, one from ICT and the other from Multiface. You can switch between them by changing the `dataset` and `data_head` variables in the `config/test.yml` file.
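For example, you can inspect (or script) the switch with a short snippet. The `dataset` and `data_head` keys come from this README; the rest of the file layout and the example values below are assumptions, so check the shipped `config/test.yml` for the real strings:

```python
# Sketch only: read config/test.yml and show which animation source is active.
# Only the 'dataset' and 'data_head' keys are documented here; other keys and
# the example values below are assumptions.
import yaml  # PyYAML

CFG_PATH = "config/test.yml"

with open(CFG_PATH) as f:
    cfg = yaml.safe_load(f)

print("dataset:  ", cfg.get("dataset"))
print("data_head:", cfg.get("data_head"))

# To switch sequences, edit the two values (hypothetical strings shown) and
# write the file back, or simply edit config/test.yml by hand.
# cfg["dataset"], cfg["data_head"] = "multiface", "<data prefix>"
# with open(CFG_PATH, "w") as f:
#     yaml.safe_dump(cfg, f)
```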
You can test with your own mesh as the target. This has two requirements (a quick sanity check is sketched after this list):
- There should be no mouth/eye/nose sockets or eyeballs inside the face; otherwise, bad deformations may occur in those areas.
- The mouth and eyes need to be cut open for correct global solving. Please refer to the preprocessed meshes in the `test-mesh` folder as examples.
- Remember to roughly align your mesh to the examples in Blender via the `align.blend` file!
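A minimal pre-check using trimesh (installed above) can help catch leftover eyeballs or socket geometry; the file name is hypothetical and the check is only a heuristic:

```python
# Heuristic pre-check for a custom target mesh (file name is hypothetical).
# Extra connected components usually mean eyeballs or mouth/eye/nose sockets
# are still present and should be deleted before retargeting.
import trimesh

mesh = trimesh.load("my_face.obj", process=False)  # assumes a single-mesh OBJ

parts = mesh.split(only_watertight=False)
print(f"vertices: {len(mesh.vertices)}, faces: {len(mesh.faces)}")
print(f"connected components: {len(parts)}")
if len(parts) > 1:
    print("Warning: multiple components -- remove eyeballs/sockets so only "
          "the outer face surface remains.")

# Rough placement info to compare against the meshes in the test-mesh folder
# after aligning in Blender via align.blend.
print("bbox extents:", mesh.bounding_box.extents)
print("centroid:", mesh.centroid)
```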
The training module will be released later.
@inproceedings{qin2023NFR,
author = {Qin, Dafei and Saito, Jun and Aigerman, Noam and Groueix, Thibault and Komura, Taku},
title = {Neural Face Rigging for Animating and Retargeting Facial Meshes in the Wild},
year = {2023},
booktitle = {SIGGRAPH 2023 Conference Papers},
}
This project uses code from ICT, Multiface, and Diffusion-Net; data from ICT and Multiface; and testing mesh templates from ICT, Multiface, COMA, FLAME, and MeshTalk. Thank you!