MeInGame: Create a Game Character Face from a Single Portrait

This is the official PyTorch implementation of the AAAI 2021 paper: J. Lin, Y. Yuan, and Z. Zou, MeInGame: Create a Game Character Face from a Single Portrait, the Association for the Advancement of Artificial Intelligence (AAAI), 2021.

3D display of the created game characters (click to view):

Watch the video

[Updates]

  • 2021.05.11: The RGB 3D Face Dataset (Google Drive, a password-protected zip file) is now available! Please download and read the LICENSE AGREEMENT carefully, and email the signed license agreement to [email protected] to get the PASSWORD for the zip file.

Getting Started

Requirements

Install Dependencies

pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
pip install opencv-python fvcore h5py scipy scikit-image dlib face-alignment scikit-learn tensorflow-gpu==1.14.0 gast==0.2.2
pip install "git+https://github.com/Agent-INF/pytorch3d.git@3dface"

Testing with pre-trained network

  1. Clone the repository
git clone https://github.com/FuxiCV/MeInGame
cd MeInGame
  2. Prepare the Basel Face Model following the instructions on Deep3DFaceReconstruction, and rename the files as follows:
Deep3DFaceReconstruction/BFM/BFM_model_front.mat -> ./data/models/bfm2009_face.mat
Deep3DFaceReconstruction/BFM/similarity_Lm3D_all.mat -> ./data/models/similarity_Lm3D_all.mat
Deep3DFaceReconstruction/network/FaceReconModel.pb -> ./data/models/FaceReconModel.pb
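
If you prefer to script this step, the sketch below copies the files into place; it assumes Deep3DFaceReconstruction is cloned next to the MeInGame directory (adjust the source path to your layout):

# Sketch: copy the BFM and reconstruction files into ./data/models
# (assumes ../Deep3DFaceReconstruction exists; adjust the path if needed).
import shutil
from pathlib import Path

src = Path("../Deep3DFaceReconstruction")
dst = Path("data/models")
dst.mkdir(parents=True, exist_ok=True)

shutil.copy(src / "BFM/BFM_model_front.mat", dst / "bfm2009_face.mat")
shutil.copy(src / "BFM/similarity_Lm3D_all.mat", dst / "similarity_Lm3D_all.mat")
shutil.copy(src / "network/FaceReconModel.pb", dst / "FaceReconModel.pb")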
  3. Download the pre-trained model, put the .pth file into the ./checkpoints/celeba_hq_demo subfolder, and the .pkl file into the ./data/models subfolder.
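
The expected folders can be created beforehand with a few lines (a sketch; the folder names come from the step above, and the downloaded file names depend on the release):

# Sketch: create the folders the demo expects before moving the downloaded
# .pth and .pkl files into them.
from pathlib import Path

Path("checkpoints/celeba_hq_demo").mkdir(parents=True, exist_ok=True)
Path("data/models").mkdir(parents=True, exist_ok=True)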

  4. Run the code.

python main.py -m test -i demo
# Or
python main.py -m test -i demo -c
# With the -c flag, the code runs on the CPU if you don't have a qualified GPU.
  5. The ./data/test subfolder contains several test images, and the ./results subfolder stores their reconstruction results. For each input test image, several output files are produced after running the demo code (see the inspection sketch after this list):
  • "xxx_input.jpg": an RGB image after alignment, which is the input to the network
  • "xxx_neu.obj": the reconstructed 3D face in neutral expression, which can be viewed in MeshLab.
  • "xxx_uv.png": the uvmap corresponding to the obj file.

Training with CelebA-HQ dataset

Data preparation

  1. Run the following command to create the training dataset from in-the-wild images.
python create_dataset.py
# You can modify input_dir to point to your own directory of input images.
  2. Download our RGB 3D Face Dataset (Google Drive), unzip it, and place it in the ./data/dataset/celeba_hq_gt subfolder. Please download and read the LICENSE AGREEMENT carefully, and email the signed license agreement to [email protected] to get the PASSWORD for the zip file. Note: applications are only open to researchers and faculty of universities or research institutes; students and business entities are not permitted to apply.

Training networks

After the dataset is ready, you can train the network with the following command:

python main.py -m train

Citation

Please cite the following paper if this model helps your research:

@inproceedings{lin2021meingame,
    title={MeInGame: Create a Game Character Face from a Single Portrait},
    author={Lin, Jiangke and Yuan, Yi and Zou, Zhengxia},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    year={2021}
}
