FaceScape

FaceScape provides large-scale, high-quality 3D face datasets, parametric models, docs, and toolkits for 3D-face-related technology. [CVPR2020 paper] [extended arXiv Report] [supplementary]

Our latest progress will be updated to this repository constantly. [Latest update: 2021/12/2]

Data

The data can be downloaded from https://facescape.nju.edu.cn/ after requesting a license key.
New: A share link on Google Drive is available after requesting a license key; see here for details.
New: The bilinear model ver 1.6 can be downloaded without requesting a license key; see here for the link and rules.

The available sources include:

| Item (Docs) | Description | Quantity | Quality |
| --- | --- | --- | --- |
| TU models | Topologically uniformed 3D face models with displacement map and texture map. | 16940 models (847 id × 20 exp) | Detailed geometry, 4K dp/tex maps |
| Multi-view data | Multi-view images, camera parameters and corresponding 3D face mesh. | >400k images (359 id × 20 exp × ≈60 view) | 4M~12M pixels |
| Bilinear model | The statistical model to transform the base shape into the vector space. | 4 for different settings | Only for base shape |
| Info list | Gender / age of the subjects. | 847 subjects | -- |
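
The bilinear model listed above represents a base-shape mesh as a core tensor contracted with an identity weight vector and an expression weight vector. The snippet below is a minimal sketch of that idea only; the file name, tensor layout, and parameter dimensions are assumptions, not the released format, and the official usage is shown in the toolkit's bilinear_model-basic demo.

```python
# Minimal sketch of generating a base-shape mesh with a bilinear face model.
# The file name and tensor layout are assumptions, not the released format;
# see the bilinear_model-basic toolkit demo for the official loading code.
import numpy as np

# Hypothetical core tensor of shape (3 * n_vertices, n_id_params, n_exp_params).
core = np.load("core_tensor.npy")
n_xyz, n_id, n_exp = core.shape

# Identity and expression weights (here simply the first basis of each).
id_w = np.zeros(n_id)
id_w[0] = 1.0
exp_w = np.zeros(n_exp)
exp_w[0] = 1.0

# Contract the core tensor with both weight vectors (mode-2 and mode-3 products).
verts = np.einsum("vie,i,e->v", core, id_w, exp_w).reshape(-1, 3)
print(verts.shape)  # (n_vertices, 3) base-shape vertex positions
```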

The datasets are released for non-commercial research use only. Because facial data involves the privacy of participants, we use strict license terms to ensure that the dataset is not abused.

Benchmark for SVFR

We present a benchmark to evaluate the accuracy of single-view 3D face reconstruction (SVFR) methods; see here for details.

ToolKit

Start using the Python toolkit here; the demos include:

  • bilinear_model-basic - use the FaceScape bilinear model to generate 3D mesh models.
  • bilinear_model-fit - fit the bilinear model to 2D/3D landmarks.
  • multi-view-project - project 3D models to multi-view images.
  • landmark - extract landmarks using predefined vertex indices (a minimal sketch follows this list).
  • facial_mask - extract the facial region from the full-head TU-models.
  • render - render TU-models to color images and depth maps.
  • alignment - align all the multi-view models.
  • symmetry - get the correspondence of the vertices on TU-models from the left side to the right side.
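
Because every TU-model shares the same topology, landmark extraction reduces to indexing the mesh vertices with a fixed list of vertex indices. The sketch below illustrates only that idea; the mesh path, index file, and use of trimesh are assumptions, not the toolkit's actual interface.

```python
# Minimal sketch of landmark extraction via predefined vertex indices.
# File names and the trimesh loader are assumptions; the real index list
# and loader ship with the toolkit's landmark demo.
import numpy as np
import trimesh  # pip install trimesh

mesh = trimesh.load("tu_model.obj", process=False)       # hypothetical TU-model path
lm_idx = np.loadtxt("landmark_indices.txt", dtype=int)   # hypothetical index list

# All TU-models share one topology, so a fixed vertex-index list selects the
# same semantic landmarks (eye corners, nose tip, ...) on every model.
landmarks_3d = np.asarray(mesh.vertices)[lm_idx]          # (n_landmarks, 3)
print(landmarks_3d.shape)
```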

Code

The code for the detailed riggable 3D face prediction in our paper is released here.

ChangeLog

  • 2021/12/2
    The benchmark to evaluate single-view face reconstruction is available; see here for details.
  • 2021/8/16
    A share link on Google Drive is available after requesting a license key; see here for details.
  • 2021/5/13
    The fitting demo is added to the toolkit. Please note that if you downloaded bilinear model ver 1.6 before 2021/5/13, you need to download it again, because some parameters required by the fitting demo have been added.
  • 2021/4/14
    The bilinear model has been updated to ver 1.6; check it here.
    The new bilinear model can now be downloaded from NJU Drive or Google Drive without requesting a license key. Check it here.
    The toolkit and docs have been updated with new content.
    Some wrong ages and genders in the info list have been corrected in "info_list_v2.txt".
  • 2020/9/27
    The code of detailed riggable 3D face prediction is released, check it here.
  • 2020/7/25
    Multi-view data is available for download.
    The bilinear model is updated to ver 1.3, with vertex color added.
    The info list including gender and age is available on the download page.
    Tools and samples are added to this repository.
  • 2020/7/7
    The bilinear model is updated to ver 1.2.
  • 2020/6/13
    The website of FaceScape is online.
    3D models and bilinear models are available for download.
  • 2020/3/31
    The pre-print paper is available on arXiv.

Bibtex

If you find this project helpful to your research, please consider citing:

@InProceedings{yang2020facescape,
  author = {Yang, Haotian and Zhu, Hao and Wang, Yanru and Huang, Mingkai and Shen, Qiu and Yang, Ruigang and Cao, Xun},
  title = {FaceScape: A Large-Scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2020},
  pages = {601--610}}

Extended version with the benchmark:

@article{zhu2021facescape,
  title={FaceScape: 3D Facial Dataset and Benchmark for Single-View 3D Face Reconstruction},
  author={Zhu, Hao and Yang, Haotian and Guo, Longwei and Zhang, Yidi and Wang, Yanru and Huang, Mingkai and Shen, Qiu and Yang, Ruigang and Cao, Xun},
  journal={arXiv preprint arXiv:2111.01082},
  year={2021}
}

Acknowledgement

The project is supported by the CITE Lab of Nanjing University, Baidu Research, and Aiqiyi Inc. Student contributors: Ji Shengyu, Jin Wei, Huang Mingkai, Wang Yanru, Yang Haotian, Zhang Yidi, Xiao Yunze, Ding Yuxin, Guo Longwei.
