3D model generation from a single image is a challenging task due to the lack of texture information and limited training data. This project proposes a novel approach to texture estimation from a single image using a generative adversarial network (StyleGAN3) and 3D Dense Face Alignment (3DDFA). The method first generates multi-view faces by traversing the latent space of StyleGAN3 with the ReStyle encoder. 3DDFA then generates a high-resolution texture map and maps it onto a 3D model that is consistent with the estimated face shape.
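The pipeline described above can be sketched at a high level. The function names and return values below are illustrative placeholders only, not the actual API of this repository:

```python
# Illustrative outline of the method; the real code wires up
# StyleGAN3, the ReStyle encoder, and 3DDFA (names here are placeholders).

def invert_with_restyle(image):
    """Invert the input face into StyleGAN3 latent space via the ReStyle encoder."""
    return {"latent": image}  # placeholder latent code

def generate_multi_view(latent, yaw_angles=(-30, 0, 30)):
    """Synthesize faces at several yaw angles by editing the latent code."""
    return [(yaw, latent) for yaw in yaw_angles]  # placeholder renders

def fit_texture_with_3ddfa(views):
    """Fit a 3D face with 3DDFA and assemble a high-resolution UV texture map."""
    return {"mesh": "face.obj", "texture": "uv_map.png", "n_views": len(views)}

def run_pipeline(image):
    latent = invert_with_restyle(image)["latent"]
    views = generate_multi_view(latent)
    return fit_texture_with_3ddfa(views)
```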
- Ubuntu 22.04
- Python 3.8
- PyTorch (2.0 works great)
- OpenCV
- Dlib
- Cython
- Cmake
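As a quick sanity check (not part of the repository), the snippet below reports which of the dependencies above are importable in the current environment, without failing if one is missing:

```python
import importlib.util
import sys

def check_dependencies(modules=("torch", "cv2", "dlib", "Cython")):
    """Map each module name to whether it can be imported in this environment."""
    return {name: importlib.util.find_spec(name) is not None for name in modules}

if __name__ == "__main__":
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}")
    for name, ok in check_dependencies().items():
        print(f"{name:>8}: {'found' if ok else 'MISSING'}")
```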
Download the pretrained encoders from the links below and place them in the `pretrained_model` folder.
| Encoder | Description |
|---|---|
| ReStyle-pSp Human Faces | ReStyle-pSp trained on the FFHQ dataset over the StyleGAN3 generator. |
| ReStyle-e4e Human Faces | ReStyle-e4e trained on the FFHQ dataset over the StyleGAN3 generator. |
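Before running anything, it can help to verify that the downloaded checkpoints are actually in place. The checkpoint filenames below are assumptions for illustration; substitute the names of the files you downloaded:

```python
from pathlib import Path

def missing_pretrained(folder="pretrained_model",
                       expected=("restyle_psp_ffhq.pt", "restyle_e4e_ffhq.pt")):
    """Return the expected checkpoint files not present in `folder`.

    The default filenames are illustrative assumptions, not the
    repository's actual checkpoint names.
    """
    root = Path(folder)
    return [name for name in expected if not (root / name).is_file()]

if __name__ == "__main__":
    missing = missing_pretrained()
    if missing:
        print("Missing checkpoints:", ", ".join(missing))
    else:
        print("All pretrained encoders found.")
```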
- Clone the repo:

  ```shell
  git clone https://github.com/rohit7044/3DGANTex
  ```
- Download both the pretrained models mentioned above
- Build the Cython versions of NMS, Sim3DR, and the faster mesh renderer from the main directory:

  ```shell
  sh ./TDDFA_build.sh
  ```
- Open `3D-GANTex.py` and make the changes mentioned in the code.
- Finally, after making the changes, run the script; the multi-view images, texture map, and 3D model will be saved in the `output_data` directory.
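To double-check the exported model, a minimal stdlib-only OBJ inspector can count vertices, UV coordinates, and faces; the output path below is an assumption, so point it at whatever file lands in `output_data`:

```python
def obj_stats(path):
    """Count vertices (v), UV coordinates (vt), and faces (f) in a Wavefront OBJ file."""
    counts = {"v": 0, "vt": 0, "f": 0}
    with open(path) as fh:
        for line in fh:
            tag = line.split(maxsplit=1)[0] if line.strip() else ""
            if tag in counts:
                counts[tag] += 1
    return counts

if __name__ == "__main__":
    # Hypothetical filename -- replace with the model saved in output_data.
    print(obj_stats("output_data/face_mesh.obj"))
```

A textured mesh should report non-zero `vt` entries; if `vt` is 0, the UV texture was not embedded.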
- Third Time's the Charm? Image and Video Editing with StyleGAN3
- InterFaceGAN: Interpreting the Latent Space of GANs for Semantic Face Editing
- 3DDFA_V2: Towards Fast, Accurate and Stable 3D Dense Face Alignment
- The 3D face model has a UV texture embedded, but the texture only displays correctly in MeshLab and Open3D.
- Weak results on images with glasses.
- It is best to use a portrait image that contains only the face, like the example in the `input_data` directory.
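Since tightly framed portraits work best, a small stdlib-only helper (illustrative, not part of the repository) can compute a centered square crop box to apply with any image library before feeding an image to the pipeline:

```python
def center_square_crop(width, height):
    """Return (left, top, right, bottom) of the largest centered square crop."""
    side = min(width, height)
    left = (width - side) // 2
    top = (height - side) // 2
    return (left, top, left + side, top + side)
```

For example, a 400x300 landscape photo yields a 300x300 box centered horizontally, which keeps a centered face while discarding background at the sides.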