Windows Installation Tutorial

(The JoJoGAN code will not be updated)

Run JoJoGAN locally

Small fixes compared to JoJoGAN-Training-Windows.

Following the JoJoGAN-Training-Windows guide on an NVIDIA RTX 2060, running conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch installed PyTorch from pytorch/win-64::pytorch-1.13.1-py3.7_cpu_0. As a result CUDA was not found and PyTorch ran on the CPU, so I worked out the combination of Python package versions given in the steps below.

Step 0

On Windows 11 install:

  • Anaconda
  • CUDA Toolkit 10.2
  • Visual Studio 2019 Community (with the MSVC v14.29 C++ build tools)

Environment Variables should be:

CUDA_PATH = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2
Path = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin
      C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\libnvvp
      C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat
      C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64
      C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin
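
Once a Python interpreter is available (for example inside the jojo environment created in Step 1), a quick optional check can confirm that CUDA_PATH is set and that the CUDA bin folder is actually on Path:

# Check that the CUDA environment variables above are visible.
import os
import shutil

print("CUDA_PATH =", os.environ.get("CUDA_PATH"))  # expect ...\CUDA\v10.2
print("nvcc      =", shutil.which("nvcc"))         # expect ...\CUDA\v10.2\bin\nvcc.exe; None means bin is not on Path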

Step 1

Create a new Anaconda environment.

conda create -n jojo python=3.7
conda activate jojo

Alternatively, download the ready-made Anaconda environment (environment.yml) and go directly to the Final Test.

conda env create -f environment.yml
conda activate jojo

Step 2

Install PyTorch and its related packages.

conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=10.2 -c pytorch

If instead you install PyTorch with pip, as in:

pip install torch==1.10.1+cu102 torchvision==0.11.2+cu102 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu102/torch_stable.html

you may later get this error when importing dlib:

import dlib
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: DLL load failed: The specified module could not be found.
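
After the conda install, a quick check confirms that you got a CUDA build of PyTorch and not the CPU-only package mentioned at the top of this guide:

# Run inside the jojo environment.
import torch

print(torch.__version__)          # expect 1.10.1
print(torch.version.cuda)         # expect 10.2 (None would mean a CPU-only build)
print(torch.cuda.is_available())  # expect True with a working NVIDIA driver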

Step 3

Install the required Python packages.

pip install tqdm gdown scikit-learn==0.22 scipy lpips dlib==19.20 opencv-python wandb matplotlib scikit-image pybind11 cmake ninja
conda install -c conda-forge ffmpeg
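
To confirm everything installed cleanly, try importing the packages from the jojo environment; note that some import names differ from the pip package names (opencv-python is imported as cv2, scikit-image as skimage, scikit-learn as sklearn):

# Import check for the packages installed above.
import tqdm, gdown, scipy, lpips, wandb, matplotlib, pybind11
import sklearn    # scikit-learn 0.22
import dlib       # 19.20
import cv2        # opencv-python
import skimage    # scikit-image

print(sklearn.__version__, dlib.__version__, cv2.__version__)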

Step 4

Clone the JoJoGAN project.

git clone https://github.com/mchong6/JoJoGAN.git
cd JoJoGAN

Final Test

Check that dlib imports correctly and that PyTorch detects the GPU.

(jojo) C:\JoJoGAN>python
Python 3.7.16 (default, Jan 17 2023, 16:06:28) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import dlib
>>> import torch
>>> torch.cuda.is_available()
True
>>>

Then check that the JoJoGAN-specific modules import without errors.

(jojo) C:\JoJoGAN>python
Python 3.7.16 (default, Jan 17 2023, 16:06:28) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from model import *
>>> from e4e_projection import projection as e4e_projection
>>> from util import *
>>>

If you get an error containing torch\utils\cpp_extension.py:237: UserWarning: Error checking compiler version for cl, make sure that the Path entry "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64" actually contains a cl.exe file; otherwise, search the folder "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin" for the right one.
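
A short helper makes it easy to see whether cl.exe is reachable and which MSVC subfolder actually contains it:

# Locate cl.exe from inside the jojo environment.
import os
import shutil

print("cl.exe on Path:", shutil.which("cl"))  # None means no Path entry contains cl.exe

# Search the MSVC bin folder for every cl.exe, to find the right directory to add to Path.
msvc_bin = r"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin"
for root, _, files in os.walk(msvc_bin):
    if "cl.exe" in files:
        print("cl.exe found in:", root)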

Run JoJoGAN

Activate the jojo environment and launch jupyter-notebook. Clone this repository and copy run-local.py (adapted from https://github.com/prodramp/DeepWorks/tree/main/JoJoGAN) into the main folder. Run the code step by step.
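
run-local.py already walks through the whole pipeline, but as a rough orientation, a stylization run along the lines of stylize.ipynb looks roughly like the sketch below; the checkpoint name models/jojo.pt and the image test_input/face.png are placeholders, and the exact Generator arguments and checkpoint keys may differ between versions of the upstream code.

# Rough sketch of a stylization run (modelled on stylize.ipynb; file names are placeholders).
import torch
from model import Generator
from util import align_face
from e4e_projection import projection as e4e_projection

device = "cuda"
latent_dim = 512

# StyleGAN2 generator at 1024x1024 resolution, loaded with a finetuned JoJoGAN checkpoint.
generator = Generator(1024, latent_dim, 8, 2).to(device)
ckpt = torch.load("models/jojo.pt", map_location=lambda storage, loc: storage)
generator.load_state_dict(ckpt["g"], strict=False)
generator.eval()

# Align and crop the input face, then invert it into W+ space with the e4e encoder.
aligned_face = align_face("test_input/face.png")
my_w = e4e_projection(aligned_face, "test_input/face.pt", device).unsqueeze(0)

# Generate the stylized face. Depending on the model.py version, the generator
# may return a tensor or a (tensor, latent) tuple.
with torch.no_grad():
    stylized = generator(my_w, input_is_latent=True)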

Run JoJoGAN in Cluster DEI of University of Padua

If your GPU does not have at least 12 GB of memory and is not an RTX 3090 or a Tesla V100, you can run the code on the SLURM Cluster DEI.

Requirements to access SLURM:

Requirements to create the Singularity Container:

  • Ubuntu
  • Singularity

Create Singularity Container

To run your code on SLURM you need to create a Singularity container. I created the container on Ubuntu because it requires fewer applications and has fewer conflicts than Windows, but Singularity can also be installed on Windows and produces the same results.

Create the container from the Singularity Definition file singularity-container.def.

Open a terminal and run: sudo singularity build singularity-container.sif singularity-container.def

If you want to modify something you can run: singularity shell singularity-container.sif.

The singularity-container.sif container contains:

  • Ubuntu 18.04
  • CUDA 11.1 with its location saved in the PATH
  • Ninja package
  • Anaconda 2020
  • An Anaconda environment called jojo with:
    • Python 3.7
    • pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
    • pip install cmake
    • pip install dlib==19.20 tqdm gdown scikit-learn==0.22 scipy lpips opencv-python wandb matplotlib scikit-image pybind11 ninja
    • conda install -c conda-forge ffmpeg
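
To check that the container matches this list before submitting jobs, open it with singularity shell --nv singularity-container.sif, activate the jojo environment and run a quick check such as:

# Quick check inside the container's jojo environment.
import torch, dlib

print(torch.__version__)          # expect 1.7.1+cu110
print(torch.version.cuda)         # expect 11.0 (the cu110 wheel)
print(dlib.__version__)           # expect 19.20.x
print(torch.cuda.is_available())  # True only on a machine with an NVIDIA GPU and driver, e.g. a cluster node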

Run on Cluster DEI

Log in to Windows 11 and download main.job. Be careful to keep the name main.job. Also download main.py if you want to run JoJoGAN with a pretrained model, or main-create-own-style.py if you want to create a model from your own style references. Both scripts take command-line arguments; check main.job to make sure they are invoked with the arguments you want.

Open WinSCP and connect to login.dei.unipd.it using the SCP protocol.

Your workspace structure should be (“bortoletti” is the example workspace):

    /home/bortoletti
    ├── JoJoGAN                           # clone of https://github.com/bortoletti-giorgia/JoJoGAN-Windows
    │   ├── inversion_codes               # folder created after execution of main.py
    │   ├── style_images                  # folder created after execution of main.py
    │   ├── style_images_aligned          # folder created after execution of main.py
    │   ├── models                        # folder containing pretrained models
    │   ├── results                       # folder created after execution of main.py
    │   ├── main.py                       # main code to run JoJoGAN with pretrained model
    │   └── main-create-own-style.py      # main code to create a model with your style images
    ├── out                               # folder with TXT files with errors and shell output of main.job
    ├── main.job                          # JOB file for running JoJoGAN
    └── singularity-container.sif         # Singularity container for executing the JOB file

The models folder already contains dlibshape_predictor_68_face_landmarks.dat, which is used to find faces in images. It is recommended to also place the pretrained models e4e_ffhq_encode.pt and stylegan2-ffhq-config-f.pt in the models folder. They can be downloaded using the first lines of main-create-own-style.py or of stylize.ipynb.

dlibshape_predictor_68_face_landmarks.dat was downloaded (on Linux) with:

wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
bzip2 -dk shape_predictor_68_face_landmarks.dat.bz2
mv shape_predictor_68_face_landmarks.dat models/dlibshape_predictor_68_face_landmarks.dat
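
For reference, this is roughly how the 68-point shape predictor is used to locate facial landmarks (the alignment code in the JoJoGAN repository does something similar internally; face.jpg is a placeholder path):

# Detect a face and its 68 landmarks with dlib.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("models/dlibshape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("face.jpg")
faces = detector(img, 1)              # upsample once to catch smaller faces

for face in faces:
    shape = predictor(img, face)      # 68 landmark points
    points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
    print(len(points), "landmarks, first:", points[0])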

Open PuTTY and run: sbatch main.job. When the job finishes you will find:

  • in ./out: one TXT file with the errors and one TXT file with the shell output of the job;
  • in ./JoJoGAN/results: the images produced by main.py or main-create-own-style.py.

JoJoGAN: One Shot Face Stylization


This is the PyTorch implementation of JoJoGAN: One Shot Face Stylization.

Abstract:
While there have been recent advances in few-shot image stylization, these methods fail to capture stylistic details that are obvious to humans. Details such as the shape of the eyes, the boldness of the lines, are especially difficult for a model to learn, especially so under a limited data setting. In this work, we aim to perform one-shot image stylization that gets the details right. Given a reference style image, we approximate paired real data using GAN inversion and finetune a pretrained StyleGAN using that approximate paired data. We then encourage the StyleGAN to generalize so that the learned style can be applied to all other images.
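
In code, the procedure can be outlined roughly as follows. This is a simplified sketch of the idea described in the abstract, not the exact training loop (see stylize.ipynb for that): generator is assumed to be a pretrained StyleGAN2 generator whose forward accepts a W+ latent with input_is_latent=True, style_w is the W+ code of the style reference obtained by GAN inversion, style_img is that reference as a [1, 3, H, W] tensor in [-1, 1], and random_w_fn samples a random W+ code of the same shape as style_w.

# Simplified outline of one-shot finetuning (not the authors' exact code).
import torch
import torch.nn.functional as F
import lpips

def finetune_one_shot(generator, style_w, style_img, random_w_fn,
                      num_iter=300, device="cuda"):
    percept = lpips.LPIPS(net="vgg").to(device)        # perceptual loss
    optim = torch.optim.Adam(generator.parameters(), lr=2e-3, betas=(0.0, 0.99))

    for _ in range(num_iter):
        # Style-mix: keep the coarse layers of the inverted reference and replace
        # the finer layers with a random code, so the generator must reproduce the
        # reference style for latents other than the reference's own.
        mixed_w = style_w.clone()
        mixed_w[:, 7:] = random_w_fn()[:, 7:]

        out = generator(mixed_w, input_is_latent=True)
        img = out[0] if isinstance(out, tuple) else out

        # Compare against the style reference at a reduced resolution.
        loss = percept(F.interpolate(img, size=256, mode="area"),
                       F.interpolate(style_img, size=256, mode="area")).mean()

        optim.zero_grad()
        loss.backward()
        optim.step()

    return generator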

Updates

  • 2021-12-22 Integrated into Replicate using cog. Try it out on Replicate.

  • 2022-02-03 Updated the paper. Improved stylization quality using discriminator perceptual loss. Added sketch model

  • 2021-12-26 Added wandb logging. Fixed finetuning bug which begins finetuning from previously loaded checkpoint instead of the base face model. Added art model


  • 2021-12-25 Added arcane_multi model which is trained on 4 arcane faces instead of 1 (if anyone has more clean data, let me know!). Better preserves features

  • 2021-12-23 Paper is uploaded to arxiv.

  • 2021-12-22 Integrated into Huggingface Spaces 🤗 using Gradio. Try it out on Hugging Face Spaces.

  • 2021-12-22 Added pydrive authentication to avoid download limits from gdrive! Fixed running on cpu on colab.

How to use

Everything to get started is in the colab notebook.

Citation

If you use this code or ideas from our paper, please cite our paper:

@article{chong2021jojogan,
  title={JoJoGAN: One Shot Face Stylization},
  author={Chong, Min Jin and Forsyth, David},
  journal={arXiv preprint arXiv:2112.11641},
  year={2021}
}

Acknowledgments

This code borrows from StyleGAN2 by rosinality and from e4e. Some snippets of colab code come from StyleGAN-NADA.
