Mingxuan Teng, Jiakai Shi, Jiaming Yang
This repository provides the DRPI-CGAN model for facial image manipulation detection and recovery.
- This code supports Python 3.7+.
- Install PyTorch (pytorch.org), then install the remaining dependencies:
`pip install -r requirements.txt`
- The correlation layer in PWC-Net is implemented in CUDA using CuPy, which makes CuPy a required dependency. It can be installed with `pip install cupy`, or alternatively with one of the binary packages described in the CuPy repository.
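Before training, it can be useful to confirm that the optional CUDA-related dependencies are importable. The sketch below is illustrative only; the helper name `check_deps` is ours, not part of this repository:

```python
import importlib.util

def check_deps(names=("torch", "cupy")):
    """Return a dict mapping each dependency name to whether it can be imported."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# Example: report anything missing before launching training.
missing = [name for name, ok in check_deps().items() if not ok]
if missing:
    print("Missing dependencies:", ", ".join(missing))
```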
- Download the pretrained weights files from here.
- Unzip the weights file. You should see `discriminator.pth` and `generator.pth`, the pretrained weights for the discriminator and generator respectively.
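As a quick sanity check, you can verify that both checkpoints landed where you expect before pointing the training script at them. This is a minimal sketch (the helper name is hypothetical; only the two filenames above come from the repository):

```python
from pathlib import Path

EXPECTED_WEIGHTS = ("discriminator.pth", "generator.pth")

def missing_weights(weights_dir="."):
    """Return the names of any expected checkpoint files not found in weights_dir."""
    root = Path(weights_dir)
    return [name for name in EXPECTED_WEIGHTS if not (root / name).is_file()]

# Example: missing_weights("weights") -> [] once both files are unzipped there.
```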
- This code supports running local flow detection with the pretrained weights of DRN-C-26.
- Run `bash drn/weights/download_weights.sh` from the root directory of this repository.
- You should see `global.pth` and `local.pth`, where `local.pth` contains the pretrained weights for local flow detection.
Note: Our models are trained on faces cropped by the dlib CNN face detector. Both scripts include a `--no_crop` option to run the models without face cropping, but it is intended for images whose faces are already cropped. Please make sure your images are 400 × 400 if you choose `--no_crop`.
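If your faces are already roughly framed but not 400 × 400, one simple option is a center crop. The helper below is a hypothetical sketch (it is not the dlib-based pipeline the models were trained with); it computes a centered 400 × 400 box in Pillow's (left, upper, right, lower) convention:

```python
def center_crop_box(width, height, size=400):
    """Return (left, upper, right, lower) for a centered size x size crop."""
    if width < size or height < size:
        raise ValueError("image is smaller than the crop size")
    left = (width - size) // 2
    upper = (height - size) // 2
    return (left, upper, left + size, upper + size)

# Usage with Pillow (assumed installed):
#   from PIL import Image
#   img = Image.open("face.png")
#   img.crop(center_crop_box(*img.size)).save("face_400.png")
```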
To overfit the DRPI-CGAN model to a single pair of modified and original images, run the following command from the root directory of this repository:
python main.py overfit --basepath=<path_to_your_dataset> --outputpath=<path_to_your_outputs>
This command will write loss values to `losses.csv` in `<path_to_your_outputs>`.
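The exact column layout of `losses.csv` is defined by `main.py`; a generic reader like the sketch below can be adapted to inspect or plot the logged values (the column names in the comment are assumptions):

```python
import csv

def read_losses(path):
    """Read a CSV of logged losses into a list of dicts with float values."""
    with open(path, newline="") as f:
        return [
            {key: float(value) for key, value in row.items()}
            for row in csv.DictReader(f)
        ]

# Example (assumed columns): rows = read_losses("losses.csv")
# then plot [r["gen_loss"] for r in rows] against the iteration index.
```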
To train the DRPI-CGAN model on the training dataset, run the following command from the root directory of this repository:
python main.py train --basepath=<path_to_your_dataset> --outputpath=<path_to_your_outputs> --gen_checkpoint=<path_to_pretrained_generator_weights> --dis_checkpoint=<path_to_pretrained_discriminator_weights>
This command will write loss values to `losses.csv`, generator/discriminator checkpoints, and visualizations to `<path_to_your_outputs>`.
To evaluate the DRPI-CGAN model on the validation dataset, run the following command from the root directory of this repository:
python main.py evaluate --basepath=<path_to_your_dataset> --outputpath=<path_to_your_outputs>
This command will write 4 evaluation result CSV files and visualizations to `<path_to_your_outputs>`.
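Since evaluation produces four result CSV files, a small aggregation helper can summarize them side by side. The sketch below simply averages each numeric column of every CSV in the output directory (the filenames and column names are assumptions; check the actual files in your output path):

```python
import csv
from pathlib import Path

def column_means(csv_path):
    """Average every numeric column of one evaluation CSV."""
    sums, counts = {}, {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            for key, value in row.items():
                try:
                    x = float(value)
                except (TypeError, ValueError):
                    continue  # skip non-numeric cells such as image names
                sums[key] = sums.get(key, 0.0) + x
                counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

def summarize(output_dir):
    """Map each result CSV in output_dir to its per-column means."""
    return {p.name: column_means(p) for p in sorted(Path(output_dir).glob("*.csv"))}
```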
This code supports running the local flow detection from FALDetector for performance comparison with DRPI-CGAN.
To run the pretrained DRN-C-26 model on the validation dataset, run the following command from the root directory of this repository:
python main.py pretrain --basepath=<path_to_your_dataset> --outputpath=<path_to_your_outputs>
This command will write 4 evaluation result CSV files and visualizations to `<path_to_your_outputs>`.
This code also supports testing the usability of DRN-C-26 and PWC-Net for generating optical flow.
To test the pretrained DRN-C-26 model and PWC-Net model on a single image pair, run the following command from the root directory of this repository:
python test.py --modify=<path_to_your_modified_image> --origin=<path_to_your_original_image> --model="./drn/weights/local.pth" --output_dir=<path_to_your_outputs>
A validation set consisting of 500 original and 500 modified images from the Flickr-Faces-HQ Dataset (FFHQ) on Kaggle can be downloaded here.
After unzipping the file, you should see two folders, `modified` and `reference`, where `reference` contains the 500 original images and `modified` contains the 500 modified images.
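Given that layout, a modified image and its original can be matched across the two folders. The sketch below assumes pairs share the same filename (an assumption based on the folder layout, not confirmed by the repository):

```python
from pathlib import Path

def paired_images(dataset_root):
    """Yield (modified_path, reference_path) pairs matched by filename."""
    root = Path(dataset_root)
    for modified in sorted((root / "modified").iterdir()):
        reference = root / "reference" / modified.name
        if reference.is_file():
            yield modified, reference

# Example: for mod, ref in paired_images("<path_to_your_dataset>"): ...
```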
This repository borrows partially from the DCGAN PyTorch Tutorial, pytorch-CycleGAN-and-pix2pix, FALDetector, drn, pwc-net, and the PyTorch torchvision models repositories.