SSDG

The implementation of Single-Side Domain Generalization for Face Anti-Spoofing.

The motivation of the proposed SSDG method:

An overview of the proposed SSDG method:

Configuration Environment

  • python 3.6
  • pytorch 0.4
  • torchvision 0.2
  • cuda 8.0

Pre-training

Dataset.

Download the OULU-NPU, CASIA-FASD, Idiap Replay-Attack, and MSU-MFSD datasets.

Data Pre-processing.

The MTCNN algorithm is utilized for face detection and face alignment. All the detected faces are normalized to 256$\times$256$\times$3, where only the RGB channels are utilized for training.

To be specific, we process every frame of each video and then use the sample_frames function in utils/utils.py to sample frames during training.

Put the processed frames in the path $root/data/dataset_name.
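The sketch below shows one way this pre-processing step could be implemented. It assumes the `mtcnn` pip package and OpenCV, and simply takes the first detection per frame; the detector settings and alignment used in the original pipeline may differ.

```python
# Hedged sketch: detect a face with MTCNN and save a 256x256x3 RGB crop.
# Assumes the `mtcnn` pip package and OpenCV; the original pipeline's
# exact detector settings and alignment may differ.
import cv2
from mtcnn import MTCNN

detector = MTCNN()

def crop_and_save(frame_path, out_path, size=256):
    img = cv2.imread(frame_path)                  # BGR frame from disk
    rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)    # MTCNN expects RGB
    faces = detector.detect_faces(rgb)
    if not faces:
        return False                              # no face found in this frame
    x, y, w, h = faces[0]['box']                  # take the first detection
    x, y = max(x, 0), max(y, 0)
    face = cv2.resize(rgb[y:y + h, x:x + w], (size, size))
    cv2.imwrite(out_path, cv2.cvtColor(face, cv2.COLOR_RGB2BGR))
    return True
```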

Data Label Generation.

Move to $root/data_label and generate the data label list:

python generate_label.py
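For reference, the sketch below shows how such a label list could be produced, assuming a JSON list of `photo_path` / `photo_label` entries with real = 1 and attack = 0; the actual format written by generate_label.py may differ.

```python
# Hedged sketch of a label-list generator. The JSON field names and the
# real=1 / attack=0 convention are assumptions; check generate_label.py
# for the exact format the training code expects.
import json
from pathlib import Path

def make_label_list(dataset_root, out_json):
    entries = []
    for img in sorted(Path(dataset_root).rglob('*.jpg')):
        label = 1 if 'real' in img.parts else 0   # assumed folder naming
        entries.append({'photo_path': str(img), 'photo_label': label})
    with open(out_json, 'w') as f:
        json.dump(entries, f, indent=2)

make_label_list('../data/oulu', 'oulu_label.json')  # hypothetical paths
```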

Training

Move to the folder $root/experiment/testing_scenarios/ and run:

python train_ssdg_full.py

The file config.py contains all the hyper-parameters used during training.
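As a purely illustrative sketch, a hyper-parameter file of this style typically groups settings such as the learning rate, batch size, and loss weights; the names and values below are assumptions, not the contents of the repository's config.py.

```python
# Illustrative only: names and values are assumptions, not the actual
# contents of config.py in this repository.
class DefaultConfig:
    lr = 1e-3                  # initial learning rate
    batch_size = 10            # per-domain batch size
    max_iter = 4000            # number of training iterations
    lambda_triplet = 1.0       # weight of the asymmetric triplet loss
    lambda_adv = 0.1           # weight of the single-side adversarial loss
    gpus = '0'
    checkpoint_path = './checkpoint/'

config = DefaultConfig()
```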

Testing

Run the testing script:

python dg_test.py
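Cross-dataset face anti-spoofing results are usually reported as HTER and AUC. The sketch below is a generic evaluation routine over saved scores and labels, not the repository's dg_test.py; the score files are hypothetical.

```python
# Generic HTER / AUC evaluation sketch; this is not the repository's
# dg_test.py. `scores` are predicted live probabilities, `labels` use
# real=1 / attack=0, and the .npy files are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve, auc

def hter_auc(scores, labels):
    fpr, tpr, _ = roc_curve(labels, scores)
    roc_auc = auc(fpr, tpr)
    # pick the threshold where FAR (fpr) and FRR (1 - tpr) are closest,
    # i.e. the EER operating point on this test set
    idx = np.nanargmin(np.abs(fpr - (1 - tpr)))
    hter = (fpr[idx] + (1 - tpr[idx])) / 2.0
    return hter, roc_auc

scores = np.load('test_scores.npy')
labels = np.load('test_labels.npy')
print('HTER: %.4f  AUC: %.4f' % hter_auc(scores, labels))
```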

Citation

Please cite our paper if the code is helpful to your research.

@InProceedings{Jia_2020_CVPR_SSDG,
    author = {Yunpei Jia and Jie Zhang and Shiguang Shan and Xilin Chen},
    title = {Single-Side Domain Generalization for Face Anti-Spoofing},
    booktitle = {Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2020}
}