We use the AiFenGe (AISegment.com matting human) dataset for training. You can download it here: https://www.kaggle.com/laurentmih/aisegmentcom-matting-human-datasets/
We use the EBB! dataset for testing. The dataset provides train, val, and test splits; however, the images with blurred backgrounds are included only in the train split, so we use the EBB!-train split for testing TNET. Please download it here: https://competitions.codalab.org/competitions/24716#participate
PyTorch 1.7
Python 3.6
python3 preprocess.py
python3 train.py
python3 blur_image.py
python3 blurred_background.py
We use peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as the evaluation metrics.
| PSNR | SSIM |
|---|---|
| 21.92 | 0.74 |
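As a rough reference, below is a minimal sketch of how the two metrics can be computed with scikit-image. The file paths and the way prediction/ground-truth pairs are collected are assumptions for illustration, not part of this repository's scripts.

```python
# Minimal sketch of PSNR/SSIM evaluation using scikit-image.
# Paths and the pairing of predicted vs. ground-truth images are hypothetical.
import numpy as np
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(pred_path, gt_path):
    pred = io.imread(pred_path)
    gt = io.imread(gt_path)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    # channel_axis=-1 treats the last axis as color channels (skimage >= 0.19)
    ssim = structural_similarity(gt, pred, data_range=255, channel_axis=-1)
    return psnr, ssim

# Average the scores over all (prediction, ground-truth) pairs.
pairs = [("results/0001.png", "ebb_train/blurred/0001.jpg")]  # hypothetical paths
scores = [evaluate_pair(p, g) for p, g in pairs]
print("PSNR: %.2f  SSIM: %.2f" % (np.mean([s[0] for s in scores]),
                                  np.mean([s[1] for s in scores])))
```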