An unofficial implementation of 'Domain Adaptive Faster R-CNN for Object Detection in the Wild'
-
Install PyTorch
-
Our code is built on faster-rcnn.pytorch; please set up that framework first, following its installation instructions.
Download dataset
-
- We use Cityscapes and Foggy Cityscapes as the source and target domains, respectively; the Cityscapes dataset can be downloaded here.
- The dataset format is similar to VOC; you only need to split train.txt into train_s.txt (source) and train_t.txt (target).
- You can also download the prepared dataset from GoogleDrive.
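The train.txt split described above can be sketched as follows. This is a minimal, hypothetical helper, not part of the repo: it assumes the list files live in the current directory and that target-domain (Foggy Cityscapes) image ids can be told apart by a `foggy` marker in the name; adjust both assumptions to your actual VOC-style layout.

```python
def split_train_list(train_list="train.txt",
                     source_out="train_s.txt",
                     target_out="train_t.txt",
                     target_marker="foggy"):
    """Write source/target image-id lists from a combined train.txt.

    Assumption (not from the repo): target-domain ids contain
    `target_marker`; everything else is treated as source-domain.
    """
    with open(train_list) as f:
        ids = [line.strip() for line in f if line.strip()]
    source = [i for i in ids if target_marker not in i]
    target = [i for i in ids if target_marker in i]
    with open(source_out, "w") as f:
        f.write("\n".join(source) + "\n")
    with open(target_out, "w") as f:
        f.write("\n".join(target) + "\n")
    return len(source), len(target)
```

If your ids do not carry such a marker, any other deterministic rule (e.g. a separate list of target ids) works equally well; the only requirement is that every id from train.txt ends up in exactly one of the two output files.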
1. To train the model, first download the pretrained model [vgg_caffe](https://github.com/jwyang/faster-rcnn.pytorch), which is different from the pure PyTorch pretrained model.
2. Change the dataset root path in ./lib/model/utils/config.py and the dataset directory paths in ./lib/datasets/cityscape.py. The default dataset path is ./data and the default pre-trained model path is /data/ztc/detectionModel/.
3. Train the model (cityscapes -> cityscapes-foggy):

        CUDA_VISIBLE_DEVICES=GPU_ID python da_trainval_net.py --dataset cityscape --net vgg16 --bs 1 --lr 2e-3 --lr_decay_step 6 --cuda
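One way to satisfy the default paths from step 2 is to symlink the data into ./data. This is only an illustrative sketch; `/path/to/cityscape` is a placeholder for wherever your VOC-style Cityscapes / Foggy Cityscapes data actually lives, and the exact directory name must match what ./lib/datasets/cityscape.py expects.

```shell
# Create the default ./data root used by config.py
mkdir -p data
# Symlink the dataset into it (placeholder source path -- adjust to your setup)
ln -sfn /path/to/cityscape data/cityscape
```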
- Test the model (move eval/test.py to ./test.py):

      CUDA_VISIBLE_DEVICES=GPU_ID python test.py --dataset cityscape --part test_t --cuda --model_dir=<the path of your .pth model>
Our model reaches mAP = 30.71% on the target domain, which is higher than the baseline mAP = 24.26%.