py-MDNet

by Hyeonseob Nam and Bohyung Han at POSTECH

Update (April, 2019)

  • Migration to Python 3.6 & PyTorch 1.0
  • Efficiency improvement (~5 fps)
  • ImageNet-VID pretraining
  • Code refactoring

Introduction

A PyTorch implementation of MDNet, which runs at ~5 fps using a single CPU core and a single GPU (GTX 1080 Ti).

If you use this code in your research, please cite:

@InProceedings{nam2016mdnet,
  author    = {Nam, Hyeonseob and Han, Bohyung},
  title     = {Learning Multi-Domain Convolutional Neural Networks for Visual Tracking},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2016}
}
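
The paper's core idea is a multi-domain network: convolutional and fully connected layers shared across all training videos (domains), plus one small target/background classification branch per domain. A minimal sketch of that structure (an illustrative toy model, not the repository's actual `modules/model.py` definition; all layer sizes here are assumptions):

```python
# Toy sketch of MDNet's multi-domain layout: shared feature layers
# plus one binary (target vs. background) head per training video.
import torch
import torch.nn as nn

class TinyMDNet(nn.Module):
    def __init__(self, num_domains, feat_dim=64):
        super().__init__()
        # Shared layers, trained jointly on every domain.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        # One domain-specific classification branch per video.
        self.branches = nn.ModuleList(
            [nn.Linear(feat_dim, 2) for _ in range(num_domains)]
        )

    def forward(self, x, domain):
        # Route the shared features through the chosen domain's branch.
        return self.branches[domain](self.shared(x))

model = TinyMDNet(num_domains=3)
scores = model(torch.randn(4, 3, 32, 32), domain=1)
print(scores.shape)  # torch.Size([4, 2])
```

At tracking time, the domain-specific branches are discarded and a fresh branch is fine-tuned online for the new target.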

Results on OTB

Prerequisites

  • Python 3.6+
  • OpenCV 3.0+
  • PyTorch 1.0+ and its dependencies
  • for GPU support: a GPU with ~3 GB of memory

Usage

Tracking

 python tracking/run_tracker.py -s DragonBaby [-d (display fig)] [-f (save fig)]
  • You can provide a sequence configuration in two ways (see tracking/gen_config.py):
    • python tracking/run_tracker.py -s [seq name]
    • python tracking/run_tracker.py -j [json path]
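
For the -j option, the JSON file describes the sequence to track. The exact schema is defined in tracking/gen_config.py; the field names below (seq_name, img_list, init_bbox) are illustrative assumptions, so check that script for the keys it actually reads:

```python
# Hypothetical example of writing a sequence-description JSON for
# `run_tracker.py -j`. Field names are assumptions, not the verified schema.
import json

config = {
    "seq_name": "DragonBaby",
    "img_list": ["img/0001.jpg", "img/0002.jpg"],  # frame paths, in order
    "init_bbox": [160, 83, 56, 65],                # [x, y, width, height]
}
with open("DragonBaby.json", "w") as f:
    json.dump(config, f, indent=2)

print(config["seq_name"])  # DragonBaby
```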

Pretraining

  • Download the VGG-M model (MatConvNet format) and save it as "models/imagenet-vgg-m.mat"
  • Pretraining on VOT-OTB
    • Download VOT datasets into "datasets/VOT/vot201x"
     python pretrain/prepro_vot.py
     python pretrain/train_mdnet.py -d vot
  • Pretraining on ImageNet-VID
     python pretrain/prepro_imagenet.py
     python pretrain/train_mdnet.py -d imagenet
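
In multi-domain pretraining, each iteration draws a minibatch from a single domain and updates the shared layers together with that domain's own branch, cycling through the domains. A toy sketch of that loop (an assumption-laden illustration, not the repository's train_mdnet.py; layer shapes and hyperparameters are made up):

```python
# Toy multi-domain pretraining loop: one minibatch per domain per cycle,
# updating shared layers plus only that domain's branch.
import torch
import torch.nn as nn

num_domains, feat_dim = 3, 8
shared = nn.Linear(10, feat_dim)  # stand-in for the shared conv/fc layers
branches = nn.ModuleList([nn.Linear(feat_dim, 2) for _ in range(num_domains)])
opt = torch.optim.SGD(
    list(shared.parameters()) + list(branches.parameters()), lr=0.1
)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(2):
    for d in range(num_domains):          # cycle through the domains
        x = torch.randn(16, 10)           # stand-in samples drawn from video d
        y = torch.randint(0, 2, (16,))    # 1 = target, 0 = background
        loss = loss_fn(branches[d](shared(x)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

print(f"final loss: {loss.item():.3f}")
```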