# An Unofficial PyTorch Implementation of MVSNet

*MVSNet: Depth Inference for Unstructured Multi-view Stereo*. Yao Yao, Zixin Luo, Shiwei Li, Tian Fang, Long Quan. ECCV 2018. MVSNet is a deep learning architecture for depth map inference from unstructured multi-view images.

This is an unofficial PyTorch implementation of MVSNet.
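
At the core of the architecture, features from each source view are homography-warped into the reference camera frustum at a set of depth hypotheses and aggregated into a cost volume with a variance metric. Below is a minimal PyTorch sketch of that aggregation step; the shapes and the function name are illustrative, not the exact code in this repo:

```python
import torch

def variance_cost_volume(ref_feature, warped_src_features):
    """Aggregate reference and warped source features into a
    variance-based cost volume, as in the MVSNet paper.

    ref_feature:         [B, C, H, W] reference-view feature map
    warped_src_features: list of [B, C, D, H, W] source-view features,
                         already warped to each of the D depth hypotheses
    returns:             [B, C, D, H, W] cost volume (variance across views)
    """
    # replicate the reference feature across the D depth hypotheses
    ref_volume = ref_feature.unsqueeze(2).expand_as(warped_src_features[0])
    volumes = torch.stack([ref_volume] + list(warped_src_features))  # [V, B, C, D, H, W]
    # MVSNet's cost metric: per-element variance over the V views
    return volumes.var(dim=0, unbiased=False)
```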

## How to Use

### Environment

- Python 3.6 (Anaconda)
- PyTorch 1.0.1
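
A quick way to confirm your environment matches (a trivial sanity check, not part of the repo):

```python
# Sanity-check the Python/PyTorch versions listed above.
import sys
import torch

print(sys.version.split()[0])      # expect 3.6.x
print(torch.__version__)           # expect 1.0.1
print(torch.cuda.is_available())   # training and testing assume a CUDA GPU
```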

### Training

- Download the preprocessed DTU training data (fixed training cameras, from the original MVSNet) and unzip it as the `MVS_TRAINING` folder.
- In `train.sh`, set `MVS_TRAINING` to your training data path.
- Create a log directory called `checkpoints`.
- Train MVSNet: `./train.sh`

### Testing

- Download the preprocessed DTU testing data (from the original MVSNet) and unzip it as the `DTU_TESTING` folder, which should contain one `cams` folder, one `images` folder, and one `pair.txt` file (a sketch of the camera file format follows this list).
- In `test.sh`, set `DTU_TESTING` to your testing data path and `CKPT_FILE` to your checkpoint file. You can also download my pretrained model.
- Test MVSNet: `./test.sh`
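
Each file in the `cams` folder follows the original MVSNet camera convention: a 4x4 world-to-camera extrinsic matrix, a 3x3 intrinsic matrix, and a final line with the minimum depth and the depth-plane interval. A minimal parser sketch under that assumption (`read_cam_file` is an illustrative name, not necessarily the helper used in this repo):

```python
import numpy as np

def read_cam_file(filename):
    """Parse an MVSNet-style camera file (layout assumed as described above)."""
    with open(filename) as f:
        lines = [line.rstrip() for line in f]
    # lines[0] == 'extrinsic'; lines[1:5] hold the 4x4 world-to-camera matrix
    extrinsics = np.array(' '.join(lines[1:5]).split(), dtype=np.float32).reshape(4, 4)
    # lines[6] == 'intrinsic'; lines[7:10] hold the 3x3 intrinsic matrix
    intrinsics = np.array(' '.join(lines[7:10]).split(), dtype=np.float32).reshape(3, 3)
    # last data line: DEPTH_MIN DEPTH_INTERVAL
    depth_min, depth_interval = (float(v) for v in lines[11].split()[:2])
    return intrinsics, extrinsics, depth_min, depth_interval
```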

### Fusion

In `eval.py`, I implemented a simple version of depth map fusion. Contributions to improve the code are welcome.
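
For orientation, the heart of such a fusion scheme is usually a geometric consistency check: lift each reference pixel to 3D with its estimated depth, project it into a source view, sample the source depth there, reproject back, and keep the pixel only if the round trip lands close to where it started. A self-contained NumPy sketch of that check (conventions, names, and thresholds are illustrative, not the exact logic in `eval.py`):

```python
import numpy as np

def geometric_consistency_mask(depth_ref, K_ref, E_ref,
                               depth_src, K_src, E_src,
                               pix_thresh=1.0, rel_depth_thresh=0.01):
    """Mask of reference pixels whose depth is consistent with one source view.
    K_*: 3x3 intrinsics; E_*: 4x4 world-to-camera extrinsics (assumed
    conventions; border handling is deliberately crude for brevity)."""
    h, w = depth_ref.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x.ravel().astype(np.float64)
    y = y.ravel().astype(np.float64)
    d = depth_ref.ravel().astype(np.float64)

    # lift reference pixels to 3D camera coordinates, then to world coordinates
    pts_ref = np.linalg.inv(K_ref) @ (np.stack([x, y, np.ones_like(x)]) * d)
    pts_world = np.linalg.inv(E_ref) @ np.vstack([pts_ref, np.ones((1, x.size))])

    # project into the source view and sample its depth map (nearest neighbour)
    pts_src = (E_src @ pts_world)[:3]
    uv = K_src @ pts_src
    xs = np.clip(np.rint(uv[0] / uv[2]), 0, w - 1).astype(int)
    ys = np.clip(np.rint(uv[1] / uv[2]), 0, h - 1).astype(int)
    d_src = depth_src[ys, xs].astype(np.float64)

    # lift the sampled source pixels back to world and into the reference view
    pts_src2 = np.linalg.inv(K_src) @ (np.stack([xs.astype(np.float64),
                                                 ys.astype(np.float64),
                                                 np.ones(x.size)]) * d_src)
    pts_world2 = np.linalg.inv(E_src) @ np.vstack([pts_src2, np.ones((1, x.size))])
    pts_ref2 = (E_ref @ pts_world2)[:3]
    uv2 = K_ref @ pts_ref2

    # consistent if the round trip returns near the original pixel and depth
    pix_err = np.hypot(uv2[0] / uv2[2] - x, uv2[1] / uv2[2] - y)
    depth_err = np.abs(pts_ref2[2] - d) / np.maximum(d, 1e-8)
    mask = (pix_err < pix_thresh) & (depth_err < rel_depth_thresh)
    return mask.reshape(h, w)
```

The MVSNet paper combines a check like this with a photometric (probability-based) filter and requires consistency across several source views before a pixel is fused into the point cloud.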

## Results on DTU

|                        | Acc. (mm) | Comp. (mm) | Overall (mm) |
|------------------------|-----------|------------|--------------|
| MVSNet (D=256)         | 0.396     | 0.527      | 0.462        |
| PyTorch-MVSNet (D=192) | 0.4492    | 0.3796     | 0.4144       |

Due to the GPU memory limit, we only train the model with D=192; the fusion code also differs from the original repo.
