
# LVIS dataset

## Introduction

```bibtex
@inproceedings{gupta2019lvis,
  title={{LVIS}: A Dataset for Large Vocabulary Instance Segmentation},
  author={Gupta, Agrim and Dollar, Piotr and Girshick, Ross},
  booktitle={Proceedings of the {IEEE} Conference on Computer Vision and Pattern Recognition},
  year={2019}
}
```

## Common Setting

- Please follow the install guide to install the open-mmlab forked cocoapi first.
- Run the following script to install our forked lvis-api:

  ```shell
  pip install "git+https://github.com/open-mmlab/cocoapi.git#subdirectory=lvis"
  ```

  or

  ```shell
  pip install -r requirements/optional.txt
  ```

- All experiments here use the class-balanced oversampling strategy with an oversample threshold of 1e-3.
- LVIS v0.5 is about half the size of COCO, so a 2x schedule on LVIS corresponds to roughly the same number of iterations as a 1x schedule on COCO.
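The oversample threshold above refers to the repeat-factor sampling scheme introduced in the LVIS paper: images containing rare categories are repeated during training, with a per-category repeat factor of `r(c) = max(1, sqrt(t / f(c)))`, where `f(c)` is the fraction of training images containing category `c` and `t` is the threshold (1e-3 here). A minimal sketch of that formula (the exact dataset-wrapper API in the codebase may differ):

```python
import math

def repeat_factor(category_freq: float, thr: float = 1e-3) -> float:
    """Per-category repeat factor r(c) = max(1, sqrt(thr / f(c))).

    category_freq: fraction of training images containing the category, f(c).
    thr: oversample threshold t (1e-3 in all experiments here).
    Categories with f(c) >= thr are never oversampled (factor stays 1).
    """
    return max(1.0, math.sqrt(thr / category_freq))

# A frequent category (10% of images) keeps its natural sampling rate:
print(repeat_factor(0.1))   # 1.0
# A rare category (0.001% of images) is repeated about 10x:
print(repeat_factor(1e-5))  # 10.0
```

The repeat factor of an image is the maximum over the categories it contains, so an image is oversampled as aggressively as its rarest category requires.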

## Results and models of LVIS v0.5

| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Download |
| :------: | :---: | :-----: | :------: | :------------: | :----: | :-----: | :------: |
| R-50-FPN | pytorch | 2x | - | - | 26.1 | 25.9 | model \| log |
| R-101-FPN | pytorch | 2x | - | - | 27.1 | 27.0 | model \| log |
| X-101-32x4d-FPN | pytorch | 2x | - | - | 26.7 | 26.9 | model \| log |
| X-101-64x4d-FPN | pytorch | 2x | - | - | 26.4 | 26.0 | model \| log |

## Results and models of LVIS v1

| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Download |
| :------: | :---: | :-----: | :------: | :------------: | :----: | :-----: | :------: |
| R-50-FPN | pytorch | 1x | 9.1 | - | 22.5 | 21.7 | model \| log |
| R-101-FPN | pytorch | 1x | 10.8 | - | 24.6 | 23.6 | model \| log |
| X-101-32x4d-FPN | pytorch | 1x | 11.8 | - | 26.7 | 25.5 | model \| log |
| X-101-64x4d-FPN | pytorch | 1x | 14.6 | - | 27.2 | 25.8 | model \| log |