This folder contains the implementation of InternImage for semantic segmentation.
Our segmentation code is developed on top of MMSegmentation v0.27.0.
- Clone this repo:

  ```bash
  git clone https://github.com/OpenGVLab/InternImage.git
  cd InternImage
  ```
- Create a conda virtual environment and activate it:

  ```bash
  conda create -n internimage python=3.7 -y
  conda activate internimage
  ```
- Install `CUDA>=10.2` with `cudnn>=7` following the official installation instructions.
- Install `PyTorch>=1.10.0` and `torchvision>=0.9.0` with `CUDA>=10.2`:
  For example, to install `torch==1.11` with `CUDA==11.3` and `nvcc`:

  ```bash
  conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch -y
  conda install -c conda-forge cudatoolkit-dev=11.3 -y  # to install nvcc
  ```
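  After installing, it is worth confirming that PyTorch actually sees the GPU; the quick check below is our suggestion, not a repo script:

  ```bash
  # verify the installed versions and that the CUDA build is active
  python -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"
  python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
  nvcc --version  # confirms the conda-forge nvcc install
  ```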
- Install other requirements:

  note: the conda build of opencv breaks torchvision's GPU support, so install opencv with pip.

  ```bash
  conda install -c conda-forge termcolor yacs pyyaml scipy pip -y
  pip install opencv-python
  ```
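  A quick import check (again, our suggestion rather than a repo script) verifies that the pip-installed opencv coexists with torchvision:

  ```bash
  # both packages should import cleanly side by side
  python -c "import cv2, torchvision; print(cv2.__version__, torchvision.__version__)"
  ```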
- Install `timm`, `mmcv-full`, and `mmsegmentation`:

  ```bash
  pip install -U openmim
  mim install mmcv-full==1.5.0
  mim install mmsegmentation==0.27.0
  pip install timm==0.6.11 mmdet==2.28.1
  ```
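  To confirm the pinned versions landed, a simple check (not part of the repo) is:

  ```bash
  # versions should match the pins above: 1.5.0 / 0.27.0 / 0.6.11 / 2.28.1
  python -c "import mmcv, mmseg, timm, mmdet; print(mmcv.__version__, mmseg.__version__, timm.__version__, mmdet.__version__)"
  ```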
- Compile CUDA operators:

  ```bash
  cd ./ops_dcnv3
  sh ./make.sh
  # unit test (should see all checking is True)
  python test.py
  ```

- You can also install the operator using the pre-built `.whl` files: DCNv3-1.0-whl
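  Either way, a hedged check that the operator is usable (we assume the build produces an extension module named `DCNv3`; adjust the name if your build differs):

  ```bash
  # the compiled extension should import without error
  python -c "import DCNv3; print('DCNv3 extension OK')"
  ```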
Prepare datasets according to the guidelines in MMSegmentation.
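The demo and evaluation commands below assume MMSegmentation's standard ADE20K layout; as a sketch (not an official listing), the expected tree looks like:

```
data/
└── ade/
    └── ADEChallengeData2016/
        ├── annotations/
        │   ├── training/
        │   └── validation/
        └── images/
            ├── training/
            └── validation/
```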
To evaluate our InternImage on ADE20K val, run:

```bash
sh dist_test.sh <config-file> <checkpoint> <gpu-num> --eval mIoU
```

You can download checkpoint files from here, then place them in `segmentation/checkpoint_dir/seg`.
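For instance, placing a downloaded InternImage-T checkpoint could look like this (the download path is illustrative):

```bash
mkdir -p segmentation/checkpoint_dir/seg
# adjust the source path to wherever your download landed
mv ~/Downloads/upernet_internimage_t_512_160k_ade20k.pth segmentation/checkpoint_dir/seg/
```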
For example, to evaluate the InternImage-T with a single GPU:

```bash
python test.py configs/ade20k/upernet_internimage_t_512_160k_ade20k.py checkpoint_dir/seg/upernet_internimage_t_512_160k_ade20k.pth --eval mIoU
```
For example, to evaluate the InternImage-B with a single node with 8 GPUs:

```bash
sh dist_test.sh configs/ade20k/upernet_internimage_b_512_160k_ade20k.py checkpoint_dir/seg/upernet_internimage_b_512_160k_ade20k.pth 8 --eval mIoU
```
To train an InternImage on ADE20K, run:

```bash
sh dist_train.sh <config-file> <gpu-num>
```

For example, to train InternImage-T with 8 GPUs on 1 node (total batch size 16), run:

```bash
sh dist_train.sh configs/ade20k/upernet_internimage_t_512_160k_ade20k.py 8
```
For example, to train InternImage-XL with 8 GPUs on 1 node (total batch size 16) using slurm, run:

```bash
GPUS=8 sh slurm_train.sh <partition> <job-name> configs/ade20k/upernet_internimage_xl_640_160k_ade20k.py
```
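If you need a different per-GPU batch size, MMSegmentation's training entry point accepts config overrides from the command line; the example below is hypothetical (the override value is illustrative, and we assume the launch script forwards extra arguments to `train.py` as in standard mmseg 0.x):

```bash
# illustrative: halve the GPU count but double the per-GPU batch size,
# keeping the total batch size at 16
sh dist_train.sh configs/ade20k/upernet_internimage_t_512_160k_ade20k.py 4 \
    --cfg-options data.samples_per_gpu=4
```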
To run inference on a single image or multiple images, use `image_demo.py` as below. If you specify a directory instead of a single image, it will process all the images in the directory:

```bash
CUDA_VISIBLE_DEVICES=0 python image_demo.py \
  data/ade/ADEChallengeData2016/images/validation/ADE_val_00000591.jpg \
  configs/ade20k/upernet_internimage_t_512_160k_ade20k.py \
  checkpoint_dir/seg/upernet_internimage_t_512_160k_ade20k.pth \
  --palette ade20k
```
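For example, to sweep the whole validation folder, pass the directory instead of a file (same flags otherwise):

```bash
# directory input: every image under validation/ is processed
CUDA_VISIBLE_DEVICES=0 python image_demo.py \
  data/ade/ADEChallengeData2016/images/validation/ \
  configs/ade20k/upernet_internimage_t_512_160k_ade20k.py \
  checkpoint_dir/seg/upernet_internimage_t_512_160k_ade20k.pth \
  --palette ade20k
```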
To export a segmentation model from PyTorch to TensorRT, run:

```bash
MODEL="model_name"
CKPT_PATH="/path/to/model/ckpt.pth"

python deploy.py \
    "./deploy/configs/mmseg/segmentation_tensorrt_static-512x512.py" \
    "./configs/ade20k/${MODEL}.py" \
    "${CKPT_PATH}" \
    "./deploy/demo.png" \
    --work-dir "./work_dirs/mmseg/${MODEL}" \
    --device cuda \
    --dump-info
```
For example, to export upernet_internimage_t_512_160k_ade20k from PyTorch to TensorRT, run:

```bash
MODEL="upernet_internimage_t_512_160k_ade20k"
CKPT_PATH="/path/to/model/ckpt/upernet_internimage_t_512_160k_ade20k.pth"

python deploy.py \
    "./deploy/configs/mmseg/segmentation_tensorrt_static-512x512.py" \
    "./configs/ade20k/${MODEL}.py" \
    "${CKPT_PATH}" \
    "./deploy/demo.png" \
    --work-dir "./work_dirs/mmseg/${MODEL}" \
    --device cuda \
    --dump-info
```
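Once the export finishes, the work directory should hold the engine plus the metadata written by `--dump-info`; this inspection step is our suggestion, and the file names assume mmdeploy's usual output conventions:

```bash
# list the export artifacts (names follow mmdeploy defaults)
ls "./work_dirs/mmseg/${MODEL}"
# typically includes: end2end.onnx  end2end.engine  deploy.json  pipeline.json  detail.json
```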