English | 简体中文
MedicalSeg is an easy-to-use 3D medical image segmentation toolkit that covers the whole segmentation workflow, including data preprocessing, model training, and model deployment. In particular, we provide GPU-accelerated data preprocessing, high-precision models on the COVID-19 CT scans lung dataset and the MRISpineSeg spine dataset, support for multiple datasets including MSD, Promise12 and Prostate_mri, and a 3D visualization demo based on itkwidgets. The following images visualize the segmentation results on these two datasets:
VNet segmentation results on COVID-19 CT scans (mDice on the eval set: 97.04%) and MRISpineSeg (16-class mDice on the eval set: 89.14%)
MedicalSeg is currently under active development! If you run into any problem using it or want to share suggestions for future development, please open a GitHub issue or join us by scanning the WeChat QR code below.
We validated our framework with VNet on the COVID-19 CT scans and MRISpineSeg datasets. With the lung mask as the label, VNet reached a Dice coefficient of 97.04% on COVID-19 CT scans. You can download the log to see the result, or load the model and validate it yourself :).
| Backbone | Resolution | lr | Training Iters | Dice | Links |
|---|---|---|---|---|---|
| - | 128x128x128 | 0.001 | 15000 | 97.04% | model \| log \| vdl |
| - | 128x128x128 | 0.0003 | 15000 | 92.70% | model \| log \| vdl |
| Backbone | Resolution | lr | Training Iters | Dice (20 classes) | Dice (16 classes) | Links |
|---|---|---|---|---|---|---|
| - | 512x512x12 | 0.1 | 15000 | 74.41% | 88.17% | model \| log \| vdl |
| - | 512x512x12 | 0.5 | 15000 | 74.69% | 89.14% | model \| log \| vdl |
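The Dice scores reported above measure the overlap between predicted and ground-truth masks. As a point of reference, here is a minimal sketch of the Dice coefficient on binary masks, assuming NumPy arrays (the function name and toy volumes are illustrative, not MedicalSeg's actual evaluation code):

```python
import numpy as np

def dice_coefficient(pred, label, eps=1e-6):
    """Dice = 2 * |A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    label = label.astype(bool)
    intersection = np.logical_and(pred, label).sum()
    return (2.0 * intersection + eps) / (pred.sum() + label.sum() + eps)

# Toy 3D volumes: prediction misses one labeled voxel.
pred = np.zeros((4, 4, 4), dtype=np.uint8)
label = np.zeros((4, 4, 4), dtype=np.uint8)
pred[:2] = 1
label[:2] = 1
label[2, 0, 0] = 1
print(round(dice_coefficient(pred, label), 4))  # 0.9846
```

For multi-class results such as the MRISpineSeg table, the per-class Dice is computed on each class's binary mask and then averaged (mDice).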
We added GPU acceleration to data preprocessing using CuPy. Compared with preprocessing on the CPU, GPU acceleration cuts preprocessing time by about 40%. The table below shows the time spent preprocessing COVID-19 CT scans.
| Device | Time (s) |
|---|---|
| CPU | 50.7 |
| GPU | 31.4 (↓ 38%) |
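Because CuPy mirrors the NumPy API, preprocessing code can run on either device with a single import switch. Below is a minimal sketch of that pattern with a hypothetical CT windowing step (the `normalize_hu` function and its window parameters are illustrative, not the exact code in `tools/`):

```python
# Use CuPy (GPU) when available, otherwise fall back to NumPy (CPU).
try:
    import cupy as xp  # drop-in, NumPy-compatible GPU arrays
    on_gpu = True
except ImportError:
    import numpy as xp
    on_gpu = False

def normalize_hu(volume, wl=-600.0, ww=1500.0):
    """Clip a CT volume to a Hounsfield window and scale it to [0, 1]."""
    lo, hi = wl - ww / 2, wl + ww / 2
    vol = xp.clip(volume, lo, hi)
    return (vol - lo) / (hi - lo)

vol = xp.full((8, 8, 8), -600.0)   # toy volume at the window center
out = normalize_hu(vol)
print(float(out.mean()))  # 0.5 — the window center maps to the middle
```

Since the two libraries share the same array API, the same function body serves both code paths; only the `xp` alias changes.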
This part introduces an easy-to-use demo on the COVID-19 CT scans dataset. The demo is also available in our AIStudio project. For detailed steps on training and adding your own dataset, refer to this tutorial.
1. Download our repository:

   ```shell
   git clone https://github.com/PaddlePaddle/PaddleSeg.git
   cd PaddleSeg/contrib/MedicalSeg/
   ```

2. Install the requirements:

   ```shell
   pip install -r requirements.txt
   ```

3. (Optional) Install CuPy if you want to accelerate preprocessing. See the CuPy installation guide.

4. Get and preprocess the data. Replace prepare_lung_coronavirus.py with the preparation script for the dataset you need:

   - Change the GPU setting here to True if you installed CuPy and want to use the GPU to accelerate preprocessing.

   ```shell
   python tools/prepare_lung_coronavirus.py
   ```

5. Run the training and validation example (refer to the tutorial for details):

   ```shell
   sh run-vnet.sh
   ```
This part shows the whole picture of our repository, which is easy to extend with different models and datasets. Our file tree is as follows:
```
├── configs          # All configuration files stay here. If you use our model, you only need to change these and run-vnet.sh.
├── data             # Data stays here.
├── deploy           # Deployment-related docs and scripts.
├── medicalseg
│   ├── core         # Core training, validation and test code.
│   ├── datasets
│   ├── models
│   ├── transforms   # Online data transforms.
│   └── utils        # Utility files of all kinds.
├── export.py
├── run-vnet.sh      # Script to reproduce our project, covering training, validation, inference and deployment.
├── tools            # Data preprocessing: fetch the data, process it and split it into training and validation sets.
├── train.py
├── val.py
└── visualize.ipynb  # Visualize the results with this notebook.
```
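To extend the repository with your own data, a new dataset typically lives under medicalseg/datasets. The sketch below is hypothetical and only illustrates the common pattern of pairing image and label files from a list file; the class name, file layout, and interface are illustrative assumptions, not the actual medicalseg.datasets API:

```python
import os

class MyDataset:
    """Hypothetical dataset sketch: reads "image_path label_path" pairs
    from <root>/<mode>_list.txt, as preprocessing tools commonly emit."""

    def __init__(self, root, mode="train"):
        list_file = os.path.join(root, f"{mode}_list.txt")
        with open(list_file) as f:
            self.pairs = [line.split() for line in f if line.strip()]

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        image_path, label_path = self.pairs[idx]
        # Real code would load the volumes and apply transforms here.
        return image_path, label_path

# Usage with a throwaway list file:
import tempfile
root = tempfile.mkdtemp()
with open(os.path.join(root, "train_list.txt"), "w") as f:
    f.write("a.npy a_label.npy\nb.npy b_label.npy\n")
ds = MyDataset(root)
print(len(ds))  # 2
```

The actual interface expected by the training loop is defined in medicalseg/datasets; see the tutorial linked above for the supported way to register a new dataset.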
We have several ideas about what our repo should focus on next. Your contributions will be very welcome.
- Add PP-nnunet with accelerated preprocessing, automatic configuration for all datasets, and better performance than nnunet.
- Add the top-1 liver segmentation algorithm from the LiTS challenge.
- Add a 3D vertebral measurement system.
- Add pretrained models on various datasets.
- Many thanks to Lin Han, Lang Du and onecatcn for their contributions to our repository.
- Many thanks to itkwidgets for their powerful visualization toolkit that we used to present our visualizations.