The code is tested with PyTorch 1.11.0 and CUDA 11.3. After cloning the repository, follow the steps below for installation:
- Create and activate conda environment
conda create --name ConvNet python=3.8
conda activate ConvNet
- Install PyTorch and torchvision
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
- Install other dependencies
pip install -r requirements.txt
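After installation, a quick sanity check can confirm that the core dependencies are importable. The helper below is a minimal sketch (check_dependencies is a hypothetical name, not part of this repository):

```python
import importlib.util

def check_dependencies(packages=("torch", "torchvision")):
    """Map each package name to whether it can be imported in this environment."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

if __name__ == "__main__":
    for pkg, found in check_dependencies().items():
        print(f"{pkg}: {'found' if found else 'MISSING'}")
```

If both packages are found, `python -c "import torch; print(torch.__version__)"` should report the pinned 1.11.0+cu113 build.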
For evaluation:
- evaluation_scripts/run_evaluation_synapse.sh
- evaluation_scripts/run_evaluation_lung.sh
- evaluation_scripts/run_evaluation_tumor.sh
For inference:
- unetr_pp/inference/predict_simple.py
Network architecture:
- unetr_pp/network_architecture/tumor/unetr_pp_tumor.py
- unetr_pp/network_architecture/lung/unetr_pp_lung.py
- unetr_pp/network_architecture/synapse/unetr_pp_synapse.py
For training:
- unetr_pp/run/run_training.py
Trainers for each dataset:
- unetr_pp/training/network_training/unetr_pp_trainer_tumor.py
- unetr_pp/training/network_training/unetr_pp_trainer_lung.py
- unetr_pp/training/network_training/unetr_pp_trainer_synapse.py
We follow the same dataset preprocessing as in UNETR++. We conducted extensive experiments on four benchmarks: Synapse, BTCV, BRaTs, and Decathlon-Lung.
The dataset folders for Synapse should be organized as follows:
./DATASET_Synapse/
├── unetr_pp_raw/
│   ├── unetr_pp_raw_data/
│   │   ├── Task02_Synapse/
│   │   │   ├── imagesTr/
│   │   │   ├── imagesTs/
│   │   │   ├── labelsTr/
│   │   │   ├── labelsTs/
│   │   │   ├── dataset.json
│   │   ├── Task002_Synapse
│   ├── unetr_pp_cropped_data/
│   │   ├── Task002_Synapse
The dataset folders for Decathlon-Lung should be organized as follows:
./DATASET_Lungs/
├── unetr_pp_raw/
│   ├── unetr_pp_raw_data/
│   │   ├── Task06_Lung/
│   │   │   ├── imagesTr/
│   │   │   ├── imagesTs/
│   │   │   ├── labelsTr/
│   │   │   ├── labelsTs/
│   │   │   ├── dataset.json
│   │   ├── Task006_Lung
│   ├── unetr_pp_cropped_data/
│   │   ├── Task006_Lung
The dataset folders for BRaTs should be organized as follows:
./DATASET_Tumor/
├── unetr_pp_raw/
│   ├── unetr_pp_raw_data/
│   │   ├── Task03_tumor/
│   │   │   ├── imagesTr/
│   │   │   ├── imagesTs/
│   │   │   ├── labelsTr/
│   │   │   ├── labelsTs/
│   │   │   ├── dataset.json
│   │   ├── Task003_tumor
│   ├── unetr_pp_cropped_data/
│   │   ├── Task003_tumor
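The three trees share the same shape, so a small helper can verify that a dataset root is laid out correctly before preprocessing. This is a sketch, not part of the repository; the path lists are transcribed from the layouts above:

```python
import os

# Sub-paths expected under each dataset root, per the layouts above.
EXPECTED_LAYOUTS = {
    "DATASET_Synapse": [
        "unetr_pp_raw/unetr_pp_raw_data/Task02_Synapse/imagesTr",
        "unetr_pp_raw/unetr_pp_raw_data/Task02_Synapse/imagesTs",
        "unetr_pp_raw/unetr_pp_raw_data/Task02_Synapse/labelsTr",
        "unetr_pp_raw/unetr_pp_raw_data/Task02_Synapse/labelsTs",
        "unetr_pp_raw/unetr_pp_raw_data/Task02_Synapse/dataset.json",
        "unetr_pp_raw/unetr_pp_raw_data/Task002_Synapse",
        "unetr_pp_raw/unetr_pp_cropped_data/Task002_Synapse",
    ],
    "DATASET_Lungs": [
        "unetr_pp_raw/unetr_pp_raw_data/Task06_Lung/dataset.json",
        "unetr_pp_raw/unetr_pp_raw_data/Task006_Lung",
        "unetr_pp_raw/unetr_pp_cropped_data/Task006_Lung",
    ],
    "DATASET_Tumor": [
        "unetr_pp_raw/unetr_pp_raw_data/Task03_tumor/dataset.json",
        "unetr_pp_raw/unetr_pp_raw_data/Task003_tumor",
        "unetr_pp_raw/unetr_pp_cropped_data/Task003_tumor",
    ],
}

def missing_paths(root, expected):
    """Return the expected sub-paths that do not exist under root."""
    return [p for p in expected if not os.path.exists(os.path.join(root, p))]
```

For example, `missing_paths("./DATASET_Synapse", EXPECTED_LAYOUTS["DATASET_Synapse"])` returns an empty list when the Synapse layout is complete.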
Please refer to "Setting up the datasets" in the nnFormer repository for more details. Alternatively, you can download the preprocessed datasets for Synapse, Decathlon-Lung, and BRaTs, and extract them under the project directory.
You can also refer to nnFormer for data splitting and preprocessing.
The following scripts can be used for training our 3D ConvNet++ model on the datasets:
bash training_scripts/run_training_synapse.sh
bash training_scripts/run_training_acdc.sh
bash training_scripts/run_training_lung.sh
bash training_scripts/run_training_tumor.sh
To reproduce the results of 3D ConvNet++:
1- Download the Synapse weights and place model_final_checkpoint.model in the following path:
unetr_pp/evaluation/unetr_pp_synapse_checkpoint/unetr_pp/3d_fullres/Task002_Synapse/unetr_pp_trainer_synapse__unetr_pp_Plansv2.1/fold_0/
Then, run:
bash evaluation_scripts/run_evaluation_synapse.sh
2- Download the Decathlon-Lung weights and place model_final_checkpoint.model in the following path:
unetr_pp/evaluation/unetr_pp_lung_checkpoint/unetr_pp/3d_fullres/Task006_Lung/unetr_pp_trainer_lung__unetr_pp_Plansv2.1/fold_0/
Then, run:
bash evaluation_scripts/run_evaluation_lung.sh
3- Download the BRaTs weights and place model_final_checkpoint.model in the following path:
unetr_pp/evaluation/unetr_pp_tumor_checkpoint/unetr_pp/3d_fullres/Task003_tumor/unetr_pp_trainer_tumor__unetr_pp_Plansv2.1/fold_0/
Then, run:
bash evaluation_scripts/run_evaluation_tumor.sh
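Before launching an evaluation script, it can help to confirm the checkpoint landed in the right place. The helper below is a sketch, not part of the repository; the paths are copied from the steps above (the tumor checkpoint folder name is assumed to follow the synapse/lung naming pattern):

```python
import os

# Expected checkpoint locations, per the reproduction steps above.
# The tumor folder name is an assumption based on the synapse/lung pattern.
CHECKPOINT_DIRS = {
    "synapse": "unetr_pp/evaluation/unetr_pp_synapse_checkpoint/unetr_pp/3d_fullres/"
               "Task002_Synapse/unetr_pp_trainer_synapse__unetr_pp_Plansv2.1/fold_0",
    "lung": "unetr_pp/evaluation/unetr_pp_lung_checkpoint/unetr_pp/3d_fullres/"
            "Task006_Lung/unetr_pp_trainer_lung__unetr_pp_Plansv2.1/fold_0",
    "tumor": "unetr_pp/evaluation/unetr_pp_tumor_checkpoint/unetr_pp/3d_fullres/"
             "Task003_tumor/unetr_pp_trainer_tumor__unetr_pp_Plansv2.1/fold_0",
}

def checkpoint_ready(dataset, root="."):
    """True if model_final_checkpoint.model exists in the expected folder."""
    path = os.path.join(root, CHECKPOINT_DIRS[dataset],
                        "model_final_checkpoint.model")
    return os.path.isfile(path)
```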
Visualizations produced by our 3D ConvNet++ model can be found in the directory CCC/result_figure.
"input feature has wrong size": if you encounter this error during your run, please check the code in unetr_pp/run/default_configuration.py. I have set an independent crop size (i.e., patch size) for each dataset, so you may need to modify the crop size based on your own needs.
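As a purely hypothetical illustration of what such a modification involves, the sketch below edits the patch_size entry of an nnU-Net-style plans dict (the plans_per_stage layout is an assumption about the fork, not code taken from this repository):

```python
def set_patch_size(plans, new_size, stage=0):
    """Override the crop/patch size for one resolution stage of an
    nnU-Net-style plans dict (layout assumed, not taken from this repo)."""
    plans["plans_per_stage"][stage]["patch_size"] = list(new_size)
    return plans

# Toy plans dict for illustration only.
plans = {"plans_per_stage": {0: {"patch_size": [64, 128, 128]}}}
plans = set_patch_size(plans, (96, 96, 96))
print(plans["plans_per_stage"][0]["patch_size"])  # [96, 96, 96]
```

Whatever the exact mechanism, the crop size must stay compatible with the network's downsampling strategy, so change it conservatively and re-run preprocessing if required.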