This repository provides a PyTorch implementation of JoB-VS: Joint Brain-Vessel Segmentation in TOF-MRA Images, presented at ISBI 2023. JoB-VS performs joint-task learning for brain and vessel segmentation in Time-of-Flight Magnetic Resonance Angiography (TOF-MRA) images, resulting in an end-to-end vessel segmentation framework. Unlike other vessel segmentation methods, our approach avoids the pre-processing step of applying a separate model to extract the brain from the volumetric input data. Our method builds upon Towards Robust General Medical Image Segmentation (ROG) with a segmentation head that predicts the brain and vessel masks simultaneously.
JoB-VS: Joint Brain-Vessel Segmentation in TOF-MRA Images
Natalia Valderrama1, Ioannis Pitsiorlas2, Luisa Vargas1, Pablo Arbeláez1*, Maria A. Zuluaga2
ISBI 2023.
1 Center for Research and Formation in Artificial Intelligence (CINFONIA), Universidad de Los Andes.
2 Data Science Department, EURECOM, Sophia Antipolis, France
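The key architectural idea is a segmentation head that produces a brain mask and a vessel mask from the same shared features. The sketch below illustrates that idea in PyTorch; it is only a minimal, hypothetical example (layer names, channel sizes, and the single-convolution heads are assumptions), not the actual JoB-VS architecture.

```python
import torch
import torch.nn as nn

class JointSegHead(nn.Module):
    """Illustrative joint head: one shared 3D feature map, two parallel
    1x1x1 convolutions producing brain and vessel probability maps.
    Conceptual sketch only, not the JoB-VS implementation."""

    def __init__(self, in_channels: int = 32):
        super().__init__()
        self.brain_head = nn.Conv3d(in_channels, 1, kernel_size=1)   # brain mask logits
        self.vessel_head = nn.Conv3d(in_channels, 1, kernel_size=1)  # vessel mask logits

    def forward(self, features: torch.Tensor):
        brain = torch.sigmoid(self.brain_head(features))
        vessels = torch.sigmoid(self.vessel_head(features))
        return brain, vessels

# Fake backbone features for a 16^3 patch (batch=1, 32 channels)
feats = torch.randn(1, 32, 16, 16, 16)
brain_mask, vessel_mask = JointSegHead()(feats)
print(brain_mask.shape, vessel_mask.shape)  # torch.Size([1, 1, 16, 16, 16]) each
```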
$ git clone https://github.com/BCV-Uniandes/JoB-VS.git
$ cd JoB-VS
$ python setup.py install
- Download your data and create a JSON file in the OASIS3 format. Here you can find an example of how the data must be organized. Specify the root to your data in the JSON file (a hypothetical sketch of such a file is shown after this list).
- Set the `data_root`, `out_directory`, and `num_workers` variables in the file `data_preprocessing.py` and run the command:
python data_preprocessing.py
Your data will be organized in the following way:
Fold_X
|_ imagesTr
|_ |_ *.nii.gz
|_ imagesTs
|_ |_ *.nii.gz
|_ labelsTr
|_ |_ *.nii.gz
|_ dataset.json
|_ dataset_stats.json
Our benchmark is set up for 2 folds.
- (optional) If your data does not have any labels, as in the IXI dataset, please use this file: `data_preprocessing.py`.
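If it helps to see what such a description file can look like, the snippet below writes a minimal, MSD-style `dataset.json`; the field names, label encoding, and file names are illustrative assumptions, so follow the linked OASIS3 example for the exact format expected by JoB-VS.

```python
import json

# Hypothetical, minimal dataset description in the common Medical Segmentation
# Decathlon style; keys and label values may differ from what JoB-VS expects.
dataset = {
    "name": "OASIS3",
    "modality": {"0": "TOF-MRA"},
    "labels": {"0": "background", "1": "brain", "2": "vessel"},  # assumed encoding
    "numTraining": 1,
    "training": [
        {"image": "./imagesTr/case_0001.nii.gz", "label": "./labelsTr/case_0001.nii.gz"}
    ],
    "test": ["./imagesTs/case_0002.nii.gz"],
}

with open("dataset.json", "w") as f:
    json.dump(dataset, f, indent=2)
```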
We train JoB-VS on the original images, without using brain masks, and then fine-tune the models with Free Adversarial Training (Free AT), as done in ROG:
# For training on original images
python main.py --gpu GPU_IDs --batch BATCH_SIZE --fold FOLD --data_ver OUT_DIRECTORY --name OUTPUT_DIR
# For the Free AT fine-tuning
python main.py --gpu GPU_IDs --batch BATCH_SIZE --fold FOLD --data_ver OUT_DIRECTORY --name OUTPUT_DIR_FREE_AT --ft --pretrained OUTPUT_DIR --AT
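For readers unfamiliar with Free AT, the sketch below shows the core idea of "free" adversarial training (Shafahi et al., 2019): each minibatch is replayed several times, and the same backward pass is reused to update both the model weights and an adversarial perturbation of the input. The toy model, shapes, and hyperparameters are assumptions for illustration only; this is not the JoB-VS training loop.

```python
import torch
import torch.nn as nn

def free_at_minibatch(model, x, y, delta, optimizer, epsilon=0.03, replays=4):
    """One Free AT minibatch: replay the batch `replays` times, taking a
    descent step on the weights and an ascent step on the perturbation
    from the same backward pass. Simplified, illustrative sketch."""
    criterion = nn.CrossEntropyLoss()
    for _ in range(replays):
        delta.requires_grad_(True)
        loss = criterion(model(x + delta), y)
        optimizer.zero_grad()
        loss.backward()            # gradients for both the weights and delta
        optimizer.step()           # update the model
        with torch.no_grad():      # update and project the perturbation
            delta = (delta + epsilon * delta.grad.sign()).clamp(-epsilon, epsilon)
    return delta

# Toy usage with fake data (shapes are illustrative)
model = nn.Sequential(nn.Flatten(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
delta = free_at_minibatch(model, x, y, torch.zeros_like(x), optimizer)
print(delta.abs().max())
```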
To evaluate the models, modify the `EXPS_PATH`, `PATH_ANNS`, and `PATH_PREDS` variables in the file `run_evaluations.py` and run:
python run_evaluations.py
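As a quick, standalone sanity check independent of `run_evaluations.py`, you can compute a Dice score between one predicted vessel mask and its annotation; the file paths and the 0.5 binarization threshold below are placeholders.

```python
import nibabel as nib
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

# Placeholder paths: point them at one of your predictions and its annotation.
pred = nib.load("predictions/case_0001.nii.gz").get_fdata() > 0.5
gt = nib.load("labelsTr/case_0001.nii.gz").get_fdata() > 0.5
print(f"Dice: {dice(pred, gt):.4f}")
```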
If you want to run inference with our models, please download our weights from this link and run:
# Inference with the model trained on the original images
python main.py --gpu GPU_IDs --batch BATCH_SIZE --data_ver YOUR_DATA --name OUTPUT_DIR --load_weights WEIGHTS_PATH --test
If you are using the IXI dataset, please add the `--ixi` flag.
Please find all the information for the MONAILabel app in the `monai` branch.