This repo includes:
- PyTorch official implementation of *Prototype Knowledge Distillation for Medical Segmentation with Missing Modality* by Shuai Wang, Zipei Yan, Daoan Zhang, Haining Wei, Zhongsen Li, and Rui Li, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023). arXiv, IEEEXplore
- A paper collection on missing modality in medical image segmentation.
python==3.8.13
torch==1.12.0
numpy==1.22.3
medpy==0.4.0
simpleitk==2.2.1
tensorboard==2.12.0
tqdm==4.65.0
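Assuming a Python 3.8 environment, the dependencies above can be installed with pip (versions as listed; adjust the torch build to match your CUDA version):
pip install torch==1.12.0 numpy==1.22.3 medpy==0.4.0 SimpleITK==2.2.1 tensorboard==2.12.0 tqdm==4.65.0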
Please download the BraTS2018 training set from here.
For preprocessing, please change the data path (Line 35) in preprocess.py, and then run:
cd code/
python preprocess.py
The preprocessed data will be organized as:
--code
--data
  --brats2018  ## pre-processed data, npy format
    --Brats18_2013_0_1.npy
  --train_list.txt
  --val_list.txt
  --test_list.txt
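As a quick sanity check of the preprocessing output, you can load one of the .npy files with NumPy. This is only an illustrative sketch: the file path and the exact array layout (stacked modalities plus label map) are assumptions, not guaranteed by preprocess.py.

```python
# Hypothetical sanity check of one preprocessed case (not part of the repo).
import numpy as np

# Path assumed from the layout above; pass allow_pickle=True if the file stores a Python object.
case = np.load("../data/brats2018/Brats18_2013_0_1.npy")
print(case.shape, case.dtype)  # expected: one array holding the four MRI modalities and the label map
```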
First, train a teacher model
python pretrain.py --log_dir ../log/teachermodel
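Assuming the training scripts write TensorBoard summaries under --log_dir (TensorBoard is listed in the requirements), training can be monitored with:
tensorboard --logdir ../log/teachermodel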
For the baseline (unimodal in the paper), run:
python train_baseline.py --log_dir ../log/unimodal_modality0 --modality 0
where modality=0,1,2,3 denotes using T1/T2/T1ce/FLAIR images for training.
For ProtoKD (our method), run:
python train_protokd.py --modality 0 --log_dir ../log/protokd_modality0
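For intuition, below is a minimal, illustrative sketch of the general prototype-distillation idea: class prototypes are obtained by masked average pooling of feature maps, and the student's prototypes are pulled toward the teacher's. The function names, tensor shapes, and MSE loss are assumptions for illustration only; they are not the repo's implementation or the exact loss used in the paper.

```python
# Illustrative sketch only -- not the repo's code or the paper's exact loss.
import torch
import torch.nn.functional as F

def class_prototypes(features, labels, num_classes):
    """features: [B, C, H, W]; labels: [B, H, W] integer class map at the same resolution."""
    b, c, h, w = features.shape
    feats = features.permute(0, 2, 3, 1).reshape(-1, c)   # [B*H*W, C]
    labs = labels.reshape(-1)                              # [B*H*W]
    protos = []
    for k in range(num_classes):
        mask = (labs == k).float().unsqueeze(1)            # [B*H*W, 1]
        denom = mask.sum().clamp(min=1.0)                  # avoid division by zero for absent classes
        protos.append((feats * mask).sum(dim=0) / denom)   # mean feature of class k
    return torch.stack(protos)                             # [num_classes, C]

def prototype_distillation_loss(student_feats, teacher_feats, labels, num_classes=4):
    p_student = class_prototypes(student_feats, labels, num_classes)
    p_teacher = class_prototypes(teacher_feats, labels, num_classes).detach()  # teacher is frozen
    return F.mse_loss(p_student, p_teacher)
```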
To evaluate a trained model, run:
python evaluate.py --model_path ../log/protokd_modality0/model/best_model.pth \
    --test_modality 0 \
    --output_path protokd_modality0_outputs
If you want to save the visualization results (in nii.gz format, which can be opened with ITK-SNAP or 3D Slicer), please add --save_vis:
python evaluate.py --model_path ../log/protokd_modality0/model/best_model.pth \
--test_modality 0 \
--output_path protokd_modality0_outputs \
--save_vis
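Besides ITK-SNAP and 3D Slicer, the saved volumes can be inspected programmatically with SimpleITK (already in the requirements). The file name below is a hypothetical example; use whatever evaluate.py actually writes into your --output_path directory.

```python
# Inspect a saved prediction volume; the file name is hypothetical.
import SimpleITK as sitk

pred = sitk.ReadImage("protokd_modality0_outputs/Brats18_2013_0_1_pred.nii.gz")
arr = sitk.GetArrayFromImage(pred)       # numpy array, typically [D, H, W]
print(arr.shape, arr.min(), arr.max())   # label volume; max is the largest class id present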
We provide pretrained models for the teacher, the baseline (T1), and ProtoKD (T1) on Google Drive.
If this code or ProtoKD is useful for your research, please consider citing our paper:
@inproceedings{Wang2023,
year = {2023},
author = {Shuai Wang and Zipei Yan and Daoan Zhang and Haining Wei and Zhongsen Li and Rui Li},
title = {Prototype Knowledge Distillation for Medical Segmentation with Missing Modality},
booktitle = {ICASSP}
}
If you have any questions, please contact [email protected].
We have compiled a collection of papers addressing the issue of missing modality in medical image analysis. This resource aims to assist researchers and practitioners interested in this topic. Papers with publicly available code have been highlighted. If you're aware of any significant papers that we may have overlooked, please don't hesitate to inform us.
- Learning with Privileged Multimodal Knowledge for Unimodal Segmentation, IEEE TMI 2021, code
- D2-Net: Dual Disentanglement Network for Brain Tumor Segmentation with Missing Modalities, IEEE TMI 2022, code
- mmFormer: Multimodal Medical Transformer for Incomplete Multimodal Learning of Brain Tumor Segmentation, MICCAI 2022, arXiv, code
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion, MICCAI 2019, arXiv, code
- ACN: Adversarial Co-training Network for Brain Tumor Segmentation with Missing Modalities, MICCAI 2021, arXiv, code
- Hetero-Modal Variational Encoder-Decoder for Joint Modality Completion and Segmentation, MICCAI 2019, arXiv, code
- Knowledge distillation from multi-modal to mono-modal segmentation networks, MICCAI 2020, arXiv
- HeMIS: Hetero-Modal Image Segmentation, MICCAI 2016, arXiv
- Brain Tumor Segmentation on MRI with Missing Modalities, IPMI 2019, arXiv
- Latent Correlation Representation Learning for Brain Tumor Segmentation With Missing MRI Modalities, IEEE TIP 2021
- SMU-Net: Style matching U-Net for brain tumor segmentation with missing modalities, arXiv
- Learning Cross-Modality Representations From Multi-Modal Images, IEEE TMI 2018
- Multi-Domain Image Completion for Random Missing Input Data, IEEE TMI 2020
- Medical Image Segmentation on MRI Images with Missing Modalities: A Review, arXiv 2022
- Disentangle First, Then Distill: A Unified Framework for Missing Modality Imputation and Alzheimer’s Disease Diagnosis, IEEE TMI 2023
- Discrepancy and Gradient-Guided Multi-modal Knowledge Distillation for Pathological Glioma Grading, MICCAI 2022, code
- Gradient modulated contrastive distillation of low-rank multi-modal knowledge for disease diagnosis, Medical Image Analysis 2023
- M3AE: Multimodal Representation Learning for Brain Tumor Segmentation with Missing Modalities, arXiv 2023, code
- Fundus-Enhanced Disease-Aware Distillation Model for Retinal Disease Classification from OCT Images, MICCAI 2023, arXiv, code
- Multi-modal Learning with Missing Modality via Shared-Specific Feature Modelling, CVPR 2023, code
- M$^2$FTrans: Modality-Masked Fusion Transformer for Incomplete Multi-Modality Brain Tumor Segmentation, IEEE JBHI 2023, code
- Bootstrapping Chest CT Image Understanding by Distilling Knowledge from X-ray Expert Models, CVPR 2024
- PASSION: Towards Effective Incomplete Multi-Modal Medical Image Segmentation with Imbalanced Missing Rates, ACM MM 2024
- A Vision Transformer-Based Framework for Knowledge Transfer From Multi-Modal to Mono-Modal Lymphoma Subtyping Models, IEEE JBHI 2024
- MedMAP: Promoting Incomplete Multi-modal Brain Tumor Segmentation with Alignment, arXiv 2024