This repository offers a pre-trained vision transformer autoencoder (ViTAutoEnc), originally trained on 57,000 enhanced brain MRI scans. The model is intended for multi-contrast brain MRI. The figure below illustrates the training process:
Two masking schemes are used: (A) mask block size 16×16×16 with 86 blocks; (B) mask block size 4×4×4 with 6,000 blocks.
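The two masking schemes can be sketched as follows. This is a minimal NumPy illustration, assuming a 96×96×96 input volume and uniform random block placement; the repository's actual masking implementation may differ:

```python
import numpy as np

def random_block_mask(vol_shape, block_size, n_blocks, seed=None):
    """Build a boolean mask hiding `n_blocks` cubic blocks of side
    `block_size` inside a volume of shape `vol_shape` (True = masked).
    Blocks are placed uniformly at random and may overlap."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(vol_shape, dtype=bool)
    for _ in range(n_blocks):
        # Pick a random corner so the whole block fits in the volume.
        corner = [rng.integers(0, s - block_size + 1) for s in vol_shape]
        mask[tuple(slice(c, c + block_size) for c in corner)] = True
    return mask

# Scheme (A): 86 blocks of side 16; scheme (B): 6,000 blocks of side 4.
# The 96x96x96 volume shape is an assumption for illustration.
mask_a = random_block_mask((96, 96, 96), 16, 86, seed=0)
mask_b = random_block_mask((96, 96, 96), 4, 6000, seed=0)
```

During pre-training, the voxels under the mask would be zeroed (or replaced) in the input, and the autoencoder is trained to reconstruct them.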
Dependencies can be installed via the following command:

```
pip install -r requirements.txt
```
We provide the pre-trained weights as SSL_ViT_Block16 and SSL_ViT-Block4. Download them and place them in the Pretrained_models directory.
Kindly follow the five steps below:
- Convert DICOM to NIfTI. We suggest using dcm2niix.
- Strip the skull. Select the T1-weighted (T1w) or contrast-enhanced T1w (T1c) volume with the higher resolution for skull stripping. HD-BET is a solid choice.
- Remove redundant blank areas using ExtractRegionFromImageByMask from ANTs.
- Co-register the other contrasts (e.g., T2w and FLAIR) to the skull-stripped T1w or T1c, then multiply them by the brain mask generated in the previous step.
- Merge the co-registered volumes and the skull-stripped volumes into a 4D volume, following the order T1w, T1c, T2w, and FLAIR.
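The last two steps (applying the brain mask and merging into a 4D volume) can be sketched in NumPy. The 96×96×96 grid, the placeholder arrays, and the box-shaped brain mask are illustrative assumptions; real volumes would be loaded from NIfTI files, e.g. with nibabel:

```python
import numpy as np

# Placeholder single-contrast volumes on an assumed common grid after
# co-registration; in practice each would be loaded from a NIfTI file.
shape = (96, 96, 96)
rng = np.random.default_rng(0)
t1w = rng.random(shape, dtype=np.float32)    # skull-stripped T1w
t1c = rng.random(shape, dtype=np.float32)    # skull-stripped T1c
t2w = rng.random(shape, dtype=np.float32)    # co-registered T2w
flair = rng.random(shape, dtype=np.float32)  # co-registered FLAIR

# Illustrative box-shaped brain mask (a real mask comes from HD-BET).
brain_mask = np.zeros(shape, dtype=np.float32)
brain_mask[16:80, 16:80, 16:80] = 1.0

# Step 4: multiply the co-registered contrasts by the brain mask.
t2w_masked = t2w * brain_mask
flair_masked = flair * brain_mask

# Step 5: merge into one 4D volume in the order T1w, T1c, T2w, FLAIR.
merged = np.stack([t1w, t1c, t2w_masked, flair_masked], axis=-1)
```

The resulting array has shape `(96, 96, 96, 4)`, with the contrast order fixed as T1w, T1c, T2w, FLAIR.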
Kindly follow the steps below:
- Modify `train_files` and `val_files` in finetune_train.py so they point to the pre-processed training and validation images.
- Modify `test_files` in finetune_test.py to point to the pre-processed test images, and evaluate the model using finetune_test.py.
- (Optional) Adjust Model.py to test various classifier configurations.
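One way the `train_files` and `val_files` lists could be built is sketched below. The directory layout, the `.nii.gz` suffix, the `"image"` dictionary key, and the `split_cases` helper are all assumptions for illustration, not the repository's actual API:

```python
from pathlib import Path

def split_cases(data_dir, val_fraction=0.2):
    """Collect pre-processed 4D NIfTI files from `data_dir` and split
    them into train/validation lists of {"image": path} dictionaries.
    The flat layout and .nii.gz suffix are assumed for this sketch."""
    cases = sorted(Path(data_dir).glob("*.nii.gz"))
    n_val = max(1, int(len(cases) * val_fraction))
    val_files = [{"image": str(p)} for p in cases[:n_val]]
    train_files = [{"image": str(p)} for p in cases[n_val:]]
    return train_files, val_files

# Usage: train_files, val_files = split_cases("/path/to/preprocessed")
```

Sorting before splitting keeps the split deterministic across runs; a random shuffle with a fixed seed would work as well.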
We highly encourage the use of Distributed Data Parallel when performing pre-training with your own data using the NVIDIA Container on a server equipped with multiple GPUs. Kindly follow the steps below:
- Pull the MONAI Docker image:

```
docker pull projectmonai/monai
```

- Update the PATHs in `run_docker.sh` appropriately, and execute `run_docker.sh`.
- While inside the Docker container, execute `run_pretraining.sh`. Your model is then ready for use!