PyTorch implementation of Self-distillation Augmented Masked Autoencoders for Histopathological Image Understanding
| arch | params | download | | | |
|---|---|---|---|---|---|
| ViT-S/16 | 21M | NCT-CRC model | NCT-CRC logs | PCam model | PCam logs |
Please install PyTorch and download the NCT-CRC-HE and PatchCamelyon datasets. This codebase is built on top of MAE; more information about the requirements can be found there.
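As a quick sanity check after installing the requirements, you can verify that PyTorch sees your GPUs before launching distributed training (a minimal snippet, independent of this repo):

```python
import torch

# Verify the PyTorch install and GPU visibility before launching training.
print(f"PyTorch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
print(f"Visible GPUs: {torch.cuda.device_count()}")
```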
To pre-train SD-MAE with a ViT-Small model on a single node with 4 GPUs for 100 epochs, run the following command. We provide the training logs for this run to help with reproducibility.
```
python -m torch.distributed.launch --nproc_per_node=4 SD-MAE/run_mae_pretraining.py \
    --data_path yourpath/pCam \
    --batch_size 256 \
    --model ltrp_base_and_vs \
    --mask_ratio 0.6 \
    --epochs 100 \
    --dino_head_dim 4096 \
    --dino_bottleneck_dim 256 \
    --dino_hidden_dim 2048 \
    --warmup_epochs 5 \
    --lr 0.0006
```
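The `--dino_*` flags configure the self-distillation projection head. A minimal sketch of a DINO-style head with these dimensions might look like the following; this illustrates the general design implied by the flags, not the exact module in this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DINOHead(nn.Module):
    """DINO-style projection head: MLP -> L2-normalized bottleneck -> weight-normalized output layer."""
    def __init__(self, in_dim=384, hidden_dim=2048, bottleneck_dim=256, out_dim=4096):
        super().__init__()
        # 3-layer MLP projecting encoder features (384-d for ViT-S) down to the bottleneck.
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, bottleneck_dim),
        )
        # Weight-normalized final layer, as in DINO.
        self.last_layer = nn.utils.weight_norm(nn.Linear(bottleneck_dim, out_dim, bias=False))

    def forward(self, x):
        x = self.mlp(x)
        x = F.normalize(x, p=2, dim=-1)  # L2-normalize the bottleneck features
        return self.last_layer(x)

# Matches --dino_hidden_dim 2048, --dino_bottleneck_dim 256, --dino_head_dim 4096.
head = DINOHead()
logits = head(torch.randn(8, 384))  # e.g. a batch of 8 ViT-S/16 token features
print(logits.shape)  # torch.Size([8, 4096])
```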
To fine-tune the pre-trained model on PCam for classification, run:

```
python -m torch.distributed.launch --nproc_per_node=4 SD-MAE/run_class_finetuning.py \
    --model vit_small_patch16_224 \
    --finetune yourpath/checkpoint-100.pth \
    --data_path yourpath/pCam \
    --batch_size 256 \
    --opt adamw \
    --opt_betas 0.9 0.999 \
    --weight_decay 0.05 \
    --epochs 100 \
    --nb_classes 2 \
    --data_set 'pCam' \
    --lr 0.001
```
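After fine-tuning, a checkpoint can be loaded for inference roughly as follows. This is a sketch only: it assumes the fine-tuned weights are stored under a `"model"` key, as is typical for MAE-style code, and the checkpoint path is a placeholder; `vit_small_patch16_224` is the timm model name used above.

```python
import timm
import torch

# Build the same ViT-S/16 architecture used for fine-tuning (2 classes for PCam).
model = timm.create_model("vit_small_patch16_224", num_classes=2)

# Hypothetical path to a fine-tuned checkpoint; MAE-style code usually nests
# the weights under a "model" key (an assumption here).
checkpoint = torch.load("yourpath/finetuned-checkpoint.pth", map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)
msg = model.load_state_dict(state_dict, strict=False)
print(msg)  # inspect any missing/unexpected keys

model.eval()
with torch.no_grad():
    probs = model(torch.randn(1, 3, 224, 224)).softmax(dim=-1)
print(probs)  # predicted class probabilities for the two PCam classes
```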