Commit
train MAE-ViT-Nano on CIFAR
yjh0410 committed Aug 22, 2023
1 parent 8c9bfa5 commit 50bfbee
Showing 2 changed files with 9 additions and 9 deletions.
README.md: 12 changes (6 additions & 6 deletions)
@@ -6,39 +6,39 @@ PyTorch implementation of Masked AutoEncoder
 - Train `MAE-ViT-Nano` on CIFAR10 dataset:

 ```Shell
-python mae_pretrain.py --dataset cifar10 -m mae_vit_nano --batch_size 256 --img_size 32 --patch_size 2
+python mae_pretrain.py --dataset cifar10 -m mae_vit_nano --batch_size 256 --img_size 32 --patch_size 2 --max_epoch 400 --wp_epoch 40
 ```

 - Train `MAE-ViT-Nano` on ImageNet dataset:

 ```Shell
-python mae_pretrain.py --dataset imagenet -m mae_vit_nano --batch_size 256 --img_size 224 --patch_size 16
+python mae_pretrain.py --dataset imagenet -m mae_vit_nano --batch_size 256 --img_size 224 --patch_size 16 --max_epoch 400 --wp_epoch 40
 ```

 ## 2. Train from scratch
 - Train `ViT-Nano` on CIFAR10 dataset:

 ```Shell
-python mae_finetune.py --dataset cifar10 -m vit_nano --batch_size 256 --img_size 32 --patch_size 2
+python mae_finetune.py --dataset cifar10 -m vit_nano --batch_size 256 --img_size 32 --patch_size 2 --max_epoch 200 --wp_epoch 20
 ```

 - Train `ViT-Nano` on ImageNet dataset:

 ```Shell
-python mae_finetune.py --dataset imagenet -m vit_nano --batch_size 256 --img_size 224 --patch_size 16
+python mae_finetune.py --dataset imagenet -m vit_nano --batch_size 256 --img_size 224 --patch_size 16 --max_epoch 200 --wp_epoch 20
 ```

 ## 3. Train from MAE pretrained
 - Train `ViT-Nano` on CIFAR10 dataset:

 ```Shell
-python mae_finetune.py --dataset cifar10 -m vit_nano --batch_size 256 --img_size 32 --patch_size 2 --mae_pretrained
+python mae_finetune.py --dataset cifar10 -m vit_nano --batch_size 256 --img_size 32 --patch_size 2 --mae_pretrained --max_epoch 50 --wp_epoch 5
 ```

 - Train `ViT-Nano` on ImageNet dataset:

 ```Shell
-python mae_finetune.py --dataset imagenet -m vit_nano --batch_size 256 --img_size 224 --patch_size 16 --mae_pretrained
+python mae_finetune.py --dataset imagenet -m vit_nano --batch_size 256 --img_size 224 --patch_size 16 --mae_pretrained --max_epoch 50 --wp_epoch 5
 ```

 ## 4. Experiments
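Note that the `--max_epoch` / `--wp_epoch` pairs introduced above keep warmup at 10% of the schedule (400/40 for MAE pretraining, 200/20 from scratch, 50/5 from MAE pretrained). Below is a minimal sketch of the linear-warmup-plus-cosine-decay schedule these flags suggest, assuming `--wp_epoch` means warmup epochs; the repository's actual scheduler and base learning rate may differ:

```python
import math

# Hedged sketch of a warmup + cosine schedule; base_lr is a made-up value,
# not a flag from the commands above.
def lr_at_epoch(epoch, base_lr=1e-3, wp_epoch=40, max_epoch=400):
    if epoch < wp_epoch:
        # linear ramp up to base_lr over the first wp_epoch epochs
        return base_lr * (epoch + 1) / wp_epoch
    # cosine decay from base_lr toward 0 over the remaining epochs
    progress = (epoch - wp_epoch) / (max_epoch - wp_epoch)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

# The CIFAR10 pretraining setting above: 40 warmup epochs out of 400.
for epoch in (0, 39, 40, 220, 399):
    print(epoch, f"{lr_at_epoch(epoch):.6f}")
```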
mae_pretrain.py: 6 changes (3 additions & 3 deletions)
@@ -201,7 +201,7 @@ def main():
             loss = mae_loss(images, output['x_pred'], output['mask'], args.patch_size, args.norm_pix_loss)
             # update num_fgs & losses
             total_num_fgs += output['mask'].sum().item()
-            total_losses += loss * output['mask'].sum().item()
+            total_losses += loss.item() * output['mask'].sum().item()

             # Backward
             loss /= args.grad_accumulate
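This one-line change is the classic PyTorch accumulation fix: adding the raw `loss` tensor into a running total keeps every iteration's autograd graph referenced by the accumulator, so memory grows across the epoch, while `loss.item()` extracts a plain Python float and lets each graph be freed after backward. A minimal self-contained sketch of the two patterns (the names here are illustrative, not the repository's):

```python
import torch

w = torch.randn(3, requires_grad=True)

# Buggy pattern: the running total is itself a tensor that references
# every iteration's computation graph, so none of them can be freed.
total_as_tensor = 0
for _ in range(3):
    loss = (w ** 2).sum()
    total_as_tensor = total_as_tensor + loss

# Fixed pattern: .item() yields a plain float; each iteration's graph
# is dropped as soon as the iteration ends.
total_as_float = 0.0
for _ in range(3):
    loss = (w ** 2).sum()
    total_as_float += loss.item()

print(type(total_as_tensor))  # <class 'torch.Tensor'>, graph attached
print(type(total_as_float))   # <class 'float'>
```

Weighting by `output['mask'].sum()` also means the epoch average can presumably be recovered as `total_losses / total_num_fgs`, i.e. a per-masked-patch mean.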
@@ -255,8 +255,8 @@ def main():
                         'optimizer': optimizer.state_dict(),
                         'epoch': epoch},
                         checkpoint_path)
-            total_num_fgs = 0.
-            total_losses = 0.
+        total_num_fgs = 0.
+        total_losses = 0.

         lr_scheduler.step()
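The second hunk is whitespace-only: judging from the surrounding context (an assumption, since the rendered diff drops indentation), the two resets move out of the checkpoint-saving branch so the running totals start fresh every epoch rather than only on epochs where a checkpoint is written. A runnable toy sketch of that control flow:

```python
# Toy illustration only; eval_epoch, the loop bounds, and the accumulated
# values are stand-ins, not the repository's actual names or numbers.
max_epoch, eval_epoch = 6, 3

total_losses = 0.0
for epoch in range(max_epoch):
    for _ in range(4):          # stand-in for the data-loader loop
        total_losses += 1.0     # stand-in for loss.item() * num_masked
    if epoch % eval_epoch == 0 or epoch + 1 == max_epoch:
        pass                    # checkpoint saved here; before the fix the
                                # resets lived only inside this branch
    print(f"epoch {epoch}: accumulated = {total_losses}")
    total_losses = 0.0          # after the fix: reset unconditionally
```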
