Implementations of DNN models and advanced learning-rate policies with PyTorch.

Requirements:
- torch >= 0.4.0
- torchvision >= 0.2.1
```python
from torch import optim
from torch.optim import lr_scheduler

# LambdaLR multiplies the initial lr by the lambda's return value,
# so the initial lr should be 1 to let cyclical_lr set it directly
optimizer = optim.Adam(model.parameters(), lr=1.)
clr = cyclical_lr(step_size, min_lr=0.001, max_lr=1, scale_func=clr_func, scale_md='iterations')
scheduler = lr_scheduler.LambdaLR(optimizer, [clr])
```
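`cyclical_lr` is not defined in this snippet. Below is a minimal sketch of one possible implementation, compatible with the call above and following the triangular policy from Smith's "Cyclical Learning Rates for Training Neural Networks"; the repo's actual helper may differ, and the `scale_func`/`scale_md` semantics shown here are assumptions:

```python
import math

def cyclical_lr(step_size, min_lr=0.001, max_lr=1.0,
                scale_func=lambda x: 1.0, scale_md='cycles'):
    # Triangular cyclical schedule: the lr ramps linearly from min_lr
    # up to max_lr and back down over 2 * step_size iterations.
    # scale_func can shrink the triangle amplitude, keyed either by
    # cycle count or by iteration count (assumed meaning of scale_md).
    def relative(it):
        cycle = math.floor(1 + it / (2 * step_size))
        x = abs(it / step_size - 2 * cycle + 1)
        scale = scale_func(cycle) if scale_md == 'cycles' else scale_func(it)
        return min_lr + (max_lr - min_lr) * max(0.0, 1 - x) * scale
    return relative
```

Passed to `LambdaLR` with an initial lr of 1, the returned multiplier becomes the effective learning rate at each `scheduler.step()`.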
Warm-restart schedulers are tracked in these PyTorch pull requests:
- Warm restart policy is available now by AutuanLiu · Pull Request #6130 · pytorch/pytorch
- Cosine Annealing with warm restarts by roveo · Pull Request #7821 · pytorch/pytorch
- Cosine annealing with restarts by striajan · Pull Request #11104 · pytorch/pytorch
```python
# note: stock CosineAnnealingLR accepts no T_mult argument; the restart
# period multiplier comes from the warm-restart variants proposed in the
# pull requests above
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100, eta_min=1e-8, T_mult=2)
# keep T_max smaller than the total training epochs if you want the restart policy
```
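The restart policy in these PRs follows SGDR (Loshchilov & Hutter, 2017): a cosine decay from the peak lr down to `eta_min`, restarted after `T_0` epochs with each period multiplied by `T_mult`. A framework-free sketch of the per-epoch value, with `T_0=100`, `T_mult=2`, and `eta_min=1e-8` mirroring the call above (the peak `eta_max=0.1` is an arbitrary assumption):

```python
import math

def sgdr_lr(epoch, eta_max=0.1, eta_min=1e-8, T_0=100, T_mult=2):
    # cosine decay from eta_max to eta_min over the current period;
    # after each restart the period length is multiplied by T_mult
    t_cur, t_i = epoch, T_0
    while t_cur >= t_i:  # locate the restart period containing this epoch
        t_cur -= t_i
        t_i *= T_mult
    return eta_min + (eta_max - eta_min) * (1 + math.cos(math.pi * t_cur / t_i)) / 2
```

Later PyTorch releases ship this schedule as `torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=100, T_mult=2, eta_min=1e-8)`.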
```python
from torch import nn
from torch.optim import lr_scheduler

from models.BaseNet_class import BaseNet

# configuration for the BaseNet wrapper (net and opt are the network
# and optimizer instances created beforehand)
configs = {
    'model': net,
    'opt': opt,
    'criterion': nn.CrossEntropyLoss(),
    'dataloaders': ...,
    'data_sz': ...,
    'lrs_decay': lr_scheduler.StepLR(opt, step_size=50),
    'prt_freq': 5,     # print progress every 5 epochs
    'epochs': 500,
}

sub_model = BaseNet(configs)
# train and test
sub_model.train_m()
sub_model.test_m()
```
Implemented models:
- ResNet
- AlexNet
- GoogLeNet
- DenseNet
- VGGNet
- LeNet
- GAN
- NiN
- STN
- VAE
- RNN
- LSTM
- GRU
- Neural Network for Time Series