Out of memory after 1 epoch using DenseNet-BC, 100 layers (growth_rate=12) #7
Comments
Hi @ZhenyF, this is the command I used:

`python3 main.py --arch densenet --depth 100 --growth-rate 12 --bn-size 4 --compression 0.5 --data cifar10+ --epochs 300 --save save/cifar10+-densenet-bc-100`

I also tried batch_size 128, which used about 5.0 GB. If it still doesn't work, you may try this memory-efficient implementation by my friend Geoff.
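For context, memory-efficient DenseNet implementations generally trade compute for memory by recomputing the bottleneck activations during the backward pass instead of caching them. Below is a minimal sketch of that idea using `torch.utils.checkpoint`; it is an illustration only, not the linked implementation, and the layer structure is a simplified assumption.

```python
# Sketch: gradient checkpointing for a DenseNet-BC bottleneck layer.
# Intermediate activations are recomputed in the backward pass rather than
# stored, which lowers peak GPU memory at the cost of extra compute.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class DenseLayer(nn.Module):
    def __init__(self, in_channels, growth_rate=12, bn_size=4):
        super().__init__()
        self.bottleneck = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, bn_size * growth_rate, kernel_size=1, bias=False),
            nn.BatchNorm2d(bn_size * growth_rate),
            nn.ReLU(inplace=True),
            nn.Conv2d(bn_size * growth_rate, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        if self.training and x.requires_grad:
            # Recompute the bottleneck during backward instead of caching activations.
            out = checkpoint(self.bottleneck, x)
        else:
            out = self.bottleneck(x)
        # DenseNet concatenates each layer's output with its input features.
        return torch.cat([x, out], dim=1)
```

Note that naive checkpointing interacts with BatchNorm running statistics (they are updated again during the recomputation), which is one reason a dedicated memory-efficient implementation is more involved than this sketch.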
Many thanks for the reply! @felixgwu

```
(pytorch) D:\GA\PYTorch\img_classification_pk_pytorch-master>python main.py --data cifar10+ --depth 100 --save save/cifar10+-densenetBC12_100 --arch densenet_eff
Number of params: 769162
Epoch 1 lr = 1.000000e-01
```
Hi @ZhenyF, it seems that you're using the Windows version of PyTorch. Could it be a bug in the Windows build?
Hi @taineleau
Hi @ZhenyF,
Hi,
I tried the DenseNet you recommended and set growth_rate=12, depth=100, and batch_size=128 on two GTX 1080 Ti GPUs.
It seems that training stops after one epoch with an out-of-memory error.
Could you please help me with this?
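Not part of the original report, but a small diagnostic can help narrow down where the memory spike happens, for example whether it is the first validation pass rather than training. The sketch below assumes a reasonably recent PyTorch with `torch.cuda.max_memory_allocated` and `torch.cuda.reset_peak_memory_stats`; `train_one_epoch` and `validate` are hypothetical placeholders for the project's own loop.

```python
import torch

def report_gpu_memory(tag):
    # Print and reset the peak tensor memory allocated on each visible GPU.
    for i in range(torch.cuda.device_count()):
        peak_gb = torch.cuda.max_memory_allocated(i) / 1024 ** 3
        print(f"[{tag}] GPU {i}: peak {peak_gb:.2f} GB")
        torch.cuda.reset_peak_memory_stats(i)

# Hypothetical usage inside the training loop:
# for epoch in range(num_epochs):
#     train_one_epoch(model, train_loader)
#     report_gpu_memory(f"train epoch {epoch}")
#     with torch.no_grad():   # avoids keeping autograd graphs during evaluation
#         validate(model, val_loader)
#     report_gpu_memory(f"val epoch {epoch}")
```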