CUDA out of memory. #21

Open · penghusile opened this issue Nov 28, 2021 · 1 comment

penghusile commented Nov 28, 2021

Hello, while reproducing the vit-tiny model I used four 2080 Ti GPUs with your configuration, but training still fails with a CUDA out of memory error. What could be the cause? My configuration script is as follows:

RANDOM=$$
GPU=0,1,2,3
CUDA_VISIBLE_DEVICES=${GPU} \
python3 train.py --train_data data_lmdb_release/training \
--valid_data data_lmdb_release/evaluation \
--select_data MJ-ST \
--batch_ratio 0.5-0.5 \
--Transformation None \
--FeatureExtraction None \
--SequenceModeling None \
--Prediction None \
--Transformer \
--TransformerModel vitstr_tiny_patch16_224 \
--imgH 224 \
--imgW 224 \
--manualSeed=$RANDOM \
--sensitive \
--valInterval 5000 \
--workers 6 \
--batch_size 48
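
Before changing anything in the script, it is worth checking how much memory each card already has in use when training starts; a 2080 Ti only has about 11 GB, and a desktop session or another job on the same card can push the run over the limit. A minimal way to watch this (plain nvidia-smi usage, not part of the repository's scripts) is:

# Print per-GPU used/total memory every 2 seconds while train.py is running,
# to see whether one of the four cards is already partly occupied.
nvidia-smi --query-gpu=index,name,memory.used,memory.total --format=csv -l 2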
roatienza (Owner) commented Nov 28, 2021

It should run even on a single GPU. For instance, running the same script, the memory consumption reported by nvidia-smi is:

| 3 N/A N/A 125879 C python3 9087MiB |
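
Since a 2080 Ti has roughly 11 GB, the 9087 MiB shown above should fit as long as the card is otherwise idle. If it still does not, a workaround that is not stated in this thread but uses only flags already present in the script above is to rerun it with a smaller --batch_size, for example:

# Hypothetical variant of the original script: identical flags, only --batch_size halved.
# Activation memory scales roughly with the batch size, so this should stay well below 9 GB.
CUDA_VISIBLE_DEVICES=0 \
python3 train.py --train_data data_lmdb_release/training \
  --valid_data data_lmdb_release/evaluation \
  --select_data MJ-ST \
  --batch_ratio 0.5-0.5 \
  --Transformation None \
  --FeatureExtraction None \
  --SequenceModeling None \
  --Prediction None \
  --Transformer \
  --TransformerModel vitstr_tiny_patch16_224 \
  --imgH 224 --imgW 224 \
  --manualSeed=$RANDOM \
  --sensitive \
  --valInterval 5000 \
  --workers 6 \
  --batch_size 24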
