I have 4 GPUs. When I run train.py with --num_samples 1 --gpu 4, only one GPU shows any utilization.
Is it because the model does not support multiple GPUs?
But when I run search.py with --num_samples 16 --gpu 0.25, all GPUs show utilization.
I've added CUDA_VISIBLE_DEVICES=0,1,2,3 to the scripts.
There is no problem with search.py: when running search.py, gpu is a fractional value and num_samples > 1, so it works with multiple GPUs on Ray.
But I mean train.py with --num_samples=1 --gpu=4 ... Here the GPU count is > 1 and num_samples=1, which means resources_per_trial: {"gpu": 4}. Training the classification model is too slow; it does not run on 4 GPUs, and only one GPU shows utilization.
So, when --num_samples=1 and --gpu > 1, can the model not use multiple GPUs on Ray?
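For context, here is a minimal sketch of what I understand `resources_per_trial={"gpu": 4}` to mean, assuming the older Ray Tune function API and a PyTorch model (`train_fn` is a hypothetical stand-in for the project's training code, not the actual train.py): Ray only *reserves* the 4 GPUs for the trial, and the trainable itself would have to spread the model across them, e.g. with `nn.DataParallel`, for more than one GPU to show utilization.

```python
import torch
import torch.nn as nn
from ray import tune

def train_fn(config):
    # Stand-in for the real classifier; not the project's actual model.
    model = nn.Linear(128, 10)
    if torch.cuda.device_count() > 1:
        # Without this (or DistributedDataParallel), the forward/backward pass
        # runs on a single GPU even though Ray reserved all four for the trial.
        model = nn.DataParallel(model)
    model = model.cuda()
    for _ in range(10):
        x = torch.randn(256, 128).cuda()
        loss = model(x).sum()
        loss.backward()
        tune.report(loss=loss.item())

# --num_samples 1 --gpu 4  ->  one trial that reserves 4 GPUs
tune.run(train_fn, num_samples=1, resources_per_trial={"gpu": 4})
```

If that reading is right, a single trial with `{"gpu": 4}` will only use one GPU unless the training loop is explicitly multi-GPU, whereas 16 trials at 0.25 GPU each naturally spread across all four devices.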