I used the following example command from your README:

```bash
python3 -m accelerate.commands.launch \
    --num_processes=8 \
    -m lmms_eval \
    --model llava \
    --model_args pretrained="liuhaotian/llava-v1.5-7b" \
    --tasks mme \
    --batch_size 1 \
    --log_samples \
    --log_samples_suffix llava_v1.5_mme \
    --output_path ./logs/
```
I found that all 8 GPUs run inference on the same examples. How can I distribute different examples across the GPUs to save time? Or is there something wrong with my command?
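For reference, here is a minimal standalone sketch of how `accelerate` normally shards a DataLoader across processes, which is the behavior I expected. The toy dataset and script here are my own illustration, not part of lmms-eval:

```python
# Minimal sketch: verify that `accelerate` gives each rank a distinct data shard.
# Toy dataset for illustration only; run with:
#   accelerate launch --num_processes=8 check_sharding.py
from accelerate import Accelerator
from torch.utils.data import DataLoader

accelerator = Accelerator()

dataset = list(range(16))             # stand-in for evaluation examples
loader = DataLoader(dataset, batch_size=1)
loader = accelerator.prepare(loader)  # wraps the loader to shard across ranks

for batch in loader:
    # With proper data parallelism, each rank prints different indices;
    # if every rank prints the same indices, the data is not being sharded.
    print(f"rank {accelerator.process_index}: {batch.tolist()}")
```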