File "/workspace/long/yarn/finetune.py", line 143, in main
model = accelerator.prepare(model)
File "/root/miniconda3/envs/yarn/lib/python3.10/site-packages/accelerate/accelerator.py", line 1280, in prepare
result = self._prepare_deepspeed(*args)
File "/root/miniconda3/envs/yarn/lib/python3.10/site-packages/accelerate/accelerator.py", line 1515, in _prepare_deepspeed
raise ValueError(
ValueError: When using DeepSpeed accelerate.prepare() requires you to pass at least one of training or evaluation dataloaders or alternatively set an integer value in train_micro_batch_size_per_gpu in the deepspeed config file or assign integer value to AcceleratorState().deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu'].
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 1523510) of binary: /root/miniconda3/envs/yarn/bin/python
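Per the ValueError above, one fix is to set an integer train_micro_batch_size_per_gpu in the DeepSpeed config file that accelerate is launched with. A minimal sketch of such a config; the value 1 and the other fields are illustrative assumptions, not values from this issue:

```json
{
  "train_micro_batch_size_per_gpu": 1,
  "gradient_accumulation_steps": "auto",
  "zero_optimization": {
    "stage": 2
  }
}
```

Alternatively, as the message notes, the same key can be assigned in code via AcceleratorState().deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu'] before calling accelerator.prepare(), or a training/evaluation dataloader can be passed to prepare() so accelerate can infer the batch size.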