tests: revert change of torch_require_multi_gpu to be device agnostic
Commit 11c27dd modified `torch_require_multi_gpu()` to be device agnostic
instead of CUDA specific. This broke some tests which are rightfully
CUDA specific, such as:
* `tests/trainer/test_trainer_distributed.py::TestTrainerDistributed`
In the current Transformers test architecture, `require_torch_multi_accelerator()`
should be used to mark device-agnostic multi-GPU tests.
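The distinction can be sketched with a simplified skip decorator. This is a minimal illustration of the pattern used by the markers in `transformers.testing_utils`, not the actual implementation; `fake_device_count()` is a hypothetical stand-in for `torch.cuda.device_count()` so the example stays self-contained:

```python
import unittest

def fake_device_count():
    # Hypothetical stand-in for torch.cuda.device_count(); an assumption
    # used here so the sketch runs without torch installed.
    return 1

def require_multi_gpu(test_case):
    """Sketch of a CUDA-specific multi-GPU marker: the test is skipped
    unless more than one (CUDA) GPU is available."""
    return unittest.skipUnless(
        fake_device_count() > 1, "test requires multiple GPUs"
    )(test_case)

class Demo(unittest.TestCase):
    @require_multi_gpu
    def test_needs_two_gpus(self):
        # Would only run on a machine reporting >1 device.
        self.assertTrue(True)
```

A device-agnostic marker such as `require_torch_multi_accelerator()` follows the same shape but checks the active backend's device count (CUDA, XPU, etc.) rather than CUDA alone, which is why CUDA-only tests must keep the CUDA-specific marker.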
This change addresses the issue introduced by 11c27dd by reverting the
modification of `torch_require_multi_gpu()`.
Fixes: 11c27dd ("Enable BNB multi-backend support (#31098)")
Signed-off-by: Dmitry Rogozhkin <[email protected]>