When I use DataParallel for multiple GPUs, it raises an error in the function sparsestmax:
RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorMathCompareT.cu:31
It seems that rad is only placed on gpu0. How should I reorganize the code so that rad is available on all GPUs?
Sometimes the DataParallel module in PyTorch shows random behavior and can be very slow, so we would recommend using distributed training instead.
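For reference, a minimal single-node DistributedDataParallel setup might look like the sketch below. This is an illustrative assumption, not the SSN training script: it assumes launch via torchrun (or torch.distributed.launch --use_env), which sets LOCAL_RANK and the other rendezvous variables in the environment, and nn.Linear stands in for the real model.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # The launcher sets LOCAL_RANK, RANK, WORLD_SIZE, MASTER_ADDR/PORT.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # nn.Linear is a placeholder for the actual SSN model.
    net = nn.Linear(4, 4).cuda(local_rank)
    net = DDP(net, device_ids=[local_rank])  # one process per GPU

    x = torch.randn(8, 4, device=f"cuda:{local_rank}")
    net(x).sum().backward()  # gradients are all-reduced across processes

if __name__ == "__main__":
    main()
```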
We have updated the SSN code to support DataParallel; please check it out and give it a try.
Note that we haven't verified the experiment results (ImageNet...) in DataParallel mode yet.
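If you want to keep using DataParallel yourself, one common fix for this class of error is to register rad as a buffer so that each replica gets its own copy on the right device, and to create any intermediate tensors on the input's device inside forward. The sketch below is a hedged illustration, not the actual SSN code: SparsestmaxDemo is a hypothetical module, and its use of rad as a scalar threshold only mimics the comparison that fails in sparsestmax.

```python
import torch
import torch.nn as nn

class SparsestmaxDemo(nn.Module):
    def __init__(self):
        super().__init__()
        # register_buffer (rather than a plain tensor attribute) makes
        # nn.DataParallel copy `rad` onto every replica's device.
        self.register_buffer("rad", torch.zeros(1))

    def forward(self, x):
        # Tensors created here must live on x.device, never a fixed GPU;
        # `self.rad` now already does, so the comparison runs per-GPU.
        return x * (x > self.rad).float()

model = nn.DataParallel(SparsestmaxDemo().cuda())
out = model(torch.randn(8, 4).cuda())
```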