This repository has been archived by the owner on Sep 8, 2021. It is now read-only.
Hi, thanks for this amazing repo!

I've encountered an issue while using multiple GPUs to generate saliency maps with `torchray.attribution`. My model is wrapped in `torch.nn.DataParallel` and I'm trying to use 4 GPUs. However, when the batch size is set to `4*m`, the first dimension of the returned saliency map is always `m`. I've checked the code, and the issue seems to occur in the `Probe` class in `common.py`: it appears to capture the gradient on only one device. Do you have any ideas on how to solve this?
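For reference, a minimal sketch of the setup described above. The backbone model, layer name, and `grad_cam` call are illustrative assumptions, not the exact code from the report; any `torchray.attribution` method that relies on `Probe` would show the same symptom.

```python
import torch
import torchvision
from torchray.attribution.grad_cam import grad_cam

m = 8
# Hypothetical backbone; the report does not say which model is used.
model = torchvision.models.vgg16(pretrained=True)
model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3]).cuda().eval()

x = torch.randn(4 * m, 3, 224, 224).cuda()   # batch of 4*m images
category_id = 281                            # arbitrary target class

saliency = grad_cam(model, x, category_id, saliency_layer='module.features.29')
print(saliency.shape[0])  # reported as m instead of 4*m
```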
Sorry for the delayed reply. Currently, multiple GPUs are not supported. I'd recommend moving the model to a single GPU. We may look into supporting multiple GPUs in the future.
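A minimal sketch of that workaround, assuming the model was wrapped in `DataParallel` as above; the `grad_cam` call and layer name are again illustrative placeholders.

```python
import torch
from torchray.attribution.grad_cam import grad_cam  # or any other torchray.attribution method

# Unwrap DataParallel and keep the model on a single device.
if isinstance(model, torch.nn.DataParallel):
    model = model.module
device = torch.device('cuda:0')
model = model.to(device).eval()

x = x.to(device)  # full batch of 4*m images on one GPU
saliency = grad_cam(model, x, category_id, saliency_layer='features.29')
print(saliency.shape[0])  # now matches the full batch size, 4*m
```

If the full batch does not fit on one GPU, splitting it into smaller chunks and concatenating the resulting saliency maps along the batch dimension is a straightforward alternative.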