To avoid errors like the one reported below when running bundles in CPU mode, all bundle weights should be stored as CPU tensors. The alternative is to ensure that any `CheckpointLoader` objects used have `map_location` set to something that works when CUDA is not present.
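As a minimal sketch of the second option (the network and path here are illustrative, not those of any particular bundle), `CheckpointLoader` accepts a `map_location` argument that is passed through to `torch.load`:

```python
import torch
from monai.handlers import CheckpointLoader
from monai.networks.nets import BasicUNet  # stand-in network for illustration

net = BasicUNet(spatial_dims=3, in_channels=1, out_channels=2)

# map_location remaps tensors that were saved on a CUDA device onto the CPU
# while the checkpoint is deserialized, so no CUDA runtime is needed to load it.
loader = CheckpointLoader(
    load_path="models/model.pt",        # illustrative path
    load_dict={"model": net},
    map_location=torch.device("cpu"),
)
```

In a bundle config the equivalent would be adding a `map_location` entry to the `CheckpointLoader` handler definition, for example pointing it at the config's `@device` value.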
For bundles that support CPU-only operation, some way of testing them without CUDA present would also be useful.
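One common way to exercise that CPU-only path on a machine that does have GPUs is to hide them before `torch` is imported; this is a general PyTorch technique rather than anything bundle-specific:

```python
import os

# With no visible devices, torch.cuda.is_available() returns False, which
# approximates a CUDA-free environment for testing bundle loading and inference.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch
assert not torch.cuda.is_available()
```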
"RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU."
To avoid issues when running bundles in CPU mode like that encountered below, all bundle weights should be stored on CPU. The alternative solution is to ensure any
CheckpointLoader
objects used have amap_location
set to something which can be used without CUDA being present.For those that support CPU-only operations, some way of testing bundles without the presence of CUDA might be nice too.
Discussed in #516
Originally posted by mpsampat October 10, 2023
Hello!
I am trying to run the inference.json for the wholeBody_ct_segmentation bundle. The inference.json file is here:
https://github.com/Project-MONAI/model-zoo/blob/dev/models/wholeBody_ct_segmentation/configs/inference.json
When I run on a CPU with 128 GB of memory, I get this error:
"RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU."
I tried to change line 15 in the inference.json file:
https://github.com/Project-MONAI/model-zoo/blob/dev/models/wholeBody_ct_segmentation/configs/inference.json#L15
I changed it from
"device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
to
"device": "cpu",
But I still get the same error as above.
Thanks,
Mehul
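A plausible explanation (not verified against this particular bundle) is that the `device` entry only controls where the network is placed, while the checkpoint file itself still contains CUDA tensors that `torch.load` tries to restore without a `map_location`. A minimal sketch of the workaround suggested above, re-saving the weights as CPU tensors so they can be loaded without CUDA (the path is illustrative):

```python
import torch

# Force every tensor in the checkpoint onto the CPU while loading, then write
# the checkpoint back so later loads no longer require a CUDA device.
state = torch.load("models/model.pt", map_location=torch.device("cpu"))
torch.save(state, "models/model.pt")
```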