Dear cell2fate developers,
I am encountering a CUDA Out of Memory error when running the export_posterior method on my server with an NVIDIA T400 4GB GPU. Despite having a GPU available, the method seems to require more memory than my GPU can provide, leading to the following error:
RuntimeError: CUDA out of memory. Tried to allocate 2.21 GiB (GPU 0; 3.80 GiB total capacity; 2.21 GiB already allocated; 297.38 MiB free; 2.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF.
Environment Details:
Server Configuration:
GPU: NVIDIA T400 4GB
CUDA Version: 10.2
Questions:
Is there a way to control the memory usage of the export_posterior method, such as by setting a batch size or using a different memory management strategy?
Are there any specific recommendations for running cell2fate on GPUs with limited memory capacity (e.g., 4GB)?
If the method cannot be run on a GPU with limited memory, is there a way to force it to run on the CPU instead?
Any guidance or suggestions on how to resolve this issue would be greatly appreciated. Thank you for your time and support!
Best regards,
Bella
In the snippet below I have reduced the number of posterior samples from 30 to 10 and decreased the batch size from the full dataset to 1000 cells. You can lower these values further if you still run out of memory.
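A minimal sketch of that call, assuming the scvi-tools-style `export_posterior` signature used in the cell2fate tutorials; `mod` is your trained cell2fate model, `adata` is the AnnData object it was trained on, and the exact `sample_kwargs` keys may differ in your installed version:

```python
# Sketch of a lower-memory export_posterior call (argument names assumed
# from the standard cell2fate / scvi-tools tutorials -- verify against
# your installed version). `mod` is the trained cell2fate model and
# `adata` the AnnData object it was trained on.
adata = mod.export_posterior(
    adata,
    sample_kwargs={
        "num_samples": 10,   # down from the default of 30 posterior samples
        "batch_size": 1000,  # sample 1000 cells at a time instead of the whole dataset
    },
)
```

With `batch_size` set, posterior samples are computed over mini-batches of cells, so peak GPU memory scales with the batch size rather than the size of the whole dataset.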
You can also directly export the median and the 0.05 and 0.95 quantiles for all parameters, which is the fastest option if you do not need full posterior samples to estimate parameter variance, for example:
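A sketch of that quantile-only export, assuming the cell2location-style `use_quantiles` / `add_to_varm` arguments that cell2fate shares via scvi-tools; these argument names are an assumption, so check the docstring of your installed version:

```python
# Sketch of a quantile-only export, which avoids drawing and storing full
# posterior samples on the GPU. Argument names (`use_quantiles`,
# `add_to_varm`) are assumed from the cell2location-style API and should
# be checked against your cell2fate version's docstring.
adata = mod.export_posterior(
    adata,
    use_quantiles=True,
    add_to_varm=["q05", "q50", "q95"],  # 5% quantile, median, 95% quantile
    sample_kwargs={"batch_size": 1000},
)
```

Because this computes the requested quantiles directly instead of materialising many posterior samples, it is both faster and considerably lighter on GPU memory.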