Completely clearing out CUDA memory after training, but before posterior generation, appears to help considerably. It may not fully resolve the issue for everyone, but the fix will be included in the 0.3.0 release and should address the problem for most people.
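A minimal sketch of what "clearing out CUDA memory" between the two stages could look like. The function name `free_cuda_memory` is hypothetical (not from the CellBender codebase); it uses only the standard PyTorch calls `torch.cuda.empty_cache()` and Python's `gc.collect()`, and degrades gracefully when no GPU (or no torch) is present:

```python
import gc


def free_cuda_memory():
    """Release dangling tensor references and cached GPU blocks.

    Hypothetical helper: gc.collect() drops unreachable Python objects
    (including tensors still pinned by reference cycles), and
    torch.cuda.empty_cache() then returns PyTorch's cached-but-unused
    GPU memory to the driver, so posterior generation starts from a
    clean slate rather than on top of training-time allocations.
    """
    gc.collect()
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        # torch unavailable in this environment; nothing GPU-side to free
        pass


# Example usage: call between training and posterior generation.
free_cuda_memory()
```

Note that `empty_cache()` only releases memory PyTorch has cached but is no longer using; tensors still referenced (e.g. by the trained model or optimizer state) must be dropped or moved to CPU first for this to recover a meaningful amount of GPU memory.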
Create a new input argument to control the batch size used during the posterior inference step.
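One way such an argument could be wired up, sketched with `argparse`. The flag name `--posterior-batch-size` and its default are assumptions for illustration, not the actual CellBender interface:

```python
import argparse

parser = argparse.ArgumentParser(description="posterior inference options")

# Hypothetical flag: the real name/default would be chosen in the PR.
# A smaller batch size trades speed for lower peak GPU memory during
# posterior generation.
parser.add_argument(
    "--posterior-batch-size",
    type=int,
    default=20,
    dest="posterior_batch_size",
    help="Number of barcodes per minibatch during posterior generation. "
         "Lower values reduce peak GPU memory usage at the cost of runtime.",
)

args = parser.parse_args(["--posterior-batch-size", "8"])
print(args.posterior_batch_size)
```

Exposing this as a user-facing knob lets people on smaller GPUs work around memory spikes without waiting for the underlying cause to be fixed.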
It is unclear whether the problematic line is

- `cellbender/remove_background/infer.py`, line 413 (commit `0feb5e0`), or
- `cellbender/remove_background/infer.py`, line 383 (commit `0feb5e0`),

but this needs to be figured out.
Also explore memory usage during posterior generation: why is it spiking, and is the spike avoidable?
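A lightweight way to investigate the spike is to bracket candidate lines with peak-memory checkpoints. The helper below is a hypothetical sketch (not CellBender code) built on the standard PyTorch calls `torch.cuda.max_memory_allocated()` and `torch.cuda.reset_peak_memory_stats()`; it returns `None` when no GPU is available:

```python
def report_peak_gpu_memory(label):
    """Report peak GPU memory allocated since the last reset, then reset.

    Hypothetical profiling helper: sprinkle calls around the suspect
    lines in infer.py to see which step drives the memory spike.
    Returns the peak in bytes, or None if CUDA/torch is unavailable.
    """
    try:
        import torch
    except ImportError:
        return None
    if not torch.cuda.is_available():
        return None
    peak = torch.cuda.max_memory_allocated()
    print(f"{label}: peak GPU memory allocated = {peak / 1e9:.2f} GB")
    # Reset so the next call measures only the next region of code.
    torch.cuda.reset_peak_memory_stats()
    return peak


# Example usage around a suspect region:
report_peak_gpu_memory("before posterior generation")
# ... run posterior generation step ...
report_peak_gpu_memory("after posterior generation")
```

Comparing the reported peaks on either side of lines 383 and 413 should pinpoint which call allocates the spike, and whether chunking its input (e.g. via the proposed posterior batch-size argument) would avoid it.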