
Command line argument to limit posterior generation memory usage #98

Closed
sjfleming opened this issue May 3, 2021 · 3 comments
@sjfleming (Member)

Create a new input argument to control the batch size used by the posterior inference step.

Unclear if the problematic line is here, or here:

```python
n_cells = min(100, cell_inds.size)
```

but figure this out.

Also explore memory usage during posterior generation. Why is it spiking? Is this avoidable?
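The requested argument might look like the following sketch; note that `--posterior-batch-size` is a hypothetical flag name and default, not necessarily what any release actually used:

```python
import argparse

# Hypothetical CLI flag for capping posterior-generation memory use.
parser = argparse.ArgumentParser(prog="cellbender remove-background")
parser.add_argument(
    "--posterior-batch-size",
    type=int,
    default=20,
    help="Mini-batch size used during posterior inference; smaller values "
         "reduce peak (GPU) memory at the cost of runtime.",
)

# Example invocation: the parsed value would be threaded through to infer.py.
args = parser.parse_args(["--posterior-batch-size", "10"])
```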

@sjfleming sjfleming added the enhancement New feature or improvement label May 3, 2021
@sjfleming sjfleming added this to the v0.2.1 milestone May 3, 2021
@sjfleming sjfleming self-assigned this May 3, 2021
@sjfleming (Member Author)

Totally clearing out CUDA memory after training, but before posterior generation, seems to help a lot. Not sure it will completely solve this for everybody, but the change will be in the 0.3.0 release and should fix the issue for most people.
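A minimal sketch of that cleanup step, assuming PyTorch is the GPU backend (as in CellBender); the helper name is made up, and the torch import is guarded so the sketch degrades gracefully to plain garbage collection on CPU-only machines:

```python
import gc


def free_accelerator_memory():
    """Release cached GPU memory between training and posterior generation.

    Hypothetical helper: if torch (or a CUDA device) is unavailable,
    fall back to plain Python garbage collection and return False.
    """
    gc.collect()  # drop unreferenced Python-side tensors first
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached blocks to the CUDA driver
            return True
    except ImportError:
        pass
    return False
```

Calling this once, right after training finishes and before the posterior loop starts, is the pattern described in the comment above.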

@hoondy commented Jun 24, 2022

Changing `batch_size=20` and `n_cells = min(10, cell_inds.size)` in `CellBender/cellbender/remove_background/infer.py` worked for me.
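The interaction of those two knobs can be sketched as a chunking loop; this is an illustration of the idea, not the actual code in `infer.py`:

```python
import numpy as np


def posterior_batches(cell_inds, batch_size=20):
    """Yield mini-batches of cell indices for posterior generation.

    Sketch only: `batch_size` plays the role of the hard-coded 100
    (or hoondy's 10) in `n_cells = min(..., cell_inds.size)`. Smaller
    batches mean smaller intermediate tensors and lower peak memory.
    """
    if cell_inds.size == 0:
        return
    n_cells = min(batch_size, cell_inds.size)  # never exceed available cells
    for start in range(0, cell_inds.size, n_cells):
        yield cell_inds[start:start + n_cells]


# 45 cells processed in batches of 20 -> chunks of 20, 20, and 5
sizes = [len(b) for b in posterior_batches(np.arange(45), batch_size=20)]
# sizes == [20, 20, 5]
```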

@sjfleming sjfleming mentioned this issue Mar 28, 2023
@sjfleming sjfleming mentioned this issue Aug 6, 2023
@sjfleming (Member Author)

Closed by #238
