Importing QuPathProject taking up >100GB memory #69
Hello @hwarden162,

Virtual memory is not a good proxy for real memory usage. Since paquo starts the JVM with a default setting that caps RAM usage at 50% of the system's RAM (see: https://github.com/bayer-science-for-a-better-life/paquo/blob/51102306e7fc7d144807656641a2589233f57b11/paquo/.paquo.defaults.toml#L28 ), the value you report is a pretty good match. I believe the memory issue you mention is probably caused by something else.

Could you provide the code snippet that reproduces the memory error, and the traceback or error message that is displayed when it occurs?

Cheers,
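As a back-of-the-envelope check: a flag like `-XX:MaxRAMPercentage=50` (the kind of default the linked file sets) lets the JVM reserve up to half of physical RAM. A minimal sketch of what that ceiling resolves to on a Linux machine, using only the standard library (the function name here is illustrative, not part of paquo):

```python
import os

def max_ram_percentage_bytes(percentage: float) -> int:
    """Approximate the heap ceiling the JVM derives from -XX:MaxRAMPercentage."""
    # Total physical memory = page size * number of physical pages (POSIX/Linux).
    total = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    return int(total * percentage / 100.0)

# On a 200 GB node, MaxRAMPercentage=50 allows roughly a 100 GB heap,
# which is consistent with the ~102.7g reported under VIRT.
print(max_ram_percentage_bytes(50))
```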
Hi @ap--,

This is the code I run:

It is the

However, if I run the same code but with

Thanks,
Thanks for the traceback and the additional explanation! It looks like I was wrong and we really do reserve 50% of RAM. The good thing is that you can easily override the setting, but we should change the default to have a reasonable upper limit.

To fix your issue, follow the instructions here: https://paquo.readthedocs.io/en/latest/configuration.html to create a custom paquo configuration file on your cluster, replacing the JAVA_OPTS setting.

Let me know if anything is unclear and whether this solves your issue.

Cheers,
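For reference, a sketch of what such an override might look like. The key name follows the defaults file linked above; the section header and the 4 GB cap are example assumptions, so check your paquo version's defaults file for the exact spelling before using this:

```toml
# .paquo.toml -- hypothetical override of paquo's default JVM options
[paquo]
java_opts = [
    "-Xmx4g",  # example: hard-cap the JVM heap at 4 GB instead of 50% of system RAM
]
```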
I ran the

I then ran

to

Now,

so it looks like I successfully altered the .paquo.toml, but importing QuPathProject is still taking up half of my memory (measured as before), and I am still getting the same error as before when trying to normalise my image (and going over my memory allowance). I have tried completely exiting conda and my session on the terminal to see whether rebooting would help, but I am still getting the error.

Thanks,

EDIT1: Put a backslash before hashtags to fix the markdown formatting
Hmmm, just to be on the safe side: I'd now try to go back to
I am definitely running

I changed the .paquo.toml, putting in

Without importing paquo, the peak memory usage of my script appears to be ~38.2 GB.
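For measuring the script's own peak usage (rather than reading VIRT in top), the standard-library `resource` module is one option. A sketch, assuming Linux, where `ru_maxrss` is reported in KiB (on macOS it is bytes):

```python
import resource

def peak_rss_gib() -> float:
    # ru_maxrss is the peak resident set size of this process so far:
    # KiB on Linux (assumed here, matching the cluster in this thread).
    kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return kib / (1024.0 ** 2)

print(f"peak RSS so far: {peak_rss_gib():.2f} GiB")
```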
Thanks for checking. Something definitely seems to be off, considering your script requires ~40 GB and you still run out of memory on a 200 GB machine. I'll try to create a test case that reproduces the OOM crash in a more controlled environment, and then we can hopefully improve the default behavior of paquo.

In the meantime, my recommendation would be to lazily import paquo when you need it, after the bigger computations are done, or to factor the paquo part out into a separate script. I'll update this issue when I start working on this.

Cheers,
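The lazy-import suggestion can be sketched as follows. Here `json` stands in for paquo so the snippet runs anywhere, but the pattern is the same: do the memory-heavy work first, and import the JVM-backed library only inside the function that needs it.

```python
def run_heavy_computation():
    # ...big-memory work happens here, before the JVM ever starts...
    return [i * i for i in range(10)]

def export_results(results):
    # Deferred import: the module is loaded only when this function runs.
    # With paquo this line would be `from paquo.projects import QuPathProject`,
    # so the JVM (and its RAM reservation) starts only after the heavy step.
    import json  # stand-in for the heavy import
    return json.dumps(results)

results = run_heavy_computation()
print(export_results(results))
```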
Thanks very much! Paquo is a really useful bit of kit that is giving me a lot more flexibility in my workflows. Please let me know if I can help with any more information about the bug.

Best
I am trying to use paquo on a Linux cluster. I log in to a node with 200 GB of memory.
Once on the node I use tmux to open two terminals on the same node. In one of the terminals I activate a conda environment, open a Python session, and run
I then go to my other window and run
top -p <my_pid>
which, as I understand it, allows me to see how much memory I am using. At this point it shows 134060 under VIRT.
In the python window I then run
and then it shows 102.7g under VIRT. As I understand it, this means the QuPathProject object is currently using 102.7 GB of memory. This then gives me errors further down in my pipeline, where I try to do much smaller operations but don't have any memory left.
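(As the maintainer notes above, top's VIRT column is the virtual address space, not memory actually in use; RES, the resident set, is closer to real usage. A Linux-only sketch that reads both for the current process from /proc, using the field names documented in proc(5):)

```python
def memory_kib(field: str) -> int:
    # Parse /proc/self/status (Linux-only). "VmSize" is what top shows
    # as VIRT; "VmRSS" is the resident set, shown as RES.
    with open("/proc/self/status") as fh:
        for line in fh:
            if line.startswith(field + ":"):
                return int(line.split()[1])  # value is in kB
    raise KeyError(field)

print("VIRT:", memory_kib("VmSize"), "kB")
print("RES: ", memory_kib("VmRSS"), "kB")
```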
Here is my conda environment
environment(1).txt