I'm new to dask and trying to parallelize some calculations on an HPC cluster using SLURMCluster. Apparently, I've hit a memory leak similar to the one discussed in #5960. Below is a reproducer of the code I run (the only difference is that in my real workflow I submit a scipy.optimize.minimize function with some args, then get the result and save it to a file).
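The original snippet is not reproduced here, but the structure described above is roughly the following (a minimal sketch; the SLURMCluster arguments, the dummy task, and the loop sizes are assumptions, with the dummy task standing in for the real scipy.optimize.minimize call):

```python
import numpy as np
from dask.distributed import Client
from dask_jobqueue import SLURMCluster


def work(i):
    # Placeholder for the real scipy.optimize.minimize call.
    return np.sum(np.random.random(10_000))


def go():
    # Creating the cluster outside of go() makes no difference.
    cluster = SLURMCluster(cores=1, memory="2GB", walltime="00:10:00")
    cluster.scale(jobs=10)
    with Client(cluster) as client:
        for iteration in range(10):
            futures = [client.submit(work, i, pure=False) for i in range(10)]
            results = client.gather(futures)
            # "Aggressive deletion" -- doesn't affect the memory usage.
            del futures, results


if __name__ == "__main__":
    go()
```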
There is also a graph of the mprof memory usage for the script above:

It's clear to me from that graph that something is wrong with submitting a new future (the 10 peaks correspond to the 10 futures submitted on each iteration). Creating the SLURMCluster outside of the function go() doesn't make any difference, and the lines with aggressive deletion don't really affect the memory usage either.
Here I used dask 2022.1.1 and distributed 2022.1.1, following the temporary workaround from #5960, but that didn't work for me either. The situation is the same with the latest (2024.8.0) versions of dask and distributed.

In my original case, where I submit around 500 futures on 100 workers, the leak amounts to a few GB per iteration.
I'm curious whether this is specific to SLURM. Are you able to reproduce this memory leak with LocalCluster? If not, then we may want to transfer this issue to dask-jobqueue.
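Something along these lines should be enough to check (a sketch; the worker count and the dummy task are placeholders, not taken from your script):

```python
# Same submit-in-a-loop pattern, but on a LocalCluster instead of SLURM.
from dask.distributed import Client, LocalCluster


def work(i):
    # Placeholder task standing in for the real workload.
    return i * i


if __name__ == "__main__":
    with LocalCluster(n_workers=4, threads_per_worker=1) as cluster, Client(cluster) as client:
        for iteration in range(10):
            futures = [client.submit(work, i, pure=False) for i in range(10)]
            client.gather(futures)
```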
I'm also providing my bash script just in case:
Env
dask==2022.01.1/2024.8.0
distributed==2022.01.1/2024.8.0
dask_jobqueue==0.8.5