`.impute` consumes too much memory #729
Comments
Hi @Marius1311, yes, I've observed this multiple times as well and reported it privately to @michalk8.
> I talked to @michalk8 about this and it's probably a `vmap` that creates an array of the wrong shape. For now, we could solve this by evaluating the pull batch-wise over small sets of genes. This is inefficient, but would solve the issue for now.

And yes, I also think this is due to `vmap`. In my experience this holds for GW problems as well, and not only for imputation but also e.g. for cell transition: basically anywhere you want to apply the transport matrix.
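The gene-batched evaluation proposed here can be sketched in plain NumPy (illustrative names and shapes, not moscot's implementation):

```python
import numpy as np

def pull_batched(P, X, gene_batch=128):
    """Pull X (n_target x n_genes) through P (n_source x n_target) in
    chunks of genes, so no intermediate larger than P is materialized."""
    chunks = [P @ X[:, i:i + gene_batch] for i in range(0, X.shape[1], gene_batch)]
    return np.concatenate(chunks, axis=1)

# Tiny check against the unbatched pull (scaled-down stand-in shapes).
rng = np.random.default_rng(0)
P = rng.random((50, 40))  # stand-in transport matrix
X = rng.random((40, 30))  # stand-in single-cell GEX matrix
assert np.allclose(pull_batched(P, X, gene_batch=7), P @ X)
```

Each chunk only ever needs `P` plus an `n_source x gene_batch` slice of the result, which is why this works as a temporary fix despite being inefficient.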
This is a solution, but it would require a considerable amount of work, since various mixin methods use that operation.
Yes, I agree with you @giovp, batch-wise evaluation isn't really the way to go; this can only be a temporary fix. For me personally, materializing the transport matrix before calling `impute` prevents the memory issue.
Pinging @michalk8. I think we reported this personally on Slack several other times: basically any time we do a push/pull of the tmap, an unusual amount of memory is used. AFAIK @Marius1311 mentioned that you were working on some refactoring of the geometries in ott; is there any update?
@selmanozleyen let's please ensure that this is no longer a problem after the update to ott-jax 0.5.0.
@MUCDK I ran a small benchmark with the code I wrote last year in #639 (comment). It suggests that peak memory stabilizes after some point. Since this was a very small, quick benchmark on my local system, it's not entirely clear to me that ott-jax has solved this issue on their side. If the issue remains, we should look at the ott-jax side first again. Let's see what the feedback is after the update.
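For reference, peak host memory of such a pull can be sanity-checked on CPU with the standard library's `tracemalloc` (a generic sketch with NumPy stand-ins, not the actual benchmark code from #639):

```python
import tracemalloc
import numpy as np

# Peak traced memory for a healthy pull should be on the order of
# P + X + prediction combined, not orders of magnitude larger.
tracemalloc.start()
P = np.ones((500, 400), dtype=np.float32)  # stand-in transport matrix
X = np.ones((400, 300), dtype=np.float32)  # stand-in GEX matrix
pred = P @ X
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"peak traced host memory: {peak / 1e6:.2f} MB")
```

Note this only tracks host allocations; device-side JAX buffers would need a different tool.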
I'm trying to call `problem.impute()` on a solved (linear) spatial mapping problem of dimensions `n_source=17806` (spatial data) by `n_target=13298` (single-cell data) for `n_genes=2039`. This is just a full-rank Sinkhorn problem with `batch_size=None`. Under the hood, this evaluates:
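For concreteness, the shapes above imply the following memory footprints (plain Python arithmetic, not moscot code; `P` and `X` denote the transport and GEX matrices described below):

```python
# Memory arithmetic for the shapes in this issue: the intended pull
# needs only P (plus X and the prediction), while the observed failure
# allocates an n_genes x n_source x n_target array.
n_source, n_target, n_genes = 17_806, 13_298, 2_039
itemsize = 4  # float32

bytes_P = n_source * n_target * itemsize
bytes_bad = n_genes * n_source * n_target * itemsize

print(f"P:              {bytes_P / 2**30:.2f} GiB")   # ~0.88 GiB (~903 MB)
print(f"f32[2039, ...]: {bytes_bad / 2**40:.2f} TiB") # ~1.76 TiB
```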
The `pull` amounts to a matrix multiplication, `prediction = P @ X`, for a transport matrix `P` of shape `17806 x 13298` and a single-cell GEX matrix `X` of shape `13298 x 2039`. Thus, the memory bottleneck should be `P`, which is stored as `float32` and should consume around 903 MB of memory. However, the call to `impute` fails (see traceback below) because it requests 1.76 TiB of memory: it tries to create an array of shape `f32[2039,17806,13298]`, which is not needed for this operation.

Note that passing a batch size does not help much. Say I pass `batch_size=500`; this would still request an array of shape `2039 x 500 x 13298`, which still requires over 50 GB of memory. It also slows down solving the actual OT problem, which is not necessary from a memory point of view.

I talked to @michalk8 about this and it's probably a `vmap` that creates an array of the wrong shape. For now, we could solve this by evaluating the pull batch-wise over small sets of genes. This is inefficient, but would solve the issue for now.

If the transport matrix fits into CPU memory, the current best way to go about this is to materialize the transport matrix before calling `impute`. That prevents the memory issue.
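Sketched with dense NumPy stand-ins (the exact accessor used to materialize moscot's transport matrix isn't shown in this thread, so `P` here is just a placeholder dense array):

```python
import numpy as np

# Scaled-down stand-ins: P plays the role of the materialized transport
# matrix, X the single-cell GEX matrix.
n_source, n_target, n_genes = 60, 50, 8
rng = np.random.default_rng(0)
P = rng.random((n_source, n_target)).astype(np.float32)
X = rng.random((n_target, n_genes)).astype(np.float32)

# With P materialized, the pull is a single dense matmul; no
# f32[n_genes, n_source, n_target] intermediate is ever created.
prediction = P @ X
assert prediction.shape == (n_source, n_genes)
```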
Traceback: