torch MPS backend support #42
oh cool, thanks for letting us know! I didn't know that pytorch supports the M1. Since I have an M1 based machine, I will try to implement this in himalaya :)
Unfortunately, it looks like MPS support in pytorch is far from complete. Many linear algebra operators are not implemented yet (pytorch/pytorch#77764).
I don't think we can support the MPS backend in himalaya until all the linalg operations are fully implemented in pytorch.
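As an illustration of the limitation described above, here is a minimal sketch (my own, assuming PyTorch >= 1.12, where the MPS backend was introduced) that probes MPS availability and tries one representative linear-algebra operator; `torch.linalg.svd` stands in for whichever operators himalaya actually needs:

```python
import torch

# Fall back to CPU when the MPS backend is unavailable
# (MPS requires macOS 12.3+ on Apple Silicon).
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

x = torch.randn(8, 8, device=device)
try:
    # Representative linear-algebra call; on MPS builds without
    # linalg support this can raise NotImplementedError.
    u, s, vh = torch.linalg.svd(x)
    print("linalg works on", device)
except NotImplementedError:
    print("linalg not implemented on", device)
```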
Thanks for getting to this so quickly, @mvdoc !! Would it make sense to try the `PYTORCH_ENABLE_MPS_FALLBACK=1` flag as suggested in the linked thread? I completely understand if you'd rather wait until the full M1 linalg support is available, but it might also be nice to take advantage of what is currently available.
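For context, the flag is an environment variable read by PyTorch at import time, so it must be set before `import torch`; a minimal sketch:

```python
import os

# Must be set BEFORE importing torch: operators missing on MPS
# then fall back to CPU instead of raising NotImplementedError.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

# import torch  # imported only after the flag is in place
```

Equivalently, the variable can be exported in the shell before launching Python.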
Tom can correct me if I'm wrong, but I think that using that flag will defeat the purpose of using GPU support, and it will be equivalent to running himalaya with a CPU backend (`backend = set_backend("numpy")` or `backend = set_backend("torch")`). Most of the GPU acceleration we get in himalaya comes from the speed of those linear algebra operators on GPU. But if these operators are not implemented yet in pytorch for the MPS backend, I don't think there will be any noticeable speed-up in himalaya.
Himalaya solvers use GPUs to speed up two kinds of expensive computations. I think both improvements are important, so even though MPS does not yet support every operator involved, it may still be worth experimenting.
Locally I'm not able to confirm MPS support using either the stable or nightly build: I'm getting an error.
EDIT: It looks like this is set here and is indeed triggered as a warning in the thread I sent. So, no.
I will experiment a bit more and see what speedup we get even if we use `PYTORCH_ENABLE_MPS_FALLBACK=1`.
Well, I'm happy I was wrong (by a lot). I ran the voxelwise tutorial that fits the banded ridge model (it's actually one banded ridge model + one ridge model). We get a ~3x speed-up by using the MPS backend. There are still things to check (some tests fail with the MPS backend).
Another test: we don't get a noticeable speedup when running a simple ridge model.
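That result is consistent with the closed-form structure of plain ridge: the fit is a single linear solve, so on small-to-moderate problems fixed overhead dominates and a GPU adds little. A toy NumPy sketch of that solve (my own illustration, not himalaya's solver):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 100
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p)
y = X @ w_true + 0.1 * rng.standard_normal(n)
alpha = 1.0

# Closed-form ridge regression: solve (X^T X + alpha I) w = X^T y.
# One p x p linear solve: cheap at this scale, so device transfers
# and kernel-launch overhead can outweigh any GPU gain.
w = np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)
```

Banded ridge, by contrast, searches over per-band regularization and repeats such linear algebra many times, which is where the ~3x gain above comes from.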
Nice! Thanks for working on this.
Thanks for your great work making VM accessible!
I was looking into starting with himalaya, but it seems that you do not currently support pytorch's MPS backend for working on the M1 GPU. Is this correct?
As the MPS backend has been officially released for almost a year, it would be great to take advantage of it to accelerate himalaya models! Is this something that you would be interested in?
Thanks again,
Elizabeth