OpenBLAS on aarch substantially slower than other BLAS flavours #160
All these numbers look a little strange to me - why would MKL be only marginally better than the entirely unoptimized reference implementation? And I'd expect even the baseline OpenBLAS - an MSVC build that can only make use of the generic C sources - to be slightly faster than the reference as well.
The answer is almost certainly that the cumulative runtime of blas/lapack calls, as a portion of the total runtime of the entire scipy test suite, gets dominated by random variability of the CI agent (memory and CPU contention, caches, etc.). Under this hypothesis, it only makes a visible difference when something in the blas/lapack interface takes way longer on average. (The difference between CI agents is likely also determined by other aspects than just the available CPU instructions - numpy does runtime dispatching, but none of our blas/lapack builds have been compiled to take advantage of AVX instructions, for example.)
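(Not part of the original comment - a minimal sketch of how one could check what a given numpy build actually dispatches to at runtime and which BLAS it links, assuming numpy >= 1.24 and threadpoolctl installed for the runtime report.)

```python
# Sketch: inspect build-time BLAS/LAPACK information and runtime SIMD dispatch.
import numpy as np

np.show_config()   # build-time BLAS/LAPACK configuration
np.show_runtime()  # runtime SIMD features + loaded threadpool libraries (numpy >= 1.24)
```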
Sounds like the issue in OpenMathLib/OpenBLAS#4582. Was the …
That issue was fixed in OpenMathLib/OpenBLAS#4587, which landed in 0.3.27 (the version we're already using across conda-forge).
Hmm, that needs investigating then. The timings make it almost certain to be a similar deadlock/lock-contention issue - it would take a 10x-100x slowdown of BLAS/LAPACK calls to explain the test suite being that slow. There was a similar issue at scipy/scipy#20585 (comment).
There have been no other changes to the Windows thread server code in 0.3.27 since then (as far as I am aware right now), so I think the best option for testing would be to swap in the old version of blas_server_win32.c from OpenMathLib/OpenBLAS@66904f8 (this will probably need adding back the global declaration of …
(Though I would assume that any residual problem in mseminatore's PRs should have come up when testing the fix in PR 4587.)
The test case from OpenMathLib/OpenBLAS#4582 passes for me without any apparent delays, so I don't think it's that.
It also seems to me that the "similar" issue is/was plagued by some kind of infighting between duplicate libraries?
That's an interesting conjecture, though I don't see how that can happen in conda-forge, where we generally take care to unvendor things so there's only one copy - especially for numpy and scipy, where we keep a close eye on the builds.
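(Editor's aside, not from the thread: a minimal sketch for verifying that only a single BLAS implementation is actually loaded into the process, using threadpoolctl.)

```python
# Sketch: list every BLAS/OpenMP library currently loaded into the process,
# so duplicate or shadowed copies would show up as multiple entries.
from threadpoolctl import threadpool_info

for lib in threadpool_info():
    print(lib["internal_api"], lib.get("version"), lib.get("filepath"))
```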
Can you post a link to the logs?
Here you go (plus any other still remaining runs of …
First impression is that it appears to be …
In those PRs you cannot just look at the overall runtime, because we're trying to forcefully make a distinction between CPUs with/without AVX512F/AVX512CD, and we abort if the CPU architecture expectation isn't met (because some past & present failures had different behaviours, and Azure doesn't provide a way to influence which agent you get). However, if that happens, the test suite has zero runtime. On Windows, roughly everything over 60min actually ran the tests, everything under 50min didn't, and in between it depends on circumstances (fast or slow agent).
It's quite unlikely that a single test (or anything less than something widespread/systemic) can blow out the test times like that, especially since many tests are completely unrelated to blas. Tests also have a 20min timeout, which none of them hits.
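(Editor's aside, not from the thread: one way to check whether a slowdown is widespread rather than concentrated in a handful of tests is to ask pytest for the slowest test durations; this is a sketch that assumes the installed scipy's test runner accepts pytest arguments via `extra_argv`, as the numpy/scipy test-runner convention does.)

```python
# Sketch: report the 25 slowest tests so a broad vs. localized slowdown is visible.
# Assumes scipy.test() forwards extra_argv to pytest.
import scipy

scipy.test("fast", extra_argv=["--durations=25"])
```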
Umm, did you ever get around to testing with the pre-0.3.26 blas_server_win32.c as per #160 (comment)?
Thanks for the reminder - I didn't understand the ask at the time, but I've dug a bit into the git history and I think/hope I got it right; I'm currently building an openblas version with this in #162. We might already see a difference in the blas testing on this feedstock (compare "Good news …
Thank you - meanwhile it looks like I might have to revert that PR anyway due to the sunpy thread safety issue that came up on numpy :(
FWIW, this pattern is still occurring in a slightly different form with the current OpenBLAS 0.3.28 - Windows builds have pretty homogeneous times now, but the aarch builds with OpenBLAS are taking substantially longer than with netlib (interestingly, neither implementation is affected on PPC). Both aarch/ppc get emulated through QEMU. We recently upgraded to the newest QEMU 9.1.2, which could play a role. Just thought I'd mention it here. Taking the median run (out of 5 each; for more details see logs), in hours:
(There are also some failures specific to aarch+openblas, so this might just be an unhappy combination for now.)
This could be related to unprofitable forwarding of GEMM calls to GEMV (OpenMathLib/OpenBLAS#4951, already fixed on the develop branch), though I'd not expect that to be worse than the netlib implementation. It is the only major regression known to affect aarch64 targets - absurdly, a lot of time went into chasing down an elusive slowdown on NeoverseV2 that could have been better spent pushing the 0.3.29 release. I don't quite see where you're getting all the Cholesky errors from - what is your QEMU emulating here, ARMV8 or ARMV8SVE?
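(Editor's aside, not from the issue: a rough micro-benchmark sketch of the path OpenMathLib/OpenBLAS#4951 concerns - a GEMM where one matrix dimension is 1, which OpenBLAS may internally forward to GEMV, compared against a plain GEMV call. Sizes and repeat counts are arbitrary choices.)

```python
# Sketch: time a matrix-matrix multiply with a single-column B (the case that
# can be forwarded to GEMV) against the equivalent matrix-vector multiply.
import time
import numpy as np
from scipy.linalg.blas import dgemm, dgemv

n = 2000
a = np.asfortranarray(np.random.rand(n, n))
b_col = np.asfortranarray(np.random.rand(n, 1))  # (n, 1) shape hits the forwarding case
b_vec = np.ascontiguousarray(b_col.ravel())

def bench(fn, *args, repeat=50):
    t0 = time.perf_counter()
    for _ in range(repeat):
        fn(*args)
    return (time.perf_counter() - t0) / repeat

print("dgemm (n x 1):", bench(dgemm, 1.0, a, b_col))
print("dgemv        :", bench(dgemv, 1.0, a, b_vec))
```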
Ok, I think qemu-user is providing "generic" emulation only, so probably ARMV8. I'm currently trying to run the scipy 1.14.1 test suite on an Ampere Altra in the GCC Compile Farm to check if I can find any indication of a recent slowdown.
Did a few scipy_test runs with a pip-installed scipy 1.14.1 on the 64-core Ampere Altra in the GCC Compile Farm (ARMV8 target, Debian Linux, gcc-14.2) and did not see any indication of a marked slowdown with recent builds so far?
Thanks for the testing! Glad to hear that baremetal performance looks much better than the emulated one! FWIW, this is what …
Interesting that it sees SVE, I did not expect that. Could be that this is what is slowing it down, depending on how good the emulation is - whether it is translating into AVX512, or simple serial code. (That is, if you are running the test without forcing OPENBLAS_CORETYPE to ARMV8.)
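(Editor's aside: a sketch of forcing the non-SVE kernels under emulation. `OPENBLAS_CORETYPE` is read when the library initialises, so it has to be set before numpy/scipy are imported; exporting it in the shell before launching Python works just as well.)

```python
# Sketch: force OpenBLAS to use its generic ARMV8 kernels instead of whatever
# core (e.g. an SVE-capable one) it auto-detects under QEMU emulation.
# Must happen before the OpenBLAS shared library is loaded.
import os
os.environ["OPENBLAS_CORETYPE"] = "ARMV8"

import numpy as np          # OpenBLAS gets loaded here
import scipy.linalg as sla  # noqa: F401
```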
I have now run a series of scipy_test runs for all releases since 0.3.23, plus a number of commit hashes from recent development, using my Pixel 8 Pro with TARGET=NEOVERSEV1 as a low-end SVE machine. Runtime turned out to be fairly stable between them, with considerable jitter between individual runs of the same build (despite having the phone on the charger and no other foreground apps open besides Termux).
In the context of the BLAS variant testing for scipy, I noticed that the OpenBLAS runs were much slower. I did some basic timing comparisons based on what pytest reports as the overall runtime of the scipy test suite (the timing also depends quite a bit on whether the agent has AVX512F/AVX512CD or not, and this is random across Azure's fleet, so I'm taking the average across implementation & CPU type):
Overall, OpenBLAS ends up being 3-5x slower than all the other BLAS/LAPACK implementations, which to me is indicative of something going very wrong somewhere.
CC @martin-frbg @rgommers
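(Editor's aside, not in the original issue text: a sketch for recording whether a given CI agent actually supports AVX512F/AVX512CD, so timings can be grouped by CPU capability; the private module path has moved between numpy versions, hence the fallback.)

```python
# Sketch: log AVX512F / AVX512CD support on the current agent via numpy's
# runtime CPU-feature table (numpy._core in numpy 2.x, numpy.core in 1.x).
try:
    from numpy._core._multiarray_umath import __cpu_features__
except ImportError:
    from numpy.core._multiarray_umath import __cpu_features__

print({f: __cpu_features__[f] for f in ("AVX512F", "AVX512CD")})
```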