Backport #40412 to 14.0.2
#1369
Conversation
@conda-forge-admin, please rerender
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (…)
…nda-forge-pinning 2024.04.18.15.24.04
Nice!
@h-vetinari This might need a CI restart too
@conda-forge-admin, please rerender
…nda-forge-pinning 2024.04.19.19.21.20
Hi! This is the friendly conda-forge automerge bot! Commits were made to this PR after the …
Had tried restarting the failing job yesterday evening, but it still fails. Unfortunately the log is sparse on details, except for the job stopping at some point. Typically this is a sign of resource limitations. Guessing this is either too many threads or too little space on the VM. Options to try would be...
More details on 2 & 3 in this doc, which would involve updating arrow-cpp-feedstock/conda-forge.yml (lines 1 to 2, at commit 8639672).
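For reference, a minimal sketch of the kind of conda-forge.yml tweak being floated here, assuming the goal is to free disk space and add swap on the Azure agent; these are conda-smithy's Azure settings as I understand them, not changes made in this PR:

```yaml
# Hypothetical conda-forge.yml additions (illustrative only, not part of this PR).
azure:
  free_disk_space: true      # reclaim extra space on the Azure agent before the build
  settings_linux:
    swapfile_size: "10GiB"   # add swap in case memory, rather than disk, is the bottleneck
```

A conda-forge.yml change like this only takes effect after the feedstock is rerendered (e.g. via `@conda-forge-admin, please rerender`).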
The aarch builds have gotten incredibly flaky recently, but were fine before. Most of the issues happen in the tests, where we get errors fetching something from the web, presumably because the QEMU emulation is too slow. The jobs just straight-up dying happens much more rarely, when we get a particularly slow agent.
You're literally pointing out how we're doing that already...? There's no "more" either - it's on or off. I doubt that reducing the threads helps (it will just make the build even slower and more prone to timeouts), and I doubt that memory is the issue, but happy to be proven wrong. 🙃
Gotcha, thanks for the context. Well, there are finer-grained settings that do more. They are not turned on by default, as they can be a bit slower, and in a couple of cases builds needed the components removed. That said, no need to spend time doing that if we don't think it will help. Just thinking up options.
This has become a reproducible failure at … for both v14 & v15 (though not v12 & v13).
It's a GCC 12 problem. The CUDA build passes because it uses GCC 10. Downgrading the non-CUDA job to GCC 10 would be possible, but I've also tested that GCC 13 works and prefer that. I'll finish this off with the libgoogle migration, and will come back to this afterwards.
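A minimal sketch of how a feedstock can move a build off the globally pinned compiler, assuming the usual conda-forge convention of overriding it in recipe/conda_build_config.yaml (key names and selectors follow that convention; this is not necessarily what was committed here):

```yaml
# Hypothetical recipe/conda_build_config.yaml override moving Linux builds
# from GCC 12 to GCC 13 (illustrative only).
c_compiler_version:      # [linux]
  - 13                   # [linux]
cxx_compiler_version:    # [linux]
  - 13                   # [linux]
```

In a feedstock like this one, the CUDA variants are zipped with specific GCC versions, so a real override would need to be conditioned so it only affects the non-CUDA jobs.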
Thanks @h-vetinari!
This PR backports a Python memory-leak patch to 14.0.2: apache/arrow#40412
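As a rough sketch of how such a backport is typically carried in a conda recipe, assuming it is applied as a source patch in recipe/meta.yaml (the patch filename below is hypothetical, and the actual source section of this feedstock differs):

```yaml
# Hypothetical excerpt of recipe/meta.yaml; only the patches entry is relevant here.
source:
  url: ...   # unchanged upstream apache-arrow 14.0.2 tarball
  patches:
    - patches/0001-backport-apache-arrow-40412.patch   # hypothetical filename
```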
Checklist
- [ ] Reset the build number to 0 (if the version changed)
- [ ] Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)