🔒 🤖 CI Update lock files for array-api CI build(s) 🔒 🤖 #29373
Conversation
Force-pushed from 3403db7 to 68f5f9b.
I started the CUDA CI for this PR: https://github.com/scikit-learn/scikit-learn/actions/runs/9740561717 (it failed because it couldn't check out the commit; investigating that now). Now https://github.com/scikit-learn/scikit-learn/actions/runs/9740690111/job/26878320047 is running.

I was expecting to see an automatically triggered run from the same bot that made this PR. There is https://github.com/scikit-learn/scikit-learn/actions/runs/9738462572, but that seems to have run on a4ebe19, which is the latest commit on

edit: you have to paste the full commit hash
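The note about pasting the full commit hash suggests the GPU job is a `workflow_dispatch` workflow that takes a commit as an input. Purely as a hedged sketch, the same kind of trigger can be issued through the GitHub REST API; the workflow file name `cuda-ci.yml` and the input name `commit` below are assumptions for illustration, not taken from the repository.

```python
# Hedged sketch: trigger a workflow_dispatch run via the GitHub REST API.
# Requires `requests` and a token with workflow permissions in GITHUB_TOKEN.
import os

import requests

# Must be the full 40-character commit hash, not an abbreviated one.
FULL_SHA = "0" * 40  # placeholder

resp = requests.post(
    "https://api.github.com/repos/scikit-learn/scikit-learn"
    "/actions/workflows/cuda-ci.yml/dispatches",  # workflow file name assumed
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    json={"ref": "main", "inputs": {"commit": FULL_SHA}},  # input name assumed
    timeout=30,
)
resp.raise_for_status()  # the API returns 204 No Content on success
```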
There seem to be genuine errors. It would probably be good to look at #29276 first, since there seem to be some Array API issues there as well.
Some (or even all?) of the errors in the CI should be fixed by #29336. I think if we click the "Update branch" button at the bottom of this PR we will get those changes from
I clicked the "Update branch" button and triggered the GPU workflow, let's see 😉 https://github.com/scikit-learn/scikit-learn/actions/runs/9758399657/job/26932855916
You may want to double-check that I triggered the GPU workflow correctly, since this is the first time I've done it 😉
Amongst other errors, there seem to be the same errors
Force-pushed from 2ed608b to 8d60544.
I'm a bit puzzled by this. Trying to reproduce it on a machine with a GPU, I used

If anyone has an idea, let me know
FWIW I am able to reproduce on a machine with a GPU from the lock-file (but not on Colab for some reason...). The two values in each of the assertions that fail:
Since there are errors with torch on CPU, I guess this is also reproducible without a GPU?

Edit: yes, I can use the lock-file, run on a machine without a GPU, and I get the failure for torch CPU. One of the stack traces:
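The stack trace itself was not captured above. As a hedged sketch of the kind of CPU-only reproduction described here, the snippet below exercises PCA's array API path with a CPU torch tensor; it assumes `torch` and `array-api-compat` are installed (as in the lock-file environment), and targeting `PCA.get_covariance` is an assumption based on the snippet discussed further down, not on the CI logs.

```python
# Hedged reproduction sketch (not the actual failing test).
import numpy as np
import torch

from sklearn import config_context
from sklearn.decomposition import PCA

X = np.random.RandomState(0).standard_normal((100, 20)).astype(np.float32)

with config_context(array_api_dispatch=True):
    # Fit on a CPU torch tensor so scikit-learn dispatches to the torch namespace.
    pca = PCA(n_components=5).fit(torch.asarray(X))
    # get_covariance() goes through the xp.where branch discussed below.
    cov = pca.get_covariance()

print(type(cov), cov.dtype)
```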
I fixed my issue with
Ah, the good old setuptools days 😉
The problem might be here. With the following:

```python
exp_var_diff = xp.where(
    exp_var > self.noise_variance_,
    exp_var_diff,
    xp.asarray(0.0, device=device(exp_var), dtype=exp_var.dtype),
)
```

the test passes again. I think it makes sense. Interesting that we didn't find this earlier. And another reason why it would be good for
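The explicit `xp.asarray(0.0, device=device(exp_var), dtype=exp_var.dtype)` keeps the fallback branch of `where` on the same device and dtype as `exp_var`, rather than relying on whatever promotion the namespace applies to a bare Python float. A minimal standalone sketch of the same pattern, assuming `array-api-compat` is installed (its `array_namespace` and `device` helpers stand in here for scikit-learn's internal `get_namespace`/`device` utilities):

```python
# Standalone illustration of the pattern above (assumes array-api-compat).
import numpy as np
from array_api_compat import array_namespace, device

exp_var = np.asarray([2.0, 0.5, 0.1], dtype=np.float32)
noise_variance = 0.4

xp = array_namespace(exp_var)
# Build the scalar on the same device and with the same dtype as exp_var.
zero = xp.asarray(0.0, device=device(exp_var), dtype=exp_var.dtype)

exp_var_diff = xp.where(exp_var > noise_variance, exp_var - noise_variance, zero)
print(exp_var_diff, exp_var_diff.dtype)  # dtype stays float32
```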
Force-pushed from 710654e to da8dee7.
Closing since the issue has been fixed in #29488.
Update lock files.
Note: If the CI tasks fail, create a new branch based on this PR and add the required fixes to that branch.