fixed numpy.prod failing tests #21858
Conversation
Thanks for contributing to Ivy! 😊👏
Hey @ShreyanshBardia, thanks for working on this! It looks like in the CI there are some failing tests for …, and I don't see any test results for …
Hey @Pchatain, I'm not sure why this is happening; when I ran the tests locally earlier, they were passing and everything seemed to work as expected.
Hey @ShreyanshBardia, to better understand: are all 3 of those tests passing on your end? When I check out your PR and run the tests locally, e.g. …
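(The exact command was elided from the comment above; a hypothetical equivalent, invoked from the repo root, might look like the following. The test path and keyword filter are assumptions, not taken from the original comment.)

```python
# Hypothetical local run of the numpy-frontend tests; the path and the
# "-k" filter below are guesses at what the elided command contained.
import pytest

pytest.main(["ivy_tests/test_ivy/test_frontends/test_numpy", "-k", "prod", "-q"])
```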
@Pchatain, that's a little weird. I re-ran the tests again and they seem to pass successfully, with only the torch backend failing. I'll update my branch once and run again to check if that's causing the issue.
This reverts commit 77a385d.
Revert "Merge branch '…' of https://github.com/ShreyanshBardia/ivy into numpy-failing-tests". This reverts commit 68545b1, reversing changes made to 77a385d.
Revert "Revert "Merge branch '…' of https://github.com/ShreyanshBardia/ivy into numpy-failing-tests"". This reverts commit 87536f0.
This reverts commit 223d9ae.
This reverts commit 1c45c35.
Hey @Pchatain, can you please check now? I'm not sure why the tests aren't running correctly on my end; if you have any idea, please do let me know. Also, I noticed a bug related to extreme float values, which I think will cause tests to fail.
Is this expected, or should I look into it and try to solve it? Kindly guide.
Hey, so if you click on Checks at the top you can see the CI results. I don't think this is expected, unless you observe the same behavior with torch itself, for example. Could you investigate it a little bit?
Hi @Pchatain, sorry for the late reply. I don't think this is an issue with ivy's abstraction; the bug seems to exist in the tensorflow framework itself.
One way I can think of solving this: when casting from float to int in the tensorflow backend, we should use numpy first and then convert the result to tensorflow. What's your opinion? Also, I'm not sure if everyone can re-run the tests, because I don't see any such feature; it might require privileges to be granted.
Sorry about the late reply, I mistakenly thought I had replied. Yes, it's entirely likely that I have to click on "run tests"; let me know when you want to do this. Thanks for looking into this. After reading more, it seems this is indeed a small quirk of the way floating points are handled. This is actually rather strange, considering https://www.tensorflow.org/api_docs/python/tf/cast suggests that cast should round down. Here is what I'm wondering: …
So that works, but using float32 as an intermediary value immediately rounds it to 2.0.
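(The code in the comment above was elided; a minimal sketch of the quirk being discussed might look like the following, where the exact input value is an assumption.)

```python
import tensorflow as tf

# A float64 value just below 2.0 that float32 cannot represent exactly.
x = tf.constant(1.9999999999, dtype=tf.float64)

# Casting straight to int discards the fractional part, as the docs suggest.
print(tf.cast(x, tf.int32))   # tf.Tensor(1, ...)

# Going through float32 first rounds the value up to 2.0, so the final
# integer cast yields 2 instead of 1.
y = tf.cast(x, tf.float32)
print(y)                      # tf.Tensor(2.0, ...)
print(tf.cast(y, tf.int32))   # tf.Tensor(2, ...)
```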
@CatB1t, could I get a second opinion on this? I'm not sure if the floating-point mismatch requires a change in the way we test functions, and if so, what change to make.
Hi @Pchatain, thanks for looking into this more. From what I understand, I don't think this has anything to do with hypothesis; I believe we should update the array (tensor) creation for …
The issue lies with using …
I think something like this should solve the issue: …
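(The proposed snippet was elided from the comment above; a sketch of the idea, i.e. going through numpy's float64 precision instead of letting tensorflow cast through float32, might look like this. The helper name is hypothetical.)

```python
import numpy as np
import tensorflow as tf

def as_int_tensor(x, dtype=tf.int32):
    # Hypothetical helper: cast through numpy (which keeps float64
    # precision) before handing the data to tensorflow, instead of
    # building a float32 tensor first and casting that to int.
    return tf.constant(np.asarray(x, dtype=np.float64).astype(dtype.as_numpy_dtype))

print(as_int_tensor(1.9999999999))                    # tf.Tensor(1, ...), correct
print(tf.cast(tf.constant(1.9999999999), tf.int32))   # tf.Tensor(2, ...), via float32
```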
Hey @ShreyanshBardia, I think that makes sense. The logic is basically: if it fails, instead of directly casting we should take it to something high-precision like float64 and then cast it. I'm thinking they probably made the choice to default to float32 because it leads to better performance? In any case, barring skipping the particular extreme-float test cases in hypothesis, I think it should maybe convert to complex128 and then convert down to dtype, since that will preserve the most information. If the tests pass with float64, feel free to do it exactly as you stated. Indeed, the problem is definitely not with hypothesis itself, but the test cases are generated by hypothesis, I think, so hypothesis will produce these extreme values that cause the failure.
Yes, it seems so; both torch and tensorflow default to …
Currently all tests are passing, both …
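(The specifics above were elided; as a quick illustration of the default-dtype point, both frameworks infer float32 when building a tensor from a Python float.)

```python
import torch
import tensorflow as tf

# Both frameworks pick float32 as the default floating-point dtype
# for tensors created from Python floats.
print(torch.tensor(1.5).dtype)   # torch.float32
print(tf.constant(1.5).dtype)    # <dtype: 'float32'>
```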
Awesome @ShreyanshBardia, and thanks for your patience. Running the CI checks again and then will merge if those work. |
Hey @ShreyanshBardia , just some minor linting errors. Could you run pre-commit and accept the changes it suggests? Then I'll merge. |
ivy-gardener |
Hey @Pchatain, it looks like some other changes were made by the bot too; should I remove those?
Yeah, reject the changes on the conflicted files you didn't edit.
Hey @Pchatain, I have removed the changes made by the bot.
Co-authored-by: ivy-branch <[email protected]>
closes #20418
closes #20415
closes #13732
closes #8209