Add some unit tests #4
base: master
Conversation
Demanding that a bunch of numbers differ by at most 1 ulp is a pretty harsh condition. So the question is not whether the unit tests are failing because the result is wrong (although it might very well be wrong!), but whether the result is within the error bounds of the algorithm. If you want to investigate the result, could you look at the raw output and also compare it to what scipy's expm returns?
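For context, a 1-ulp bound allows the two values to be at most one representable float apart. A minimal sketch of what such a distance check can look like (the helper is illustrative, not the crate's actual comparison):

```rust
/// Number of representable f64 values between `a` and `b`.
/// Illustrative only; the crate's actual comparison may differ.
fn ulp_distance(a: f64, b: f64) -> u64 {
    // Map the bit patterns onto a monotonically ordered integer line,
    // so that adjacent floats differ by exactly 1. NaNs are not handled.
    fn ordered(x: f64) -> i64 {
        let bits = x.to_bits() as i64;
        if bits < 0 { i64::MIN - bits } else { bits }
    }
    ordered(a).abs_diff(ordered(b))
}

fn main() {
    // Two adjacent doubles pass a 1-ulp bound, but the accumulated
    // rounding error of a multi-step algorithm usually will not.
    assert_eq!(ulp_distance(1.0, 1.0 + f64::EPSILON), 1);
}
```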
Both tests fail due to the assertion.
When the assertion is disabled (and reasonable error bounds are chosen), both tests pass.
The reference results I've used are from Wolfram Alpha, but I could add some scipy tests as well.
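A sketch of what a looser check could look like; `approx_eq`, the bounds, and the reference value are illustrative assumptions, not the crate's actual test code:

```rust
/// Hypothetical replacement for a strict ulp assert: accept results
/// within a combined absolute/relative bound.
fn approx_eq(got: f64, want: f64, rel: f64, abs: f64) -> bool {
    let diff = (got - want).abs();
    diff <= abs || diff <= rel * want.abs()
}

fn main() {
    // Reference value for e^1, as printed by Wolfram Alpha.
    let want = 2.718281828459045_f64;
    let got = 1.0_f64.exp(); // stand-in for the library's result
    assert!(approx_eq(got, want, 1e-12, 1e-14));
}
```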
This is just for temporary testing; I've disabled the assert.
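If the check only needs to be out of the way temporarily, one idiomatic Rust alternative (not necessarily what was done here) is demoting it to a `debug_assert!`, which `--release` builds compile out:

```rust
fn assert_close(got: f64, want: f64, tol: f64) {
    // Active under `cargo test`, compiled out under `cargo test --release`.
    debug_assert!((got - want).abs() <= tol, "got {got}, want {want}");
}

fn main() {
    assert_close(0.1 + 0.2, 0.3, 1e-12);
}
```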
I have now added some complex tests on the
I don't have case-specific handling for f32 yet; I should use a different epsilon there, but random_py, for example, seems to be way off.
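One way to avoid hand-written f32 special cases is to derive the tolerance from the type's machine epsilon. A sketch using the num-traits crate; the scale factor is a placeholder that would come from the algorithm's error analysis:

```rust
use num_traits::Float;

/// Hypothetical per-type tolerance: machine epsilon scaled by a factor
/// reflecting the algorithm's expected error growth.
fn tolerance<T: Float>(scale: T) -> T {
    T::epsilon() * scale
}

fn main() {
    // f32 tests automatically get a much looser bound than f64 tests.
    println!("f32 tol = {:e}", tolerance(16.0_f32)); // ~1.9e-6
    println!("f64 tol = {:e}", tolerance(16.0_f64)); // ~3.6e-15
}
```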
I just added a random test, and expm is already failing.
It might be beneficial to extend the test coverage further.
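Randomized tests don't necessarily need external reference values either: identities like expm(A) · expm(−A) = I can be checked directly. A self-contained sketch using nalgebra, with a plain Taylor series standing in for the crate's expm (an assumption, not its actual algorithm):

```rust
use nalgebra::DMatrix;
use rand::Rng;

/// Reference matrix exponential via a truncated Taylor series; adequate
/// for the small, well-scaled random matrices generated below.
fn expm_taylor(a: &DMatrix<f64>) -> DMatrix<f64> {
    let n = a.nrows();
    let mut term = DMatrix::<f64>::identity(n, n);
    let mut sum = term.clone();
    for k in 1..30 {
        term = (&term * a) / k as f64;
        sum += &term;
    }
    sum
}

fn main() {
    let mut rng = rand::thread_rng();
    let n = 4;
    // Entries in [-1, 1] keep the norm small so the series converges fast.
    let a = DMatrix::from_fn(n, n, |_, _| rng.gen_range(-1.0..1.0));
    let prod = expm_taylor(&a) * expm_taylor(&(-&a));
    let err = (&prod - DMatrix::identity(n, n)).norm();
    // The tolerance is a judgment call; ideally it is tied to the
    // algorithm's error bound rather than picked ad hoc.
    assert!(err < 1e-10, "expm(A) * expm(-A) deviates from I by {err}");
}
```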