
Add some unit tests #4

Draft · wants to merge 2 commits into master

Conversation

TrueDoctor

I just added some random tests and expm is already failing:

running 8 tests
test tests::exp_of_random_matrix ... FAILED
test tests::exp_of_doubled_unit ... FAILED
test tests::verify_pade_13 ... ok
test tests::verify_pade_3 ... ok
test tests::verify_pade_5 ... ok
test tests::verify_pade_7 ... ok
test tests::verify_pade_9 ... ok
test tests::exp_of_unit ... ok

failures:

---- tests::exp_of_random_matrix stdout ----
thread 'tests::exp_of_random_matrix' panicked at 'assertion failed: `(left == right)`
  left: `4`,
 right: `7`', src/lib.rs:243:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.

---- tests::exp_of_doubled_unit stdout ----
thread 'tests::exp_of_doubled_unit' panicked at 'assertion failed: `(left == right)`
  left: `4`,
 right: `7`', src/lib.rs:243:9


failures:
    tests::exp_of_doubled_unit
    tests::exp_of_random_matrix

test result: FAILED. 6 passed; 2 failed; 0 ignored; 0 measured; 0 filtered out

It might be beneficial to extend the test coverage further.
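
For reference, a test along these lines looks roughly like the sketch below. It assumes the crate exposes an `expm` function taking an `ndarray::Array2<f64>` by reference; the actual signature in this repository may differ.

```rust
use ndarray::Array2;

#[test]
fn exp_of_doubled_unit() {
    // exp(2·I) is diagonal with entries e^2 ≈ 7.3890560989...
    let a: Array2<f64> = Array2::eye(2) * 2.0;
    let e = expm(&a); // hypothetical signature: fn expm(&Array2<f64>) -> Array2<f64>
    let expected = 2.0_f64.exp();
    for i in 0..2 {
        assert!((e[(i, i)] - expected).abs() < 1e-12);
        assert!(e[(i, 1 - i)].abs() < 1e-12); // off-diagonal entries stay ~0
    }
}
```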

@SuperFluffy (Owner)

Demanding that a bunch of numbers differ by at most 1 ulp is a pretty harsh condition.

So the question is not whether the unit tests are failing because the result is wrong (although it might very well be wrong!). The question is whether the result is within the error bounds of the algorithm.

If you want to investigate the result, could you look at the raw output and also compare to what scipy's expm gives you?
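
To illustrate the difference, here is a sketch using the `approx` crate's predicate macros (the same crate that `assert_relative_eq!` below comes from); the values are made up to show the scale involved:

```rust
use approx::{relative_eq, ulps_eq};

fn main() {
    let reference = 7.389056098930650_f64; // e^2, e.g. taken from Wolfram Alpha
    let computed = 7.389056098930654_f64;  // hypothetical expm output, a few ulps away

    // Within an absolute epsilon of 1e-8: passes comfortably.
    assert!(relative_eq!(computed, reference, epsilon = 1e-8));

    // Within 1 ulp: fails -- a few ulps of drift is expected from a
    // multi-step algorithm like scaling-and-squaring with Padé approximants.
    assert!(!ulps_eq!(computed, reference, max_ulps = 1));
}
```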

@TrueDoctor (Author) commented Mar 5, 2020

> Demanding that a bunch of numbers differ by at most 1 ulp is a pretty harsh condition.

Both tests fail due to the assertion.

> So the question is not whether the unit tests are failing because the result is wrong (although it might very well be wrong!). The question is whether the result is within the error bounds of the algorithm.

When the assertion is disabled (and reasonable error bounds are chosen), both tests pass.

> If you want to investigate the result, could you look at the raw output and also compare to what scipy's expm gives you?

The results I've used are from Wolfram Alpha, but I could add some scipy tests as well. It might be worthwhile to automate test generation.

Commits:
  • this is just for temporary testing
  • disable assert
@TrueDoctor (Author)

  • I have commented out the assert for now.
  • This commit is just for testing and currently requires nightly.

You have push access, should you want to add anything or change something. You might have a better intuition about when things might break, so feel free to add some more tests.
```rust
python_testing!(
    simple, f64, vec![1.0, 0.0, 1.0, 0.0],
    double, f64, vec![2.0, 0.0, 2.0, 0.0],
    random, f64, vec![1.02, -3.2, 4.2, 100.0]
);
```
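
For the record, a macro like that could expand roughly as sketched below. This is a hypothetical reconstruction, not the actual definition in this branch; `expm` and `python_reference` stand in for whatever the real implementation uses.

```rust
// Hypothetical sketch: expand each (name, scalar type, input) triple into a
// #[test] that builds a 2x2 matrix, runs expm on it, and compares every
// entry against a precomputed reference (e.g. scipy.linalg.expm output).
macro_rules! python_testing {
    ( $( $name:ident, $t:ty, $input:expr ),* $(,)? ) => {
        $(
            #[test]
            fn $name() {
                let data: Vec<$t> = $input;
                let a = ndarray::Array2::from_shape_vec((2, 2), data).unwrap();
                let result = expm(&a); // hypothetical expm call
                // python_reference is a hypothetical lookup of scipy's result.
                let reference = python_reference::<$t>(stringify!($name));
                for (r, p) in result.iter().zip(reference.iter()) {
                    approx::assert_relative_eq!(*r, *p, epsilon = 1e-8);
                }
            }
        )*
    };
}
```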

@TrueDoctor (Author)

I have now added some complex tests on the generic-scalar branch. Some of the tests are failing:

running 15 tests
test tests::complex_exp ... ok
test tests::complex_exp_py ... ok
test tests::complex_random_py ... FAILED
test tests::double_py_f32 ... FAILED
test tests::double_py ... ok
test tests::random_py ... FAILED
test tests::simple_py ... ok
test tests::verify_pade_13 ... ok
test tests::simple_py_f32 ... FAILED
test tests::verify_pade_3 ... ok
test tests::verify_pade_5 ... ok
test tests::verify_pade_7 ... ok
test tests::verify_pade_9 ... ok
test tests::exp_of_unit ... ok
test tests::exp_of_doubled_unit ... ok

failures:

---- tests::complex_random_py stdout ----
thread 'tests::complex_random_py' panicked at 'assert_relative_eq!((b1 - b2).abs(), 0.0, epsilon = 0.00000001)

   left  = 0.0008432883514622831
   right = 0.0

', src/lib.rs:735:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

---- tests::double_py_f32 stdout ----
thread 'tests::double_py_f32' panicked at 'assert_relative_eq!((b1 - b2).abs(), 0.0, epsilon = 0.00000001)

   left  = 0.000011444092
   right = 0.0

', src/lib.rs:735:5

---- tests::random_py stdout ----
thread 'tests::random_py' panicked at 'assert_relative_eq!((b1 - b2).abs(), 0.0, epsilon = 0.00000001)

   left  = 295408072890019770000000000000000000000.0
   right = 0.0

', src/lib.rs:735:5

---- tests::simple_py_f32 stdout ----
thread 'tests::simple_py_f32' panicked at 'assert_relative_eq!((b1 - b2).abs(), 0.0, epsilon = 0.00000001)

   left  = 0.00000047683716
   right = 0.0

', src/lib.rs:735:5


failures:
   tests::complex_random_py
   tests::double_py_f32
   tests::random_py
   tests::simple_py_f32

I don't have case-specific handling for f32 yet; I should use a different epsilon there. But random_py, for example, seems to be way off, so the assert might be necessary after all 😅
As I'm just a second-year computer science student, understanding the whole paper is a bit much, but it would be greatly appreciated if you have a bit of time/motivation to look into it.
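
One way to handle the f32 case might be to derive the tolerance from the scalar type instead of hard-coding 1e-8. A sketch, assuming the tests are generic over `num_traits::Float` (the `tolerance` helper is hypothetical, and the factor of 100 would need tuning against the algorithm's actual error bounds):

```rust
use num_traits::Float;

// Hypothetical helper: a comparison tolerance scaled from the scalar type's
// machine epsilon, so f32 tests get a correspondingly looser bound than f64.
fn tolerance<T: Float>(magnitude: T) -> T {
    // Allow ~100 ulps of drift around the magnitude of the values compared.
    T::epsilon() * T::from(100.0).unwrap() * magnitude.abs().max(T::one())
}

fn main() {
    // Roughly 2.2e-14 for f64 and 1.2e-5 for f32 at unit magnitude; the f32
    // failures in the log above are on that scale, while random_py's
    // 2.95e38 is not explained by precision at all.
    println!("{:e}", tolerance(1.0_f64));
    println!("{:e}", tolerance(1.0_f32));
}
```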
