Move tests to tests folder #216

Merged · 20 commits into lmfit:master · Apr 5, 2024

Conversation

andrewgsavage (Contributor)

No description provided.

@andrewgsavage (Contributor, Author)

This doesn't have the 1to2 commits showing; maybe this will work?

@wshanks (Collaborator) commented Apr 3, 2024

The diff here looks good to me. Additionally, I diffed testing.py and helpers.py against the old test_uncertainties.py and confirmed they are clean relocations of code from there.

I think the one outstanding point here before merge is to resolve @jagerber48 and @newville's reservations about uncertainties.testing. To me, this module mirrors testing modules in other packages like numpy.testing and pandas.testing. I think ufloats_close and arrays_close could be useful to a downstream library writing unit tests on its usage of Uncertainties. numbers_close is generic to floats and maybe something that should not be public. I am unsure about compare_derivatives: I suppose a downstream library could also want to implement a custom function and test it, but that seems more niche than just doing calculations and using ufloats_close to check the results.

Also, I agree with @jagerber48 that if we are making the functions public, we should document them. I am not sure what is in scope for #203, but either there or afterwards we could perhaps enable the apidoc Sphinx extension. The current long-form documentation is good, but the automated API docs make it easy to mark a function/class for inclusion, which can then serve as the marker for whether it is public or not.

Alternatively, we could just put them in helpers.py for now and come back to making them public later. If we document and keep them now, I think arrays_close should be renamed uarrays_close.
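
As an illustration of the downstream use case described above, a unit test in another package might look roughly like this. This is only a sketch: the uncertainties.testing import path and a ufloats_close(x, y, tolerance=...) signature are assumptions based on the discussion, not a settled API.

```python
# Sketch of a downstream test using the proposed uncertainties.testing module.
# The import path and the ufloats_close signature are assumptions, not final API.
from uncertainties import ufloat, umath
from uncertainties.testing import ufloats_close  # hypothetical public location

def test_trig_identity_propagates_no_uncertainty():
    x = ufloat(1.0, 0.1)
    result = umath.sin(x) ** 2 + umath.cos(x) ** 2
    # sin(x)**2 + cos(x)**2 is exactly 1, so the correlated uncertainties cancel.
    assert ufloats_close(result, ufloat(1.0, 0.0), tolerance=1e-8)
```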

@andrewgsavage (Contributor, Author)

Thanks, the merging issues distracted me from those comments. I agree with renaming to uarrays_close.

> The testing.py concept seems strange to me.

> I moved some functions that could be used by downstream libraries to uncertainties.testing

> These functions are only used in the tests, so I think it makes sense for them to be in the test helpers module. Is the idea that downstream users might want to use these functions? If so, then they should be part of the public API and they need to be documented, etc. And if they're meant to be part of the public API, I think it is strange for them to be in a module named testing instead of a module named utilities or something.

> I do not fully understand the added uncertainties/testing.py file either. Do we want to support users using these tests?

Yes, these functions are only used by the tests, but they are of value to downstream libraries, and those libraries can't import from tests/helpers.py. If I were to create a separate module uncertainties-pandas for #184, it would need its own assert_equal, assert_close, ... functions for testing uncertainties, and they would likely be copy-pastes of what's in helpers.py.

I expect these functions would only be used by advanced Python users who can recognise what ufloats_close does, so I expect little if any support would be needed on using the functions; and if users do find issues with the functions, that's usually a good thing. Documentation-wise, I don't think much is needed, as the functions already have docstrings. A note in the docs pointing users to them would be helpful, but its absence shouldn't prevent the functions from being exposed to users, IMO.

From a downstream user perspective, I'd like them included in the public API so I can import them without them moving in a future version.
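
For context, the kind of helper being discussed is small. A rough sketch of what a downstream package would otherwise end up copy-pasting, using only the public nominal_value and std_dev attributes, might look like this (illustrative only, not the actual code in tests/helpers.py):

```python
# Illustrative sketch only; not the actual tests/helpers.py implementation.
# It relies only on the public nominal_value / std_dev attributes of ufloats.
import math

def ufloats_close(x, y, tolerance=1e-6):
    """Return True if two ufloats agree in nominal value and standard deviation."""
    return (
        math.isclose(x.nominal_value, y.nominal_value,
                     rel_tol=tolerance, abs_tol=tolerance)
        and math.isclose(x.std_dev, y.std_dev,
                         rel_tol=tolerance, abs_tol=tolerance)
    )
```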


codecov bot commented Apr 3, 2024

Codecov Report

Attention: Patch coverage is 95.47739%, with 9 lines in your changes missing coverage. Please review.

Project coverage is 95.59%. Comparing base (57e2a20) to head (2e4c8a6).

Files Patch % Lines
tests/helpers.py 94.19% 9 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #216      +/-   ##
==========================================
- Coverage   95.70%   95.59%   -0.12%     
==========================================
  Files          11       12       +1     
  Lines        1908     1905       -3     
==========================================
- Hits         1826     1821       -5     
- Misses         82       84       +2     


@jagerber48 (Contributor)

Ok I see. I hadn't used numpy.testing or anything like that before.

Still, for now,

> We could just put them in helpers.py for now and come back to making them public later.

has my vote. I can see that it could be useful to expose these methods. But, having had a look through testing.py, if those things are to be part of the public API then I have some opinions about changing some names and behaviors. Rather than hash that out now, what about putting them in tests/helpers.py now and opening an issue to expose uncertainties.testing for these purposes in a future release?

This strategy minimizes adding new work now. Also exposing such a testing.py wasn't on the roadmap we had so far for this release.

@newville (Member) commented Apr 4, 2024

+1 on "We could just put them in helpers.py for now and come back to making them public later."

@wshanks (Collaborator) commented Apr 5, 2024

This looks good to me now, but I think the imports removed in #208 are restored here.

Review comment on tests/helpers.py (outdated, resolved)
Co-authored-by: Will Shanks <[email protected]>
@newville (Member) left a review comment

Thanks very much! This looks good to me.

I think we can ignore the coverage check marked as "unsuccessful". You rearranged all the testing code, and the coverage of those tests is still > 95%. Let's call that "excellent".

The tool that does a simple count of tested lines of code (to be clear, a helpful thing to know) injects itself into CI processes and claims that any loss of coverage means "tests FAIL". That's just gamifying the process, which is something we should refuse.

@newville (Member) commented Apr 5, 2024

I think this can be merged. Any objections?

@wshanks (Collaborator) commented Apr 5, 2024

It looks good to me now as well.

We had a little discussion of coverage here. I agree it is annoying that it puts a red X on the PR for a 0.3% change in coverage. To me, coverage passing/failing a PR only makes sense if we are diligent about using `no cover` markers for lines we don't want covered and require 100% coverage of the remaining lines.
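
For reference, the exclusion marker being referred to is coverage.py's `# pragma: no cover` comment, which excludes a line or block from the coverage count. A minimal example (the helper itself is hypothetical):

```python
def _debug_dump(obj):  # pragma: no cover
    # Diagnostic-only helper; deliberately excluded from coverage.
    print(type(obj).__name__, vars(obj))
```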

@andrewgsavage merged commit ea1d664 into lmfit:master on Apr 5, 2024
16 of 18 checks passed