
move tests to tests folder #211

Closed
wants to merge 17 commits

Conversation

andrewgsavage
Contributor

No description provided.


codecov bot commented Apr 1, 2024

Codecov Report

Attention: Patch coverage is 93.54839%, with 12 lines in your changes missing coverage. Please review.

Project coverage is 95.60%. Comparing base (e9b82f0) to head (925f74a).
Report is 2 commits behind head on master.

Files                          Patch %   Lines
uncertainties/testing.py       88.88%    9 Missing ⚠️
uncertainties/unumpy/core.py    0.00%    3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #211      +/-   ##
==========================================
+ Coverage   95.53%   95.60%   +0.06%     
==========================================
  Files          17       13       -4     
  Lines        2085     1912     -173     
==========================================
- Hits         1992     1828     -164     
+ Misses         93       84       -9     


newville previously approved these changes Apr 1, 2024
Member

@newville left a comment


Thanks!! This looks good to me.

@andrewgsavage
Contributor Author

Thanks!! This looks good to me.

This now shows the changes I had made for removing deprecations and 1to2, which I don't think you saw when you commented. I'll open a PR for 1to2 now, which can be merged before this.

@wshanks
Collaborator

wshanks commented Apr 1, 2024

Did you consider moving the tests out of the package (just a top level tests/ directory next to uncertainties/)? I think that is more standard.

I have occasionally found it useful to have tests in the main package so I can test the package with `pip install package && python -m unittest package.tests` and without separately cloning the repo, but I don't think pytest supports such execution of tests in an installed package. I think it always wants to run from a working directory containing the tests below it.
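For reference, the two layouts under discussion look roughly like this (a sketch; only a few representative files shown):

```text
# Tests inside the package (installed with it, importable as uncertainties.*):
uncertainties/
    __init__.py
    core.py
    test_uncertainties.py

# Tests in a top-level directory next to the package (not installed):
uncertainties/
    __init__.py
    core.py
tests/
    test_uncertainties.py
```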

@newville
Member

newville commented Apr 1, 2024

+1 on moving tests out of the source tree and into a top-level tests folder.
But then the CI scripts need updating too.

@wshanks
Collaborator

wshanks commented Apr 1, 2024

The CI scripts just do a plain pytest call, though it would be nice to specify tests/ so pytest doesn't waste time searching other files (not that there are many here). I mentioned adding a minimal pytest configuration (like this) in #205 (comment) because I think it might also help with that case (VS Code knowing to run pytest).

@newville
Member

newville commented Apr 1, 2024

@wshanks Ah, thanks, yes I see that now, and I completely agree.
I would probably go with the "add pytest config to pyproject.toml", but adding a pytest config file seems okay to me too.

@andrewgsavage
Contributor Author

interesting that it keeps the approval after I make new commits

I moved some functions that could be used by downstream libraries to uncertainties.testing

@wshanks
Collaborator

wshanks commented Apr 1, 2024

interesting that it keeps the approval after I make new commits

I enabled dismissing approvals when new changes are pushed. I have found it to be annoying on other projects where the reviewer approves the spirit of the PR but then has to come back and approve a couple more times because of typos/linting. Here, it seems like we have a responsive enough group for it to be okay.

Collaborator

@wshanks left a comment


This looks good. Besides my inline question, could we add this to pyproject.toml:

```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
```

I don't know that we need to add any additional options at this time.

```python
from . import test_uncertainties

from uncertainties.testing import compare_derivatives, numbers_close
from test_uncertainties import power_special_cases, power_all_cases, power_wrt_ref
```
Collaborator


It does not seem right to import from one test_ file to another. These functions do seem too specialized for uncertainties.testing, though. Could we use one of the options in this StackOverflow question? Maybe the conftest.py and fixture option, or the pythonpath and helpers file option?
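As a sketch of the conftest.py-and-fixture option (the helper names mirror the functions quoted above; the bodies here are placeholders, not the real logic):

```python
# tests/conftest.py -- sketch; the real helper bodies would move here
# from test_uncertainties.py.
import pytest


@pytest.fixture
def power_all_cases():
    """Expose the shared helper to any test that requests it by name."""
    def _power_all_cases(op):
        # Placeholder for the existing helper body.
        raise NotImplementedError
    return _power_all_cases
```

A test in any file could then take `power_all_cases` as an argument, with no cross-module import.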

Contributor


Adding testpaths = ["tests"] is not required for the tests to work, right? It's just a convenience that makes pytest discover tests more quickly? Is my understanding correct?

In the past I've put helper functions needed for testing in an __init__.py file in the tests folder. These objects can then be imported within the test scripts. We could also have a tests/helpers.py module to accomplish the same result.
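A minimal sketch of that tests/helpers.py layout (contents illustrative; the real functions would be moved from test_uncertainties.py, not rewritten):

```python
# tests/helpers.py -- sketch; shared, test-only utilities live here.
def power_all_cases(op):
    """Placeholder for the helper currently defined in test_uncertainties.py."""
    raise NotImplementedError


# In a test module, e.g. tests/test_umath.py, under pytest's default
# import mode this bare import works because tests/ ends up on sys.path:
# from helpers import power_all_cases
```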

Contributor Author


Yea, importing across test files does feel a bit odd. The helpers.py solution is nice and simple.

Copy link
Collaborator

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Adding testpaths = ["tests"] is not required for the tests to work, right? It's just a convenience that makes pytest discover tests more quickly? Is my understanding correct?

Yes, that seems to be all it does functionally. My conjecture was that it might also help IDEs figure out that the project uses pytest as the test runner, because otherwise there is no configuration indicating that pytest should be run, but I haven't tried to confirm that.

I like the move to helpers.py. We might want to add

testpaths = ["tests"]
addopts = [
    "--import-mode=importlib",
]

to pyproject.toml, like the one StackOverflow answer suggests, but we could wait on that. Currently, pytest adds all the test directories to the Python path so you can do `import helpers`, but it has been trying to move away from that toward just importing the test files with importlib, without modifying the Python path.
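Concretely, the difference between the two import modes looks something like this (a sketch, assuming the hypothetical tests/helpers.py layout discussed above):

```python
# tests/test_example.py (hypothetical)

# Under the default "prepend" import mode, pytest inserts the directory
# containing this test module (tests/) into sys.path, so a bare import
# of a sibling module works:
import helpers

# Under --import-mode=importlib, sys.path is left untouched, so the
# helpers must be importable some other way (for example by making
# tests/ a package, or via fixtures in conftest.py).
```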

Contributor Author


Thanks for explaining, I wouldn't have understood why otherwise.

wshanks previously approved these changes Apr 2, 2024
Collaborator

@wshanks left a comment


I think this is good. I had one suggestion about adding more pytest configuration but it's not mandatory.

This commit removes:
- the 1to2 code and documentation,
- the compatibility code that handled obsolete function calls,
- the deprecations in these calls,
- the tests that tested obsolete behaviour.
@wshanks
Collaborator

wshanks commented Apr 2, 2024

Also, should this fold in #210? Otherwise they will create merge conflicts with each other.

@andrewgsavage
Contributor Author

I think this is good. I had one suggestion about adding more pytest configuration but it's not mandatory.

I tried doing this and it worked locally, but I couldn't get it working in CI. I've reverted so this can be merged, and we can do the importlib change in another PR. Ready for merging now.

wshanks previously approved these changes Apr 2, 2024
@andrewgsavage dismissed wshanks’s stale review April 2, 2024 20:23

The merge-base changed after approval.

wshanks previously approved these changes Apr 2, 2024
@wshanks
Collaborator

wshanks commented Apr 2, 2024

It looks good but it won't let me merge at the moment. The diff also doesn't look right in the GitHub web UI but looks right locally. I saw this in another repo recently as well.

@andrewgsavage dismissed wshanks’s stale review April 2, 2024 20:29

The merge-base changed after approval.

@jagerber48
Contributor

Can testing.py be moved into tests/helpers.py?

@wshanks
Collaborator

wshanks commented Apr 2, 2024

Can testing.py be moved into tests/helpers.py?

It could be, but @andrewgsavage described the contents above as "I moved some functions that could be used by downstream libraries to uncertainties.testing".

@andrewgsavage
Contributor Author

andrewgsavage commented Apr 2, 2024

Hold up, looks like I'd deleted the contents of a file.
edit: fixed

@wshanks
Collaborator

wshanks commented Apr 2, 2024

I see all the changes from #214 still in the web UI. Could you squash onto master and force push? Or does it show up correctly for others now?

@jagerber48
Contributor

The testing.py concept seems strange to me.

I moved some functions that could be used by downstream libraries to uncertainties.testing

These functions are only used in the tests, so I think it makes sense for them to be in the test helpers module. Is the idea that downstream users might want to use these functions? If so, then they should be part of the public API and documented, etc. And if they're meant to be part of the public API, I think it is strange for them to be in a module named testing instead of a module named utilities or something.

@newville
Member

newville commented Apr 3, 2024

I am not ever going to be able to keep up with conversations at the pace here.

I do not fully understand the added uncertainties/testing.py file either. Do we want to support users using these tests?

I am working slowly on the docs, but this will take some time -- given my work schedule, into next week.

I deeply despise that Codecov calls a PR unsuccessful if it loses a bit of coverage. Knowing the code coverage is fine. As a GitHub check, Codecov and its reports are warts. I will make a PR to copy the Codecov config from lmfit.
