
Organise unit tests to use pytest fixtures #172

Merged: 1 commit merged into main on Oct 13, 2023

Conversation

@SeanBryan51 (Collaborator) commented Oct 6, 2023:

Unit tests contain an excessive amount of repeated setup code, making them harder to maintain and read. Using pytest fixtures to set up each test removes needless code repetition and also prevents state from leaking between test cases.

Organise the tests so that for each function func, we have a test class TestFunc with each method of TestFunc containing a single test (success or failure case).

Use pytest's parametrize() feature for testing different levels of verbosity.

Remove usage of mock_cwd fixture unless it is required for a test to pass.

Fixes #163
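The layout described above can be sketched as follows. This is a minimal illustration only: `sum_positive` and the `values` fixture are hypothetical stand-ins, not code from this repository.

```python
import pytest


def sum_positive(values):
    """Hypothetical function under test (a stand-in, for illustration only)."""
    return sum(v for v in values if v > 0)


class TestSumPositive:
    """One test class per function; one test method per success/failure case."""

    @pytest.fixture()
    def values(self):
        # Shared setup lives in a fixture, so each test method receives a
        # fresh copy and no state leaks between test cases.
        return [1, -2, 3]

    def test_sums_only_positive_values(self, values):
        # Success case.
        assert sum_positive(values) == 4

    def test_empty_input_returns_zero(self):
        # Edge case: no setup needed, so no fixture is requested.
        assert sum_positive([]) == 0
```

Each method exercises exactly one case, and the fixture replaces the setup block that would otherwise be copied into every test.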

@SeanBryan51 SeanBryan51 linked an issue Oct 6, 2023 that may be closed by this pull request
@codecov bot commented Oct 6, 2023:

Codecov Report

Merging #172 (107cc86) into main (f083d3e) will decrease coverage by 1.48%.
Report is 1 commit behind head on main.
The diff coverage is 97.54%.

@@            Coverage Diff             @@
##             main     #172      +/-   ##
==========================================
- Coverage   88.58%   87.10%   -1.48%     
==========================================
  Files          26       25       -1     
  Lines        1603     1613      +10     
==========================================
- Hits         1420     1405      -15     
- Misses        183      208      +25     
Files Coverage Δ
tests/test_benchcab.py 100.00% <100.00%> (ø)
tests/test_config.py 100.00% <100.00%> (ø)
tests/test_fs.py 100.00% <100.00%> (ø)
tests/test_pbs.py 100.00% <100.00%> (ø)
tests/test_subprocess.py 100.00% <100.00%> (ø)
tests/test_workdir.py 100.00% <100.00%> (ø)
tests/conftest.py 92.98% <91.83%> (-7.02%) ⬇️
tests/test_comparison.py 90.90% <90.00%> (-9.10%) ⬇️
tests/test_repository.py 97.33% <97.16%> (-2.67%) ⬇️
tests/test_fluxsite.py 97.27% <97.12%> (-2.73%) ⬇️

... and 2 files with indirect coverage changes

@SeanBryan51 (Collaborator, Author) commented:

I have reorganised a few test files just to see what it looks like. @ccarouge @bschroeter it would be great if I could get your thoughts before I push on further.

@SeanBryan51 SeanBryan51 marked this pull request as ready for review October 6, 2023 01:06
@ccarouge (Collaborator) left a comment:

I think it looks better than before.

But considering the amount of changes, do we have to do all the tests in the same PR? It might make it very very hard to review. Could we do one module per PR or only a couple at a time?

@pytest.fixture
def bitwise_cmp_dir(self, mock_cwd):
    _bitwise_cmp_dir = mock_cwd / internal.FLUXSITE_BITWISE_CMP_DIR
@ccarouge commented:

With the mock_cwd fixture, it means you can use a relative path here (internal.FLUXSITE_BITWISE_CMP_DIR only). It might lighten up the tests. Might be applicable elsewhere.
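To illustrate the point, here is a sketch of why a bare relative path suffices once the working directory is mocked. It assumes `mock_cwd` changes the process working directory to a temporary path; the directory name `"bitwise_cmp"` is a placeholder, not the real value of `internal.FLUXSITE_BITWISE_CMP_DIR`.

```python
import os
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    mock_cwd = Path(tmp).resolve()
    original_cwd = os.getcwd()
    os.chdir(mock_cwd)  # what a mock_cwd fixture is assumed to do
    try:
        # Placeholder for internal.FLUXSITE_BITWISE_CMP_DIR:
        relative = Path("bitwise_cmp")
        relative.mkdir()
        # The bare relative path and the absolute path built from
        # mock_cwd refer to the same location:
        paths_match = relative.resolve() == mock_cwd / "bitwise_cmp"
    finally:
        os.chdir(original_cwd)
```

Since both spellings resolve to the same location, tests can drop the `mock_cwd /` prefix wherever the fixture is already in effect.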

comparison_task.run()
assert f"nccmp -df {file_a} {file_b}" in mock_subprocess_handler.commands

def test_default_standard_output(self, comparison_task, files):
@ccarouge commented:
I wonder if we could instead use the test parameterization to write one output test that runs on all output cases. But this might wait for after the reorganisation?

@SeanBryan51 (Collaborator, Author) replied:

It should be possible to use parameterisation for testing different verbosity levels. I'll look into it.
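A minimal sketch of what that could look like; the `make_message` helper and its verbosity behaviour are hypothetical, for illustration only:

```python
import pytest


def make_message(verbosity):
    """Hypothetical helper that emits more detail at higher verbosity."""
    parts = ["error", "info", "debug"]
    return " ".join(parts[: verbosity + 1])


class TestMakeMessage:
    # One parametrized test replaces several near-identical test methods,
    # one per verbosity level.
    @pytest.mark.parametrize(
        ("verbosity", "expected"),
        [
            (0, "error"),
            (1, "error info"),
            (2, "error info debug"),
        ],
    )
    def test_verbosity_levels(self, verbosity, expected):
        assert make_message(verbosity) == expected
```

pytest reports each parameter set as its own test case, so a failure still pinpoints the exact verbosity level that broke.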

@SeanBryan51 (Collaborator, Author) commented:

> do we have to do all the tests in the same PR? It might make it very very hard to review. Could we do one module per PR or only a couple at a time?

I think I prefer we do the merge into main as a single PR. But we can treat this branch as 'main' for this issue and create other branches that implement small changes and have those merged into this branch?

@ccarouge (Collaborator) commented Oct 6, 2023:

> But we can treat this branch as 'main' for this issue and create other branches that implement small changes and have those merged into this branch?

Yes, that would be better!



@pytest.fixture
def config():
A collaborator commented:
So I am going to approve this section, but bearing in mind that some of the changes I've made to the configuration validation system (which now has the bonus feature of installing a data directory [which contains tests]) have simplified this process significantly.

I'll have a fun merge on my hands after this one...

@SeanBryan51 (Collaborator, Author) commented:

@ccarouge I've rebased this branch off main so that we have the recently merged changes (e.g. linting with ruff). I've added the changes you requested for using pytest's parametrize() and usage of mock_cwd.

@ccarouge (Collaborator) commented:

@SeanBryan51 It seems you asked my review again on this meta-PR. Is that correct?

@SeanBryan51 (Collaborator, Author) commented:

@ccarouge Yes, just checking I've addressed your comments in: 19eff9e

@SeanBryan51 SeanBryan51 force-pushed the 163-tidy-up-organisation-of-unit-tests branch from 9d88427 to 107cc86 Compare October 12, 2023 23:56
@SeanBryan51 SeanBryan51 merged commit 058f465 into main Oct 13, 2023
3 of 4 checks passed
@SeanBryan51 SeanBryan51 deleted the 163-tidy-up-organisation-of-unit-tests branch October 13, 2023 00:10
Linked issue (closed by this pull request): Tidy up organisation of unit tests (#163)
3 participants