
Increase test coverage #280

Merged — constantinpape merged 2 commits into dev from predictor-tests on Nov 20, 2023
Conversation

constantinpape (Contributor):

by adding tests for

  • get_sam_model (with and without local weight paths)
  • get_custom_sam_model
  • run_inference_with_iterative_prompting

Tests for the latter two are integrated with the training integration test.
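
As an editorial aside, here is a minimal sketch of what a test for the first of these could look like. It is hypothetical: the exact signature of get_sam_model is assumed (a model_type argument returning a predictor), and running it downloads model weights into the cache.

```python
import unittest


class TestGetSamModel(unittest.TestCase):
    def test_get_sam_model(self):
        # Assumed import path and signature; see micro_sam.util for the
        # actual API. Loads default vit_b weights (from cache if present).
        from micro_sam.util import get_sam_model

        predictor = get_sam_model(model_type="vit_b")
        self.assertIsNotNone(predictor)


if __name__ == "__main__":
    unittest.main()
```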

constantinpape changed the base branch from master to dev on November 18, 2023, 21:34
```diff
@@ -87,7 +86,7 @@ def get_cache_directory() -> None:

     Users can set the MICROSAM_CACHEDIR environment variable for a custom cache directory.
     """
-    default_cache_directory = os.path.expanduser(pooch.os_cache('micro-sam'))
+    default_cache_directory = os.path.expanduser(pooch.os_cache('micro_sam'))
```
constantinpape (Contributor, Author):

The name of the cachedir was different from the previous one in _CHECKPOINT_FOLDER (see above).
I think micro_sam is better than micro-sam (because it's the name of the actual Python package), but if there is a reason to switch to micro-sam instead I would also be fine with that.

GenevieveBuckley (Collaborator):

I don't mind either way, so that sounds fine; I don't have a reason to prefer one. I think it's also a good idea to double-check that there's no remaining micro-sam left anywhere else in the code.

constantinpape (Contributor, Author):

I quickly checked and this was the only location in the code.
(The overall naming is still a bit inconsistent, but we can address that later on.)
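
For readers following along, here is a minimal sketch of the cache-directory lookup under discussion. The pooch.os_cache call and the MICROSAM_CACHEDIR variable come from the diff above; the environment-variable handling is an assumption about how the override likely works, not the verbatim implementation.

```python
import os

import pooch


def get_cache_directory() -> str:
    """Return the cache directory for micro_sam downloads.

    Users can set the MICROSAM_CACHEDIR environment variable
    for a custom cache directory.
    """
    # Default to pooch's per-OS cache location for the package.
    default_cache_directory = os.path.expanduser(pooch.os_cache("micro_sam"))
    # Assumption: an explicitly set MICROSAM_CACHEDIR takes precedence.
    return os.environ.get("MICROSAM_CACHEDIR", default_cache_directory)
```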

GenevieveBuckley (Collaborator) left a comment:

This looks good to me. Up to 41% test coverage is a big increase, which is great 🎉

1. I don't know why codecov.io is saying "Token found by environment variables with length 50 did not pass validation" and not actually uploading the results. Given that, it might be a good idea to append --cov-report term-missing in addition to --cov-report xml:coverage.xml in the pytest options section of pyproject.toml:

```toml
[tool.pytest.ini_options]
addopts = "-v --durations=10 --cov=micro_sam --cov-report xml:coverage.xml --cov-report term-missing"
```
2. Should the tests in test/test_instance_segmentation.py have the "skip" marker removed, or should they stay skipped? No pressure either way.
   Turns out that un-skipping the tests does not actually increase the overall test coverage that much (from 41% to 44%). They're also slow-ish tests, although not prohibitively so (they do turn up in our slowest 10 test durations list):

```
==================================== slowest 10 durations =====================================
63.16s call     test/test_training.py::TestTraining::test_training
16.18s call     test/test_instance_segmentation.py::TestInstanceSegmentation::test_tiled_automatic_mask_generator
5.32s call     test/test_gui.py::test_annotator_2d
4.56s call     test/test_instance_segmentation.py::TestInstanceSegmentation::test_automatic_mask_generator
2.42s call     test/test_instance_segmentation.py::TestInstanceSegmentation::test_tiled_embedding_mask_generator
2.39s setup    test/test_instance_segmentation.py::TestInstanceSegmentation::test_automatic_mask_generator
2.23s call     test/test_util.py::TestUtil::test_tiled_prediction
1.54s call     test/test_sam_annotator/test_widgets.py::test_embedding_widget
1.39s call     test/test_instance_segmentation.py::TestInstanceSegmentation::test_embedding_mask_generator
0.85s call     test/test_prompt_based_segmentation.py::TestPromptBasedSegmentation::test_segment_from_mask_non_square
```
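
For reference, the kind of skip marker being discussed looks roughly like this. This is a hypothetical sketch; the actual decorator and reason string in test/test_instance_segmentation.py may differ.

```python
import unittest


class TestInstanceSegmentation(unittest.TestCase):
    # Removing this decorator re-enables the test; keeping it excludes
    # the slow-ish mask-generator tests from the default test run.
    @unittest.skip("Functionality is slated for replacement.")
    def test_embedding_mask_generator(self):
        ...
```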

constantinpape (Contributor, Author):

> I don't know why codecov.io is saying "Token found by environment variables with length 50 did not pass validation" and not actually uploading the results. Given that, it might be a good idea to append --cov-report term-missing in addition to --cov-report xml:coverage.xml in the pytest options section of pyproject.toml

I also don't know why this error is happening, but it seems to be due to the unreliability of codecov. I have added the term-missing option.

> 2. Should the tests in test/test_instance_segmentation.py have the "skip" marker removed, or should they stay skipped? No pressure either way.
> Turns out that un-skipping the tests does not actually increase the overall test coverage that much (from 41% to 44%). They're also slow-ish tests, although not prohibitively so (they do turn up in our slowest 10 test durations list though).

I want to remove this functionality soonish, so I will leave it on skip, since these tests do increase the test runtime a bit.
(The functionality implements an alternative approach for automatic instance segmentation that turned out not to work as well. We are now working on a much improved version that is included in model fine-tuning, and once we integrate it into micro_sam I will remove this old functionality.)

constantinpape merged commit fe13841 into dev on Nov 20, 2023
3 checks passed
constantinpape deleted the predictor-tests branch on November 20, 2023, 12:31