Increase test coverage #280
```diff
@@ -87,7 +86,7 @@ def get_cache_directory() -> None:

     Users can set the MICROSAM_CACHEDIR environment variable for a custom cache directory.
     """
-    default_cache_directory = os.path.expanduser(pooch.os_cache('micro-sam'))
+    default_cache_directory = os.path.expanduser(pooch.os_cache('micro_sam'))
```
The name of the cachedir was different from the previous one in `_CHECKPOINT_FOLDER` (see above). I think `micro_sam` is better than `micro-sam` (because it's the name of the actual Python package), but if there is a reason to switch to `micro-sam` instead I would also be fine with it.
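For context, the environment-variable override mentioned in the docstring typically looks like the sketch below. This is a hedged reconstruction, not the actual micro_sam code: the `_DEFAULT_CACHE` path is a stand-in for `pooch.os_cache('micro_sam')`, used here so the example stays stdlib-only.

```python
import os

# Stand-in for pooch.os_cache("micro_sam"); the real code resolves an
# OS-specific cache location via pooch.
_DEFAULT_CACHE = os.path.expanduser(os.path.join("~", ".cache", "micro_sam"))


def get_cache_directory() -> str:
    """Return the cache directory, honoring the MICROSAM_CACHEDIR
    environment variable when it is set."""
    return os.environ.get("MICROSAM_CACHEDIR", _DEFAULT_CACHE)
```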
I don't mind either, so that sounds fine. I don't have any reason for it. I think it's also a good idea to double-check that there's no remaining `micro-sam` left anywhere else in the code.
I quickly checked and this was the only location in the code.
(It's still a bit inconsistent in overall naming, but we can address that later on.)
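A repository-wide check like the one described above can also be scripted. This is an illustrative helper only (the function name and the `.py`-only filter are my choices, not part of the PR):

```python
import os


def find_string(root: str, needle: str = "micro-sam"):
    """Return (path, line_number, line) triples for every Python source
    line under root that still contains the needle."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as fh:
                for lineno, line in enumerate(fh, start=1):
                    if needle in line:
                        hits.append((path, lineno, line.rstrip()))
    return hits
```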
This looks good to me. Up to 41% test coverage is a big increase, which is great 🎉
- I don't know why codecov.io is saying "Token found by environment variables with length 50 did not pass validation" and not actually uploading the results. Given that, it might be a good idea to append `--cov-report term-missing` in addition to `--cov-report xml:coverage.xml` in the pytest options section of `pyproject.toml`:

  ```toml
  [tool.pytest.ini_options]
  addopts = "-v --durations=10 --cov=micro_sam --cov-report xml:coverage.xml --cov-report term-missing"
  ```
- Should the tests in `test/test_instance_segmentation.py` have the "skip" marker removed, or should they stay skipped? No pressure either way.
Turns out that un-skipping the test does not actually increase the overall test coverage that much (from 41% to 44%). They're also slow-ish tests, although not prohibitively so (they do turn up in our slowest 10 test duration list though).
```
==================================== slowest 10 durations =====================================
63.16s call test/test_training.py::TestTraining::test_training
16.18s call test/test_instance_segmentation.py::TestInstanceSegmentation::test_tiled_automatic_mask_generator
5.32s call test/test_gui.py::test_annotator_2d
4.56s call test/test_instance_segmentation.py::TestInstanceSegmentation::test_automatic_mask_generator
2.42s call test/test_instance_segmentation.py::TestInstanceSegmentation::test_tiled_embedding_mask_generator
2.39s setup test/test_instance_segmentation.py::TestInstanceSegmentation::test_automatic_mask_generator
2.23s call test/test_util.py::TestUtil::test_tiled_prediction
1.54s call test/test_sam_annotator/test_widgets.py::test_embedding_widget
1.39s call test/test_instance_segmentation.py::TestInstanceSegmentation::test_embedding_mask_generator
0.85s call test/test_prompt_based_segmentation.py::TestPromptBasedSegmentation::test_segment_from_mask_non_square
```
I also don't know why this error is happening, but it seems to be due to the unreliability of codecov. I have added the
I want to remove this functionality soonish, so I will leave them on skip, since these tests do increase the test runtime a bit.
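For reference, the "skip" marker being discussed is the standard `unittest` decorator (the test class names in the durations output above suggest `unittest.TestCase`-style tests); a minimal sketch, with a reason string of my own paraphrasing the comment:

```python
import unittest


class TestInstanceSegmentation(unittest.TestCase):
    @unittest.skip("slow, and this functionality will be removed soonish")
    def test_embedding_mask_generator(self):
        # The real test body lives in test/test_instance_segmentation.py.
        pass
```

Removing the decorator line is all it takes to un-skip the test again.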
by adding tests for:
- `get_sam_model` (with and without local weight paths)
- `get_custom_sam_model`
- `run_inference_with_iterative_prompting`

Tests for the latter two are integrated with the training integration test.
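To illustrate the two code paths a "with and without local weight paths" test has to cover, here is a hedged stand-in; `load_model` and its signature are hypothetical, sketching the shape of a loader like `get_sam_model` rather than its actual API:

```python
import os
from typing import Optional


def load_model(model_type: str, checkpoint_path: Optional[str] = None) -> str:
    """Hypothetical stand-in for a model loader: prefer an explicit local
    weight path when given, otherwise fall back to the cached default."""
    if checkpoint_path is not None:
        if not os.path.exists(checkpoint_path):
            raise FileNotFoundError(checkpoint_path)
        return f"{model_type}:{checkpoint_path}"
    return f"{model_type}:cache"
```

A test would then exercise both branches: once with `checkpoint_path=None` and once with a temporary file standing in for local weights.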