
Fix seviri_l2_grib end_time property bug. #2943

Merged
merged 2 commits into pytroll:main on Oct 22, 2024

Conversation

strandgren
Collaborator

As described in #2942, the changes in #2717 led to the end_time property not working for SEVIRI data: we now use the sub-satellite point to determine the scanning mode, and thus the temporal resolution used to derive end_time from start_time (prior to #2717 it was always set to 15 minutes, which is incorrect for RSS data). However, the self._ssp_lon attribute is only available after a dataset and its metadata have been loaded, so an AttributeError was raised during Scene initialization. This was not covered by the unit tests, and a human error meant the bug also went unnoticed when testing with real data.

This PR adapts the end_time property to return None if no dataset has been loaded, meaning that _ssp_lon is not available. As soon as a dataset has been loaded, both scn.end_time and scn[dataset].attrs['end_time'] hold valid and correct datetime.datetime objects. The unit test has been adapted to cover this bug.
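For illustration, here is a minimal sketch of the adapted property. This is not the actual satpy code: the class name follows the reader's handler, start_time handling is simplified, and the mode check (RSS at 9.5 deg E with a 5-minute repeat cycle vs. the 15-minute full-disc service) is an assumption about how the scanning mode is derived from the sub-satellite point.

import datetime


class EUML2GribFileHandler:
    """Simplified sketch of the fixed end_time property (not the real handler)."""

    def __init__(self, start_time):
        self.start_time = start_time
        # _ssp_lon is deliberately not set here; in the real handler it only
        # becomes available once a dataset (GRIB message) has been loaded.

    @property
    def end_time(self):
        # The fix: return None instead of raising AttributeError while no
        # dataset has been loaded and _ssp_lon is therefore unknown.
        if not hasattr(self, "_ssp_lon"):
            return None
        # Assumed mode check: the rapid scanning service (RSS) sits at
        # 9.5 deg E and repeats every 5 minutes; the full-disc service
        # repeats every 15 minutes.
        minutes = 5 if self._ssp_lon == 9.5 else 15
        return self.start_time + datetime.timedelta(minutes=minutes)

With this, end_time behaves as described above:

handler = EUML2GribFileHandler(datetime.datetime(2020, 10, 20, 19, 45))
assert handler.end_time is None  # nothing loaded yet
handler._ssp_lon = 0.0           # stands in for loading a dataset
assert handler.end_time == datetime.datetime(2020, 10, 20, 20, 0)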

@pnuu
Member

pnuu commented Oct 21, 2024

I have a feeling that either pytroll/trollimage#179 or pytroll/trollimage#181 causes the test failure; I created a new release today with these two included. Although there's no new conda-forge package, which I would have thought the CI would be using 🤔

@pnuu
Member

pnuu commented Oct 21, 2024

Trollimage seems to be coming from PyPI: https://github.com/pytroll/satpy/blob/main/continuous_integration/environment.yaml#L64

@gerritholl could you perhaps have a look at the failing tests, which seem to involve paletted images and enhancement history?

  • satpy/tests/enhancement_tests/test_enhancements.py::TestEnhancementStretch::test_palettize
  • satpy/tests/enhancement_tests/test_viirs.py::TestVIIRSEnhancement::test_viirs

@pnuu
Member

pnuu commented Oct 21, 2024

I think it should be as simple as updating the expected results.

@pnuu
Member

pnuu commented Oct 21, 2024

Looks like the enhancement test function is used in many places other than the two failing tests, so this needs a more involved solution. I'll have a go.

@pnuu
Member

pnuu commented Oct 21, 2024

A fix is attempted in #2944

@pnuu
Member

pnuu commented Oct 21, 2024

@strandgren the fix to the failing tests has been merged, so please merge main into this PR.


codecov bot commented Oct 21, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 96.09%. Comparing base (b9a2f4d) to head (9c1075c).
Report is 4 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2943   +/-   ##
=======================================
  Coverage   96.09%   96.09%           
=======================================
  Files         377      377           
  Lines       55070    55074    +4     
=======================================
+ Hits        52920    52924    +4     
  Misses       2150     2150           
Flag           | Coverage Δ
behaviourtests | 3.94% <0.00%> (-0.01%) ⬇️
unittests      | 96.19% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.


@coveralls

Pull Request Test Coverage Report for Build 11438867601

Details

  • 7 of 7 (100.0%) changed or added relevant lines in 2 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 96.202%

Totals Coverage Status
Change from base Build 11438666819: 0.0%
Covered Lines: 53168
Relevant Lines: 55267

💛 - Coveralls

@strandgren
Collaborator Author

@pnuu Thanks, now the tests passed :)

Member

@mraspaud mraspaud left a comment


I think I can live with this solution :)
However, I want to point out that it feels strange to have a property return different results depending on whether the file has already been opened or not. Ideally, the call to end_time should trigger reading just what is needed from the file to give a sensible result.
But since I don't know the format, I think we can merge this.

@strandgren
Collaborator Author

strandgren commented Oct 21, 2024

I think I can live with this solution :) However, I want to point out that it feels strange to have a property return different results depending on whether the file has already been opened or not. Ideally, the call to end_time should trigger reading just what is needed from the file to give a sensible result. But since I don't know the format, I think we can merge this.

Yes, I kind of agree, but the metadata are provided per GRIB message (i.e., per dataset), meaning that at Scene initialization it would be ambiguous which message/dataset to read the ssp_lon from. It will most likely be the same for all datasets, and we could read it from the first message, but I still didn't like that solution, also because it involves some duplicate reading.
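For comparison, the rejected alternative (reading ssp_lon lazily from the first GRIB message) might look roughly like the sketch below, using the eccodes Python bindings; the read_first_ssp_lon helper and the GRIB key name are assumptions for illustration, not the reader's actual code.

from eccodes import codes_get, codes_grib_new_from_file, codes_release


def read_first_ssp_lon(grib_path):
    # Rejected design: assumes every message shares the same sub-satellite
    # point, and duplicates reading that dataset loading performs again later.
    with open(grib_path, "rb") as fobj:
        gid = codes_grib_new_from_file(fobj)
        if gid is None:  # no messages in the file
            return None
        try:
            # "longitudeOfSubSatellitePoint" is an assumed key name here.
            return codes_get(gid, "longitudeOfSubSatellitePoint")
        finally:
            codes_release(gid)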

Comment on lines +136 to +145
# Check that end_time is None for SEVIRI before the dataset has been loaded
assert reader.end_time is None

common_checks(ec_, reader, mock_file, dataset_id)

# Check that end_time is now a valid datetime.datetime object after the dataset has been loaded
assert reader.end_time == datetime.datetime(year=2020, month=10, day=20,
                                            hour=19, minute=50, second=0)


Member


Could this test case be split into several specific tests?
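For illustration, a split along those lines could look roughly like this, assuming hypothetical reader and loaded_reader pytest fixtures that wrap the existing mock setup:

import datetime


def test_end_time_is_none_before_loading(reader):
    # end_time must be None until a dataset has been loaded.
    assert reader.end_time is None


def test_end_time_after_loading(loaded_reader):
    # Once a dataset is loaded, end_time is a concrete datetime.
    assert loaded_reader.end_time == datetime.datetime(2020, 10, 20, 19, 50)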

Collaborator Author


Hm, I've been trying to refactor this into something better, but so far without satisfying results. I previously created an issue (#2932) to work on removing the mocking; do you think we could move this test optimization to the PR that deals with the mocking (since that will lead to quite some refactoring anyway)?

Member


Yeah, sounds good! Feel free to merge.

Collaborator Author


Thanks! Nah, you do it, you're the reviewer ;)

Member

@pnuu pnuu left a comment


LGTM

@pnuu pnuu merged commit 3a65934 into pytroll:main Oct 22, 2024
18 checks passed
Development

Successfully merging this pull request may close these issues.

0.52.0 breaks seviri_l2_grib-reader with 'EUML2GribFileHandler' object has no attribute '_ssp_lon'