Cycle over data time #750
Comments
For basic difference plots, provide a list of the models you want to compare (e.g. [1, 2, 3]) and then take all pairwise combinations of them. We can arbitrarily decide the comparison order based on model number, e.g. m1 - m2. We can then use the surface field table to ensure we are comparing the same fields. A table of toggles with models on both axes as a way to do it? See the sketch below.
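A minimal sketch of the pairwise combination idea, assuming models are identified by integer IDs; `difference_pairs` is a hypothetical helper for illustration, not an existing CSET function:

```python
from itertools import combinations

def difference_pairs(model_ids: list[int]) -> list[tuple[int, int]]:
    """Return every pairwise (minuend, subtrahend) combination of models.

    The comparison order is decided by model number, so for [1, 2, 3]
    this yields (1, 2), (1, 3) and (2, 3), i.e. m1 - m2, m1 - m3, m2 - m3.
    """
    return list(combinations(sorted(model_ids), 2))

# Example: the user requests models [3, 1, 2] for comparison.
for minuend, subtrahend in difference_pairs([3, 1, 2]):
    print(f"m{minuend} - m{subtrahend}")
```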
I've now done all of the tasks I identified as part of this work. I'm a bit tempted to try implementing a difference plot to confirm that it is now possible, but that should land as a later PR. Still to do on this PR:
What problem does your feature request solve?
Currently we cycle over validity time. This has several drawbacks:
Describe the solution you'd like
We should switch to cycling over data time (AKA forecast initiation time). This will allow us to process multiple case studies in the same CSET run. We should also run larger tasks, where an entire forecast is processed in one go.
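As a rough illustration of what cycling over data time could look like, here is a minimal sketch that groups input files by their forecast initiation time, so that each group corresponds to one cycle and a whole forecast can be processed by a single, larger task. The filename layout and the `group_by_data_time` helper are assumptions for illustration, not part of CSET:

```python
from collections import defaultdict
from pathlib import Path
import re

# Hypothetical filename layout: forecast files named like
# "model_YYYYMMDDTHHMMZ_000.nc", where the timestamp is the data
# (forecast initiation) time. Adjust the pattern for real inputs.
DATA_TIME_PATTERN = re.compile(r"_(\d{8}T\d{4}Z)_")

def group_by_data_time(input_dir: Path) -> dict[str, list[Path]]:
    """Group forecast files by their data (initiation) time.

    Each group then becomes one cycle: an entire forecast handled in
    one go, and multiple case studies simply become multiple groups
    within the same run.
    """
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in sorted(input_dir.glob("*.nc")):
        match = DATA_TIME_PATTERN.search(path.name)
        if match:
            groups[match.group(1)].append(path)
    return dict(groups)
```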
Describe alternatives you've considered