Basic support with stand-alone evaluation #65
Hey Taylor, let me try to reproduce your results locally so that I can provide some insight. A few answers in principle, though:
My guess is that there are some peculiarities in your dataset that are leading to these odd results, such as all ensemble members consistently having the same value. But I will try to reproduce locally as a starting point... Cheers, James
Reproduced. The first thing I notice is these two warnings:
In general, it is best either to declare the time-scale of the data in-band (i.e., to use a format that supports this, such as the CSV format) or to declare the time-scale of the data within the evaluation declaration itself. In the absence of either, an assumption will be made, as indicated in the warnings. Such a declaration may look like this (or whatever the data represents):
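For instance, a time-scale declaration might look like the following sketch. This assumes the YAML declaration schema; the key names and the `mean`/`24 hours` values are placeholders to be checked against the WRES wiki and matched to what the data actually represents:

```yaml
# Hypothetical sketch only: declares that the observed values are daily means.
# Verify key names and allowed values against the WRES declaration documentation.
observed:
  sources: HOPC1.QME.csv
  time_scale:
    function: mean
    period: 24
    unit: hours
```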
This is followed up with warnings on the calculation of every single pool along these lines:
Next, and I think this is the crux of the problem, I see only one ensemble member in the paired data. Why? Well, it seems that the predictions do not use the expected keyword for the ensemble column in the header; see the format requirements here: https://github.com/NOAA-OWP/wres/wiki/Format-Requirements-for-CSV-Files
In other words, the software is interpreting every single forecast as a single-valued forecast because it is ignoring the ensemble information. The software is lenient about the presence of columns whose headers do not match the expected keywords, but it will not use the information in those columns.
In short, I would start by fixing the keyword for the ensemble column in the header. Let me know if something above doesn't make sense. Cheers, James
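As a quick sanity check before re-running the evaluation, you can count the distinct ensemble members per valid time in the CSV yourself. This is a minimal sketch; the column names `value_date` and `ensemble_name` are assumptions and should be matched to the keywords on the wiki page linked above:

```python
# Sketch: count distinct ensemble members per valid time in a WRES-style CSV.
# Column names below are assumed; match them to the documented CSV keywords.
import csv
import io
from collections import defaultdict


def members_per_time(csv_text, time_col="value_date", member_col="ensemble_name"):
    """Return {valid_time: number of distinct ensemble members} for a CSV string."""
    members = defaultdict(set)
    for row in csv.DictReader(io.StringIO(csv_text)):
        members[row[time_col]].add(row[member_col])
    return {time: len(names) for time, names in members.items()}


sample = """value_date,ensemble_name,value
2020-01-01T00:00:00Z,1961,10.0
2020-01-01T00:00:00Z,1962,11.5
2020-01-01T00:00:00Z,1963,9.8
"""
print(members_per_time(sample))  # a healthy ensemble shows more than one member per time
```

If this reports only one member per time for your data, the ensemble column is probably not being keyed on the header you expect.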
Hi there!
I'm trying to mimic a typical HEFS evaluation, but some of my outputs seem to lack resolution. For example, the reliability, rank histogram, and ROC diagram plots only show two or maybe three points. Is there a way to increase the number of bins/points for these outputs?
Also, the cross-pair functionality does not seem to work when I run it. Can you help me better understand what this is doing? Pairing the forecasts with the observations appears to be working.
Note that I've commented out the baseline forecast information (and associated skill scores) because the ESP data were too large to attach here. Also, I've lowered the minimum_sample_size setting to 1 for testing, since I'm only using one year of forecast data. Thank you!
HOPC1.QME.csv
HOPC1.HEFS.tgz
wres-test-outputs.zip