grab-bag issues on the metrics #178
Labels: documentation, good first issue, metric, question
The default goodness-of-fit measure for pairwise comparison of PDFs is `ad`, which is not the most standard one, so it should probably be swapped for `ks` or, better yet, changed to not have a default value and instead require the user to specify it, since the closest we have to a standard for this in astrophysics is the RMSE.

Relatedly, did we have a reason not to include RMSE and KLD among the acceptable goodness-of-fit measures?
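For concreteness, here is a minimal sketch (not qp's actual API; the grid, toy PDFs, and variable names are all made up for illustration) of the two measures proposed above, computed between two PDFs evaluated on a shared grid:

```python
import numpy as np

# Toy stand-ins for two gridded PDFs to be compared
grid = np.linspace(-5.0, 5.0, 1001)

def gaussian(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2.0 * np.pi))

p = gaussian(grid, 0.0, 1.0)  # "true" PDF
q = gaussian(grid, 0.1, 1.1)  # approximating PDF

# Root-mean-square error between the evaluated PDF values
rmse = np.sqrt(np.mean((p - q) ** 2))

# Kullback-Leibler divergence D(p || q), approximated by a Riemann sum
dx = grid[1] - grid[0]
kld = np.sum(p * np.log(p / q)) * dx
```

Both reduce a pair of PDFs to a single number, so either would slot into the same place in the interface that `ad` and `ks` occupy now.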
Unrelatedly, the top-level docstring for the Brier score is specific to astronomy, but the rest of qp's documentation is not (which makes sense, since it started in RAIL!), so we should probably update that. This might come up in other places in the code.
In any case, I'm a little confused about the organization of what's in each module. It's great that the `__init__` importing masks it from the user, but as a developer, it's not clear where one should put a new metric, nor where to check whether there's already a function that shares some of the same underlying code. I propose a rearrangement to group functions by the kinds of inputs and outputs: metrics between two ensembles, metrics between an ensemble and a vector of the same length, shortcut/helper functions (i.e. the array metrics and a few others beginning with `_`), and ways to reduce PDFs to point estimates (assuming I haven't missed any functions that don't fit into any of those categories).

Also, it would be nice if anything else in that last category, especially the RBPE, could be directly accessible as methods of `pdf_gen`, like the mean, median, and mode are, since that's how such "metrics" are used. Did we have a reason for not doing that (or did I miss where it's already that way)?