grab-bag issues on the metrics #178

Open · aimalz opened this issue Jun 8, 2023 · 0 comments

Labels: documentation (Improvements or additions to documentation), good first issue (Good for newcomers), metric (new/upgraded metric needed), question (Further information is requested)

aimalz (Collaborator) commented Jun 8, 2023

The default goodness-of-fit measure for pairwise comparison of PDFs is `ad`, which is not the most standard choice. It should probably be swapped for `ks`, or, better yet, the default should be dropped entirely so that the user must specify a measure, since the closest thing we have to a standard for this in astrophysics is the RMSE.

Relatedly, did we have a reason not to include RMSE and KLD among the acceptable goodness-of-fit measures?
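To make the suggestion concrete, here's a minimal sketch of what dropping the default might look like; the function name, parameter name, and metric registry below are illustrative stand-ins, not qp's actual API:

```python
# Illustrative sketch only: these names are hypothetical, not qp's actual API.
# The point is (1) no default statistic, and (2) RMSE and KLD accepted
# alongside the existing options.

ACCEPTED_FIT_METRICS = ("ad", "ks", "cvm", "rmse", "kld")  # hypothetical registry

def calculate_goodness_of_fit(estimate, reference, fit_metric):
    """Pairwise goodness-of-fit between two ensembles of PDFs.

    `fit_metric` has no default, so the caller must choose a statistic explicitly.
    """
    if fit_metric not in ACCEPTED_FIT_METRICS:
        raise ValueError(
            f"fit_metric must be one of {ACCEPTED_FIT_METRICS}, got {fit_metric!r}"
        )
    # Dispatch to the per-metric implementation would go here.
    raise NotImplementedError
```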

Unrelatedly, the top-level docstring for the Brier score is specific to astronomy, but the rest of qp's documentation is not (which makes sense, since it started in RAIL!), so we should probably update it. The same issue may come up elsewhere in the code.

In any case, I'm a little confused about the organization of what's in each module. The `import *` in `__init__` nicely masks this from the user, but as a developer it's not clear where one should put a new metric, nor where to check whether an existing function already shares some of the same underlying code. I propose rearranging the functions by the kinds of inputs and outputs they take (see the sketch after this list):

- metrics between two ensembles
- metrics between an ensemble and a vector of the same length
- shortcut/helper functions (the array metrics and a few others beginning with `_`)
- ways to reduce PDFs to point estimates

(assuming I haven't missed any functions that don't fit into one of those categories).
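One possible layout, purely for illustration (none of these file names exist in qp today):

```python
# Hypothetical module layout sketching the proposed grouping.
#
# qp/metrics/
#   pairwise.py         # metrics between two ensembles (AD, KS, CvM, RMSE, KLD, ...)
#   point_vs_pdf.py     # metrics between an ensemble and a same-length vector (e.g. Brier)
#   point_estimates.py  # reductions of PDFs to point estimates (mean, median, mode, RBPE)
#   _utils.py           # shortcut/helper functions (array metrics and other _-prefixed helpers)
```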

Also, it would be nice if anything else in that last category, especially the RBPE, could be directly accessible as a method of `pdf_gen`, as the mean, median, and mode are, since that's how such "metrics" are actually used. Did we have a reason for not doing that (or did I miss where it's already the case)?
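For illustration, here's how the existing point-estimate methods are used today, and what an RBPE method might look like; the `rbpe` method name and its keyword argument are hypothetical:

```python
import numpy as np
import qp

# A toy ensemble of two Gaussians, using qp's scipy-backed norm parameterization.
ens = qp.Ensemble(qp.stats.norm,
                  data=dict(loc=np.array([[0.5], [1.0]]),
                            scale=np.array([[0.2], [0.3]])))

grid = np.linspace(0.0, 2.0, 201)
means = ens.mean()      # point estimates already exposed as methods
medians = ens.median()
modes = ens.mode(grid)

# Hypothetical: if the RBPE were exposed the same way, usage could read
#   rbpes = ens.rbpe(limits=(0.0, 2.0))
# instead of reaching into the metrics module.
```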

@aimalz added the documentation, good first issue, and question labels on Jun 8, 2023
@aimalz self-assigned this on Jun 8, 2023
@aimalz changed the title from "Extremely low-priority issues on the metrics" to "grab-bag issues on the metrics" on Jul 13, 2023
@aimalz added the metric label on Aug 31, 2023