fix: percentage fix #206
Conversation
```python
    flagged_metrics / len(model_quality_reference)
)
perc_model_quality["value"] = 1 - (
    flagged_metrics / len(model_quality_current["grouped_metrics"])
)
```
The percentage is the number of "wrong" (flagged) metrics divided by the number of metrics we actually considered.
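A minimal sketch of that rule, assuming `model_quality_current["grouped_metrics"]` holds the metrics that were actually considered (the helper name and the example data are hypothetical):

```python
def model_quality_percentage(model_quality_current: dict, flagged_metrics: int) -> float:
    """Share of considered metrics that are NOT flagged, in [0, 1].

    Dividing by the number of metrics actually considered keeps the
    result from ever exceeding 100%, since flagged_metrics can never
    be larger than that count.
    """
    considered = len(model_quality_current["grouped_metrics"])
    return 1 - flagged_metrics / considered


# 2 flagged out of 8 considered metrics -> 0.75 (i.e. 75%)
current = {"grouped_metrics": {f"m{i}": None for i in range(8)}}
assert model_quality_percentage(current, flagged_metrics=2) == 0.75
```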
```python
)
cumulative_sum += 1 - (
    flagged_metrics
    / len(model_quality_reference["class_metrics"][0]["metrics"])
)
```
For each class, we divide the number of "wrong"/flagged metrics by the number of metrics and add the result to an accumulator (see the sketch after the last snippet below).
```python
)
flagged_metrics = 0
perc_model_quality["value"] = (
    cumulative_sum / len(model_quality_reference["classes"])
)
```
Then the accumulator is divided by the number of classes, giving the average per-class percentage.
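A self-contained sketch of the whole multiclass computation described above, using the keys that appear in the diff; `flagged_per_class` is a hypothetical stand-in for the counting the real loop does, and the per-class reset mirrors the `flagged_metrics = 0` line in the diff:

```python
def multiclass_quality_percentage(model_quality_reference: dict, flagged_per_class: dict) -> float:
    """Average over classes of (1 - flagged / metrics per class), in [0, 1]."""
    # Every class reports the same set of metrics, so the count is taken
    # from the first class entry, as in the diff.
    n_metrics = len(model_quality_reference["class_metrics"][0]["metrics"])
    cumulative_sum = 0.0
    for cls in model_quality_reference["classes"]:
        flagged_metrics = flagged_per_class.get(cls, 0)  # reset per class
        cumulative_sum += 1 - flagged_metrics / n_metrics
    # The accumulator averaged over the number of classes.
    return cumulative_sum / len(model_quality_reference["classes"])


reference = {
    "classes": ["cat", "dog"],
    "class_metrics": [{"metrics": {"precision": 0.9, "recall": 0.8, "f1": 0.85}}],
}
# cat: 1 flagged of 3 -> 2/3; dog: 0 flagged -> 1.0; average = 5/6 ≈ 0.83
print(multiclass_quality_percentage(reference, {"cat": 1}))
```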
Fixed percentages that could go above 100% or were otherwise wrong.