[ENH] Calibration plot (add performance curves) and a new Calibrated Learner widget #3881
Codecov Report
```
@@            Coverage Diff             @@
##           master    #3881      +/-   ##
==========================================
+ Coverage   84.31%   84.56%   +0.24%
==========================================
  Files         385      390       +5
  Lines       72758    73803    +1045
==========================================
+ Hits        61343    62408    +1065
+ Misses      11415    11395      -20
```
```diff
@@ -47,7 +47,7 @@ def results_for_preview(data_name=""):
     from Orange.classification import \
         LogisticRegressionLearner, SVMLearner, NuSVMLearner

     data = Table(data_name or "ionosphere")
```
I changed this because ionosphere is no longer present. It affects previews of widgets in Evaluation.
```diff
@@ -1783,6 +1783,9 @@ def __init__(self, master, enableDragDrop=False, dragDropCallback=None,
     def sizeHint(self):
         return self.size_hint

+    def minimumSizeHint(self):
+        return self.size_hint
```
I added this because the list view remained too large. This is the only way I managed to reduce its size; setMinimumSizeHint just didn't work...
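The pattern described here can be sketched without Qt: the class name and attribute below are illustrative stand-ins, not Orange's actual widget code. The point is that the view reports the same fixed hint from both `sizeHint` and `minimumSizeHint`, so the layout never grows it past `size_hint`.

```python
# A minimal sketch (no Qt required) of the override pattern described above.
# ListViewSketch and the (width, height) tuple are hypothetical stand-ins
# for the Qt list view and its QSize hint.
class ListViewSketch:
    def __init__(self, size_hint):
        self.size_hint = size_hint  # fixed hint chosen by the widget

    def sizeHint(self):
        # The size the layout would prefer to give the view.
        return self.size_hint

    def minimumSizeHint(self):
        # Overriding this (rather than calling a setter) is what the
        # comment says finally constrained the view's size.
        return self.size_hint


view = ListViewSketch((160, 100))
print(view.sizeHint() == view.minimumSizeHint())  # True: both hints agree
```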
Orange/evaluation/testing.py

```diff
@@ -317,7 +317,7 @@ def split_by_model(self):
         res.probabilities = self.probabilities[(i,), :, :]

         if self.models is not None:
-            res.models = self.models[:, i]
+            res.models = self.models[:, i:i+1]
```
I believe this was a bug because it changed self.models from a 2D to a 1D array. Apparently nobody has needed this so far.
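The dimension change comes from NumPy indexing rules: a scalar index drops the axis, while a one-element slice keeps it. A small illustration (the array here is a made-up stand-in for `self.models`):

```python
import numpy as np

# Hypothetical 2x3 array standing in for self.models (folds x learners).
models = np.arange(6).reshape(2, 3)

# Indexing with a scalar drops the second axis: the result is 1-D.
flat = models[:, 1]
print(flat.shape)   # (2,)

# Slicing with i:i+1 keeps the axis, so the result stays 2-D,
# which is the fix applied in split_by_model above.
kept = models[:, 1:2]
print(kept.shape)   # (2, 1)
```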
```diff
@@ -756,7 +757,8 @@ def __update(self):
                 stratified=self.shuffle_stratified,
                 random_state=rstate)
         elif self.resampling == OWTestLearners.TestOnTrain:
-            sampler = Orange.evaluation.TestOnTrainingData()
+            sampler = Orange.evaluation.TestOnTrainingData(
+                store_models=True)
```
Without these changes, Orange canvas can't produce evaluation results with valid models, which Calibration plot needs in order to output a model. This will also be useful in, say, ROC Analysis, which will now also be able to output a model with some specific threshold.
```diff
@@ -315,7 +315,7 @@ def set_learner(self, learner, key):
             # Removed
             self._invalidate([key])
             del self.learners[key]
-        else:
+        elif learner is not None:
```
We haven't encountered this bug before because (I suppose) no learner widget could fail to produce a learner. Now one can: Calibrated Learner outputs None if it doesn't have a learner on its input.
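The effect of the new guard can be sketched in isolation (the function and dict below are simplified stand-ins for Orange's actual `set_learner` and `self.learners`, not its real code):

```python
# Simplified sketch of the guarded input handler: a None learner that was
# never stored removes nothing and, crucially, is no longer stored either.
learners = {}

def set_learner(learner, key):
    if key in learners and learner is None:
        del learners[key]           # input removed
    elif learner is not None:
        learners[key] = learner     # only store real learners

set_learner(None, "calibrated")     # no-op: None is not stored
set_learner("svm", "calibrated")    # normal input
set_learner(None, "calibrated")     # removal
print(learners)                     # {}
```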
```diff
-    def __init__(self, targetAttr, selectedAttr):
-        super().__init__()
-        self.targetAttr, self.selectedAttr = targetAttr, selectedAttr
+    """Context handler for evaluation results"""
```
This is obviously a new context handler with the same name as an old one that nobody used, probably because it didn't work and it was weird. ROC Analysis and Lift Curves currently don't have a context; they can now have this one.
The potentially problematic commit (but I think it's OK) is f742ff9. Currently, widgets for learners set the default name and the user can change it; the default name was always fixed. In Calibrated Learner, however, the name should depend on the widget's input and settings to be meaningful. The widget could reset the name (as part of its context?), but that would override the user's changes. This commit sets the default name as the edit line's placeholder, so the user sees the default and can enter a different name, the widget can change the default without having to pay attention to any user changes, and if the user then removes their input, they get the widget's default.
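The placeholder scheme can be sketched without Qt. The class and method names below are hypothetical illustrations of the idea, not Orange's or Qt's actual API: the computed default lives separately from user input, so updating one never clobbers the other.

```python
# Hypothetical sketch of the placeholder-based naming described above.
class NameField:
    def __init__(self):
        self.placeholder = ""   # widget-computed default name
        self.user_text = ""     # whatever the user typed, if anything

    def set_default(self, name):
        # The widget may update this freely; user_text is untouched.
        self.placeholder = name

    def effective_name(self):
        # User input wins; an empty field falls back to the default.
        return self.user_text or self.placeholder


field = NameField()
field.set_default("Calibrated SVM")
print(field.effective_name())   # "Calibrated SVM" (the default shows)
field.user_text = "My model"
print(field.effective_name())   # "My model" (user override wins)
field.user_text = ""            # user clears the field...
print(field.effective_name())   # ...and gets the widget's default back
```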
Huh. I limited the model name to 20 characters (elided if longer; I can't add tooltips because Qt's rich text doesn't support them). Not the most elegant and foolproof solution, but I don't have a better one.
Press Report in the Calibrated Learner widget.

I obviously haven't tried it. :) Now it works.
To merge this PR I would suggest:

Merged due to lack of activity.
This PR revamps an old widget, adds a new one, adds some core functionality, and adds a few minor fixes to base classes.

- `Orange.evaluation.performance_curves.Curves` is a class that computes performance curves (ca, f1, sens, spec...) as functions of probabilities,
- `Orange.model.calibration` contains learners and classifiers that set and use thresholds and calibrate probabilities.

Working on this uncovered some hidden bugs and glitches in other parts of the code, hence the large number of commits.
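The idea of performance measures as functions of the decision threshold can be sketched as follows. This is an assumption-laden illustration of the concept, not the actual API of `Curves`:

```python
import numpy as np

def threshold_curves(probs, actual):
    """Sensitivity and specificity at each probability cutoff.

    A sketch of the idea only: sort cases by predicted probability of the
    positive class, then count true/false positives above each cutoff.
    """
    order = np.argsort(probs)[::-1]          # highest probability first
    actual = np.asarray(actual)[order]
    tp = np.cumsum(actual)                   # true positives above each cut
    fp = np.cumsum(1 - actual)               # false positives above each cut
    n_pos = actual.sum()
    n_neg = len(actual) - n_pos
    sens = tp / n_pos                        # recall of the positive class
    spec = (n_neg - fp) / n_neg              # recall of the negative class
    return sens, spec

# Tiny made-up example: 5 cases, 3 of them actually positive.
probs = np.array([0.9, 0.8, 0.6, 0.4, 0.2])
actual = np.array([1, 1, 0, 1, 0])
sens, spec = threshold_curves(probs, actual)
print(sens)  # rises toward 1 as the threshold drops
print(spec)  # falls toward 0 as the threshold drops
```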
Fixes #1267.