Add probability calibration to the classifier outputs #6

Open
sabinthomas opened this issue Apr 11, 2014 · 1 comment

Comments

@sabinthomas

classify() needs a threshold mechanism for handling errors. An error is the condition where the labels and probabilities are inconclusive and no match can be obtained.

One way around this is to compute a priorProbabilities classification and then compare every getClassification result against that prior; results that do not clearly exceed it would be treated as inconclusive (see the sketch below).
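A minimal sketch of that idea, assuming a classifier object with a getClassifications-style method returning (label, probability) pairs; the function and parameter names here are illustrative, not part of the library's API:

```python
from collections import Counter

def classify_with_rejection(classifier, features, train_labels, margin=0.05):
    """Return a label only when the top posterior clearly beats its prior.

    `classifier.get_classifications(features)` is assumed to return
    (label, probability) pairs; all names here are hypothetical.
    """
    # Prior probability of each label, estimated from label frequencies.
    counts = Counter(train_labels)
    total = sum(counts.values())
    priors = {label: count / total for label, count in counts.items()}

    # Posterior estimates for this input, best first.
    ranked = sorted(classifier.get_classifications(features),
                    key=lambda pair: pair[1], reverse=True)
    best_label, best_prob = ranked[0]

    # If the winning label is not meaningfully above its prior, the
    # result is inconclusive: signal "no match" instead of guessing.
    if best_prob < priors.get(best_label, 0.0) + margin:
        return None
    return best_label
```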

@DrDub
Contributor

DrDub commented Nov 3, 2016

What you describe seems more in line with application code; it is beyond what a classifier itself is or does.

But producing calibrated confidence levels for predictions is a direction ML packages are moving towards: http://scikit-learn.org/stable/modules/calibration.html
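For reference, this is what the scikit-learn approach linked above looks like in practice; the toy documents and labels below are made up purely for illustration:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny, made-up training set just to show the calibration workflow.
docs = ["good movie", "great film", "fine movie", "bad movie", "terrible film", "awful movie"]
labels = ["pos", "pos", "pos", "neg", "neg", "neg"]

X = CountVectorizer().fit_transform(docs)

# Wrap a Naive Bayes classifier so its probability outputs are calibrated
# (Platt scaling via method="sigmoid"); cv=3 because the toy set is tiny.
calibrated = CalibratedClassifierCV(MultinomialNB(), method="sigmoid", cv=3)
calibrated.fit(X, labels)

# Calibrated class probabilities instead of the raw Naive Bayes scores,
# which downstream code could then threshold to reject unclear cases.
print(calibrated.predict_proba(X))
```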

I'm retitling this and labeling it a feature enhancement.

@DrDub DrDub changed the title classifier.classify() should return error when no match found Add probability calibration to the classifier outputs Nov 3, 2016