In the future, could we learn from incorrect classifications #24

Open

jonfroehlich opened this issue Apr 30, 2019 · 1 comment

@jonfroehlich (Member)

If we start serving CV crops to the validation interface, could we start using the incorrect classifications to further improve the model?

@galenweld (Collaborator)

Absolutely. When done offline, this approach is (I gather) called hard-negative mining. The idea is simple: as you train, you keep track of which examples the model gets wrong and train on those examples specifically. Hooking it up to the validation interface would be an online variation of the same idea. One could presumably come up with an arbitrarily complex implementation, but a straightforward approach would be pretty simple: as labels are marked incorrect, we bundle them together and occasionally run a few more epochs of training on these misclassified examples (probably mixing in some portion of the larger dataset as well) to further refine the model's weights. A rough sketch of what that periodic fine-tuning step could look like is below.
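To make the idea concrete, here is a minimal sketch of that periodic fine-tuning step, assuming a PyTorch classifier. Nothing here reflects actual code in this repo: the `finetune_on_hard_negatives` helper, the `CropDataset` wrapper, and the way crops arrive from the validation interface as (image tensor, corrected label) pairs are all hypothetical.

```python
import random
import torch
from torch import nn
from torch.utils.data import DataLoader, Dataset, ConcatDataset, Subset


class CropDataset(Dataset):
    """Wraps a list of (crop_tensor, label) pairs, e.g. CV crops that
    validators flagged as misclassified."""

    def __init__(self, examples):
        self.examples = examples

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        return self.examples[idx]


def finetune_on_hard_negatives(model, hard_negatives, base_dataset,
                               epochs=2, base_sample_size=512,
                               batch_size=32, lr=1e-4, device="cpu"):
    """Run a few extra epochs on the misclassified crops, mixed with a
    random sample of the original training data so the model doesn't
    overfit to the hard negatives alone."""
    base_sample = Subset(
        base_dataset,
        random.sample(range(len(base_dataset)),
                      min(base_sample_size, len(base_dataset))),
    )
    mixed = ConcatDataset([CropDataset(hard_negatives), base_sample])
    loader = DataLoader(mixed, batch_size=batch_size, shuffle=True)

    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for crops, labels in loader:
            crops, labels = crops.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(crops), labels)
            loss.backward()
            optimizer.step()
    return model
```

In practice, the validation interface would just append each crop marked incorrect (with its corrected label) to a buffer, and once the buffer reached some threshold size we would call something like `finetune_on_hard_negatives` on it, then clear the buffer.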
