
Optimize score loss computation #77

Open
NicolasHug opened this issue Dec 17, 2018 · 0 comments
Labels
perf Computational performance issue or improvement

Comments

@NicolasHug (Collaborator)

Slightly related to #76

This is the second bullet point from #69 (comment)

When early stopping (or simply score monitoring) is done on the training data with the loss, we should reuse the raw_predictions array from fit() instead of re-computing it.

Results would differ slightly from the current implementation, because we currently compute the loss on a subset of the training data rather than on the whole training set.

A further optimization would be, instead of calling loss_.__call__(), to compute the loss w.r.t. each sample inside e.g. loss_.update_gradients_and_hessians, reusing the intermediate values already needed for the gradients and hessians. Overhead would be minimal this way.
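A minimal sketch of the idea, assuming a least-squares loss. The class and method names (`LeastSquaresLoss`, `update_gradients_and_hessians`) mirror the wording of this issue, not an exact library API; the point is that the per-sample losses fall out of the same pass that fills the gradients and hessians, so monitoring the training loss adds almost no overhead:

```python
import numpy as np

class LeastSquaresLoss:
    """Hypothetical loss object illustrating the proposed optimization."""

    def __call__(self, y_true, raw_predictions):
        # Current path: a separate full pass over the data just to score.
        return np.mean(0.5 * (raw_predictions - y_true) ** 2)

    def update_gradients_and_hessians(self, gradients, hessians,
                                      y_true, raw_predictions):
        # Proposed path: the residual is needed for the gradients anyway,
        # so the per-sample losses come almost for free in the same pass.
        residual = raw_predictions - y_true
        gradients[:] = residual
        hessians[:] = 1.0
        per_sample_loss = 0.5 * residual ** 2
        return per_sample_loss.mean()

y = np.array([1.0, 2.0, 3.0])
raw_predictions = np.zeros_like(y)  # as maintained by fit(), reused directly
gradients = np.empty_like(y)
hessians = np.empty_like(y)

loss = LeastSquaresLoss()
train_loss = loss.update_gradients_and_hessians(
    gradients, hessians, y, raw_predictions)
# Same value as a dedicated scoring pass, without the extra pass.
assert np.isclose(train_loss, loss(y, raw_predictions))
```

Note this sketch scores on the full training set (via the fit-time raw_predictions), which is exactly why results would differ slightly from scoring on a subset as is done currently.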

@NicolasHug NicolasHug added the perf Computational performance issue or improvement label Dec 17, 2018