https://travis-ci.org/fabianp/hoag.svg?branch=master

HOAG

Hyperparameter optimization with approximate gradient

https://raw.githubusercontent.com/fabianp/hoag/master/doc/comparison_ho_real_sim.png

Depends

  • scikit-learn 0.16

Usage

This package exports a LogisticRegressionCV class which automatically estimates the L2 regularization parameter of logistic regression. Like other scikit-learn objects, it has .fit and .predict methods. However, unlike standard scikit-learn estimators, the .fit method takes four arguments: the training data and labels, and the test data and labels. For example:

>>> from hoag import LogisticRegressionCV
>>> clf = LogisticRegressionCV()
>>> clf.fit(X_train, y_train, X_test, y_test)

where X_train, y_train, X_test, y_test are numpy arrays representing the train and test sets, respectively.
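For a self-contained run, something along these lines should work. The synthetic dataset and the 80/20 slicing below are illustrative choices, not part of the package; only LogisticRegressionCV comes from hoag, and a binary classification problem is assumed:

>>> from sklearn.datasets import make_classification
>>> from hoag import LogisticRegressionCV
>>> # generate a synthetic binary problem and split it by simple slicing
>>> X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
>>> X_train, y_train = X[:800], y[:800]
>>> X_test, y_test = X[800:], y[800:]
>>> # fit on the train set while tuning regularization against the test set
>>> clf = LogisticRegressionCV()
>>> clf.fit(X_train, y_train, X_test, y_test)
>>> y_pred = clf.predict(X_test)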

For a full usage example, check out this IPython notebook.

https://raw.githubusercontent.com/fabianp/hoag/master/doc/hoag_screenshot.png

Usage tips

Standardize the features of the input data so that each feature has unit variance. This makes the Hessian better conditioned. It can be done with, e.g., scikit-learn's StandardScaler.
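A minimal sketch, reusing the array names from the usage example above (for sparse input, pass with_mean=False to StandardScaler to avoid densifying the data):

>>> from sklearn.preprocessing import StandardScaler
>>> scaler = StandardScaler()
>>> # fit the scaling on the training data only, then apply it to both sets
>>> X_train = scaler.fit_transform(X_train)
>>> X_test = scaler.transform(X_test)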

Citing

If you use this, please cite it as

@inproceedings{PedregosaHyperparameter16,
  author    = {Fabian Pedregosa},
  title     = {Hyperparameter optimization with approximate gradient},
  booktitle = {Proceedings of the 33rd International Conference on Machine Learning,
               {ICML}},
  year      = {2016},
  url       = {http://jmlr.org/proceedings/papers/v48/pedregosa16.html},
}
