diff --git a/README.md b/README.md
index 85a0aad5..5ab94a45 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,8 @@
 Optuna Examples
 
 This page contains a list of example codes written with Optuna.
-The simplest codeblock looks like this:
+<details open>
+<summary>Simplest Codeblock</summary>
 
 ```python
 import optuna
@@ -20,16 +21,42 @@
 if __name__ == "__main__":
     study.optimize(objective, n_trials=1000, timeout=3)
     print(f"Best params is {study.best_params} with value {study.best_value}")
 ```
+</details>
 
-The examples below provide codeblocks similar to the example above for various different scenarios.
+> [!NOTE]
+> For a quick start with [Optuna Dashboard](https://github.com/optuna/optuna-dashboard) using in-memory storage, please take a look at [this example](./dashboard/run_server_simple.py).
 
-### Simple Black-box Optimization
+> [!TIP]
+> Couldn't find your use case?
+> The [FAQ](https://optuna.readthedocs.io/en/stable/faq.html) may help you implement what you want.
+> This repository also provides examples for the following scenarios:
+> 1. [Objective function with additional arguments](./sklearn/sklearn_additional_args.py), which is useful when you would like to pass arguments besides `trial` to your objective function.
+>
+> 2. [Manually provide trials with sampler](./basic_and_faq_usages/enqueue_trial.py), which is useful when you would like to force certain parameters to be sampled.
+>
+> 3. [Callback to control the termination criterion of study](./basic_and_faq_usages/max_trials_callback.py), which is useful when you would like to define your own termination criterion other than `n_trials` or `timeout`.
 
-* [Quadratic function](./basic_and_faq_usages/quadratic_simple.py)
-* [Quadratic multi-objective function](./basic_and_faq_usages/quadratic_simple_multi_objective.py)
-* [Quadratic function with constraints](./basic_and_faq_usages/quadratic_simple_constraint.py)
+## Examples for Diverse Problem Setups
 
-### Examples with ML Libraries
+Here are the links to the example code for the corresponding setups.
+
+<details open>
+<summary>Simple Black-box Optimization</summary>
+
+* [Quadratic Function](./basic_and_faq_usages/quadratic_simple.py)
+* [Quadratic Multi-Objective Function](./basic_and_faq_usages/quadratic_simple_multi_objective.py)
+* [Quadratic Function with Constraints](./basic_and_faq_usages/quadratic_simple_constraint.py)
+</details>
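The quadratic files above are plain black-box problems. As a rough, Optuna-free sketch of what they do (random search stands in for Optuna's samplers; the `shift` argument and function names are illustrative, not from the linked files), this also shows the `functools.partial` idiom behind the "objective function with additional arguments" example:

```python
import random
from functools import partial


def objective(x: float, shift: float) -> float:
    # Black box to minimize: (x - shift) ** 2, with its minimum at x = shift.
    return (x - shift) ** 2


def random_search(func, low: float, high: float, n_trials: int, seed: int = 0):
    # Naive stand-in for a sampler: draw x uniformly and keep the best value.
    rng = random.Random(seed)
    best_x, best_value = None, float("inf")
    for _ in range(n_trials):
        x = rng.uniform(low, high)
        value = func(x)
        if value < best_value:
            best_x, best_value = x, value
    return best_x, best_value


if __name__ == "__main__":
    # partial() binds the extra argument, leaving a single-parameter black box;
    # with Optuna you would similarly pass a lambda wrapping the objective to study.optimize.
    best_x, best_value = random_search(partial(objective, shift=2.0), -10, 10, n_trials=1000)
    print(f"Best x is {best_x:.3f} with value {best_value:.6f}")
```

The real examples replace the uniform draw with `trial.suggest_float` and let Optuna's sampler pick the next point.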
+
+<details open>
+<summary>Multi-Objective Optimization</summary>
+
+* [Optimization with BoTorch](./multi_objective/botorch_simple.py)
+* [Optimization of Multi-Layer Perceptron with PyTorch](./multi_objective/pytorch_simple.py)
+</details>
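Multi-objective optimization, as in the BoTorch and PyTorch examples above, returns a set of Pareto-optimal trials rather than a single best one; in Optuna you create such a study with `optuna.create_study(directions=["minimize", "minimize"])` and read `study.best_trials`. A small, library-free sketch of the Pareto-dominance idea, over hypothetical (loss, model size) pairs:

```python
def dominates(a, b):
    # a dominates b if a is no worse on every objective and strictly better
    # on at least one (both objectives are minimized here).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))


def pareto_front(points):
    # Keep exactly the points that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]


if __name__ == "__main__":
    # Hypothetical (loss, model_size) pairs from finished trials;
    # (0.15, 9.0) and (0.25, 4.0) are dominated and drop out of the front.
    trials = [(0.10, 8.0), (0.20, 3.0), (0.15, 9.0), (0.30, 2.0), (0.25, 4.0)]
    print(pareto_front(trials))
```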
+
+<details open>
+<summary>Machine Learning (Incl. LightGBMTuner and OptunaSearchCV)</summary>
 
 * [AllenNLP](./allennlp/allennlp_simple.py)
 * [AllenNLP (Jsonnet)](./allennlp/allennlp_jsonnet.py)
@@ -56,19 +83,13 @@
 * [Tensorflow (eager)](./tensorflow/tensorflow_eager_simple.py)
 * [XGBoost](./xgboost/xgboost_simple.py)
 
-### An example of Optuna Dashboard
-
-The following example demonstrates how to use [Optuna Dashboard](https://github.com/optuna/optuna-dashboard).
-
-* [Starting Optuna Dashboard with in-memory storage](./dashboard/run_server_simple.py)
-
-### An example where an objective function uses additional arguments
-
-The following example demonstrates how to implement an objective function that uses additional arguments other than `trial`.
+If you are looking for an example of reinforcement learning, please take a look at the following:
+* [Optimization of Hyperparameters for Stable-Baselines Agent](./rl/sb3_simple.py)
 
-* [Scikit-learn (callable class version)](./sklearn/sklearn_additional_args.py)
+</details>
 
-### Examples of Pruning
+<details open>
+<summary>Pruning</summary>
 
 The following example demonstrates how to implement pruning logic with Optuna.
@@ -76,98 +97,94 @@
 In addition, integration modules are available for the following libraries, providing simpler interfaces to utilize pruning.
 
-* [Pruning with Catalyst integration module](./pytorch/catalyst_simple.py)
-* [Pruning with CatBoost integration module](./catboost/catboost_pruning.py)
-* [Pruning with Chainer integration module](./chainer/chainer_integration.py)
-* [Pruning with ChainerMN integration module](./chainer/chainermn_integration.py)
-* [Pruning with FastAI integration module](./fastai/fastai_simple.py)
-* [Pruning with Keras integration module](./keras/keras_integration.py)
-* [Pruning with LightGBM integration module](./lightgbm/lightgbm_integration.py)
-* [Pruning with PyTorch integration module](./pytorch/pytorch_simple.py)
-* [Pruning with PyTorch Ignite integration module](./pytorch/pytorch_ignite_simple.py)
-* [Pruning with PyTorch Lightning integration module](./pytorch/pytorch_lightning_simple.py)
-* [Pruning with PyTorch Lightning integration module (DDP)](./pytorch/pytorch_lightning_ddp.py)
-* [Pruning with Tensorflow integration module](./tensorflow/tensorflow_estimator_integration.py)
-* [Pruning with XGBoost integration module](./xgboost/xgboost_integration.py)
-* [Pruning with XGBoost integration module (cross validation, XGBoost.cv)](./xgboost/xgboost_cv_integration.py)
-
-### Examples of Samplers
+* [Pruning with Catalyst Integration Module](./pytorch/catalyst_simple.py)
+* [Pruning with CatBoost Integration Module](./catboost/catboost_pruning.py)
+* [Pruning with Chainer Integration Module](./chainer/chainer_integration.py)
+* [Pruning with ChainerMN Integration Module](./chainer/chainermn_integration.py)
+* [Pruning with FastAI Integration Module](./fastai/fastai_simple.py)
+* [Pruning with Keras Integration Module](./keras/keras_integration.py)
+* [Pruning with LightGBM Integration Module](./lightgbm/lightgbm_integration.py)
+* [Pruning with PyTorch Integration Module](./pytorch/pytorch_simple.py)
+* [Pruning with PyTorch Ignite Integration Module](./pytorch/pytorch_ignite_simple.py)
+* [Pruning with PyTorch Lightning Integration Module](./pytorch/pytorch_lightning_simple.py)
+* [Pruning with PyTorch Lightning Integration Module (DDP)](./pytorch/pytorch_lightning_ddp.py)
+* [Pruning with Tensorflow Integration Module](./tensorflow/tensorflow_estimator_integration.py)
+* [Pruning with XGBoost Integration Module](./xgboost/xgboost_integration.py)
+* [Pruning with XGBoost Integration Module (Cross Validation Version)](./xgboost/xgboost_cv_integration.py)
+</details>
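In Optuna, pruning hinges on the objective reporting intermediate values via `trial.report(value, step)` and raising `optuna.TrialPruned` when `trial.should_prune()` returns true; the integration modules above wrap that handshake into framework callbacks. Below is a stdlib-only sketch of a median-style rule, where the learning curves, trial counts, and thresholds are all invented for illustration:

```python
import random
import statistics


def run_study(n_trials: int = 20, n_steps: int = 10, seed: int = 1):
    rng = random.Random(seed)
    history = {step: [] for step in range(n_steps)}  # intermediate values of surviving trials
    results = []
    for _ in range(n_trials):
        slope = rng.uniform(0.5, 2.0)  # hypothetical learning curve: loss = slope / (step + 1)
        pruned = False
        for step in range(n_steps):
            value = slope / (step + 1)
            # The analogue of trial.report(value, step) + trial.should_prune():
            # prune if we are worse than the median of earlier trials at this step.
            earlier = history[step]
            if len(earlier) >= 5 and value > statistics.median(earlier):
                pruned = True
                break
            history[step].append(value)
        results.append("pruned" if pruned else value)
    return results


if __name__ == "__main__":
    results = run_study()
    completed = [r for r in results if r != "pruned"]
    print(f"{len(completed)} of {len(results)} trials ran to completion")
```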
+
+<details open>
+<summary>Samplers</summary>
 
 * [Warm Starting CMA-ES](./samplers/warm_starting_cma.py)
 
-### Examples of User-Defined Sampler
-
+If you are interested in defining a user-defined sampler, here is an example:
 * [SimulatedAnnealingSampler](./samplers/simulated_annealing_sampler.py)
+</details>
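A user-defined sampler such as the `SimulatedAnnealingSampler` above subclasses `optuna.samplers.BaseSampler` and puts its proposal/acceptance rule in `sample_relative`. Here is just that rule, sketched without Optuna on an assumed one-dimensional quadratic; the cooling schedule and step size are illustrative, not taken from the linked file:

```python
import math
import random


def simulated_annealing(func, x0: float, n_steps: int = 2000, temp0: float = 1.0, seed: int = 0):
    # Propose a Gaussian neighbor; accept worse moves with probability
    # exp(-delta / T), where the temperature T cools over time.
    rng = random.Random(seed)
    x, value = x0, func(x0)
    best_x, best_value = x, value
    for step in range(1, n_steps + 1):
        temperature = temp0 / step
        candidate = x + rng.gauss(0.0, 0.5)
        cand_value = func(candidate)
        delta = cand_value - value
        if delta <= 0 or rng.random() < math.exp(-delta / temperature):
            x, value = candidate, cand_value
            if value < best_value:
                best_x, best_value = x, value
    return best_x, best_value


if __name__ == "__main__":
    best_x, best_value = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=-5.0)
    print(f"Best x is {best_x:.3f} with value {best_value:.6f}")
```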
 
-### Examples of Terminator
+<details open>
+<summary>Terminator</summary>
 
 * [Optuna Terminator](./terminator/terminator_simple.py)
 * [OptunaSearchCV with Terminator](./terminator/terminator_search_cv.py)
+</details>
 
-### Examples of Multi-Objective Optimization
-
-* [Optimization with BoTorch](./multi_objective/botorch_simple.py)
-* [Optimization of MLP with PyTorch](./multi_objective/pytorch_simple.py)
-
-### Examples of Visualization
-
-* [Visualizing study](https://colab.research.google.com/github/optuna/optuna-examples/blob/main/visualization/plot_study.ipynb)
-* [Visualizing study with HiPlot](https://colab.research.google.com/github/optuna/optuna-examples/blob/main/hiplot/plot_study.ipynb)
+<details open>
+<summary>Visualization</summary>
 
-### An example to enqueue trials with given parameter values
+* [Visualizing Study](https://colab.research.google.com/github/optuna/optuna-examples/blob/main/visualization/plot_study.ipynb)
+* [Visualizing Study with HiPlot](https://colab.research.google.com/github/optuna/optuna-examples/blob/main/hiplot/plot_study.ipynb)
+</details>
 
-* [Enqueuing trials with given parameters](./basic_and_faq_usages/enqueue_trial.py)
+<details open>
+<summary>Distributed Optimization</summary>
 
-### Examples of aim
-
-* [Tracking optimization process with aim](./aim/aim_integration.py)
-
-### Examples of MLflow
-
-* [Tracking optimization process with MLflow](./mlflow/keras_mlflow.py)
-
-### Examples of Weights & Biases
-
-* [Tracking optimization process with Weights & Biases](./wandb/wandb_integration.py)
-
-### Examples of Hydra
-
-* [Optimization with Hydra](./hydra/simple.py)
-
-### Examples of Distributed Optimization
-
-* [Optimizing on a Dask cluster](./dask/dask_simple.py)
+* [Optimizing on a Dask Cluster](./dask/dask_simple.py)
 * [Optimizing on Kubernetes](./kubernetes/README.md)
-* [Optimizing with Ray's joblib backend](./ray/ray_joblib.py)
+* [Optimizing with Ray's Joblib Backend](./ray/ray_joblib.py)
+</details>
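Dask, Kubernetes, and Ray all parallelize the same thing: independent trials evaluated concurrently, with results collected in shared storage (in real distributed runs, each worker calls `study.optimize` against a shared RDB backend). A minimal single-machine sketch with the stdlib thread pool, using a hypothetical objective:

```python
import random
from concurrent.futures import ThreadPoolExecutor


def objective(x: float) -> float:
    # Hypothetical black box; each trial is independent, so trials parallelize freely.
    return (x - 1.0) ** 2


def run_trial(seed: int):
    # One self-contained trial: sample a point and evaluate it.
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    return x, objective(x)


if __name__ == "__main__":
    # Each "worker" here is a thread; Dask/Ray/Kubernetes replace this pool
    # with remote workers that report results to shared storage.
    with ThreadPoolExecutor(max_workers=4) as pool:
        trials = list(pool.map(run_trial, range(200)))
    best_x, best_value = min(trials, key=lambda t: t[1])
    print(f"Best x is {best_x:.3f} with value {best_value:.6f}")
```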
 
-### Examples of Reinforcement Learning
+<details open>
+<summary>MLOps Platform</summary>
 
-* [Optimization of Hyperparameters for Stable-Baslines Agent](./rl/sb3_simple.py)
+* [Tracking Optimization Process with aim](./aim/aim_integration.py)
+* [Tracking Optimization Process with MLflow](./mlflow/keras_mlflow.py)
+* [Tracking Optimization Process with Weights & Biases](./wandb/wandb_integration.py)
+* [Optimization with Hydra](./hydra/simple.py)
+</details>
 
-### External projects using Optuna
+<details open>
+<summary>External Projects Using Optuna</summary>
 
-* [Hugging Face Trainer's hyperparameter search](https://huggingface.co/docs/transformers/main/main_classes/trainer#transformers.Trainer.hyperparameter_search)
+* [Hugging Face Trainer's Hyperparameter Search](https://huggingface.co/docs/transformers/main/main_classes/trainer#transformers.Trainer.hyperparameter_search)
 * [Allegro Trains](https://github.com/allegroai/trains)
-* [BBO-Rietveld: Automated crystal structure refinement](https://github.com/quantumbeam/BBO-Rietveld)
+* [BBO-Rietveld: Automated Crystal Structure Refinement](https://github.com/quantumbeam/BBO-Rietveld)
 * [Catalyst](https://github.com/catalyst-team/catalyst)
 * [CuPy](https://github.com/cupy/cupy)
-* [Hydra's Optuna Sweeper plugin](https://hydra.cc/docs/next/plugins/optuna_sweeper/)
+* [Hydra's Optuna Sweeper Plugin](https://hydra.cc/docs/next/plugins/optuna_sweeper/)
 * [Mozilla Voice STT](https://github.com/mozilla/DeepSpeech)
 * [neptune.ai](https://neptune.ai)
-* [OptGBM: A scikit-learn compatible LightGBM estimator with Optuna](https://github.com/Y-oHr-N/OptGBM)
+* [OptGBM: A scikit-learn Compatible LightGBM Estimator with Optuna](https://github.com/Y-oHr-N/OptGBM)
 * [Optuna-distributed](https://github.com/xadrianzetx/optuna-distributed)
 * [PyKEEN](https://github.com/pykeen/pykeen)
 * [RL Baselines Zoo](https://github.com/DLR-RM/rl-baselines3-zoo)
-* [Hyperparameter Optimization for Machine Learning, code repository for online course](https://github.com/solegalli/hyperparameter-optimization)
+* [Hyperparameter Optimization for Machine Learning, Code Repository for Online Course](https://github.com/solegalli/hyperparameter-optimization)
+</details>
 
-PRs to add additional projects welcome!
+> [!IMPORTANT]
+> PRs to add additional real-world examples or projects are welcome!
 
 ### Running with Optuna's Docker images?
 
-You can use our docker images with the tag ending with `-dev` to run most of the examples.
-For example, you can run [PyTorch Simple](./pytorch/pytorch_simple.py) via `docker run --rm -v $(pwd):/prj -w /prj optuna/optuna:py3.7-dev python pytorch/pytorch_simple.py`.
-Also, you can try our visualization example in Jupyter Notebook by opening `localhost:8888` in your browser after executing this:
+Our Docker images, tagged with a `-dev` suffix, can run most of the examples.
+For example, [PyTorch Simple](./pytorch/pytorch_simple.py) can be run via:
+
+```bash
+$ docker run --rm -v $(pwd):/prj -w /prj optuna/optuna:py3.7-dev python pytorch/pytorch_simple.py
+```
+
+Additionally, our visualization example can be run in a Jupyter Notebook by opening `localhost:8888` in your browser after executing the following:
 
 ```bash
-docker run -p 8888:8888 --rm optuna/optuna:py3.7-dev jupyter notebook --allow-root --no-browser --port 8888 --ip 0.0.0.0 --NotebookApp.token='' --NotebookApp.password=''
+$ docker run -p 8888:8888 --rm optuna/optuna:py3.7-dev jupyter notebook --allow-root --no-browser --port 8888 --ip 0.0.0.0 --NotebookApp.token='' --NotebookApp.password=''
 ```