Motivation
In practical applications, we often need to optimize many targets with identical or similar search spaces. Warm-starting or meta-learning for the Tree-structured Parzen Estimator (TPE) could significantly accelerate these optimizations. For instance, in materials development, we might have many (10-100, or even 1000+) Optuna studies for various materials and want to optimize efficiently for a new material.
Description
This issue proposes to implement warm-starting or meta-learning capabilities for TPE in Optuna. This feature would allow leveraging previous optimization runs to initialize and guide the search for new, related tasks, thus reducing the overall optimization time.
One promising approach is based on meta-learning using task similarity, as demonstrated in Watanabe et al.'s work on speeding up multi-objective hyperparameter optimization.
This approach learns from the distributions of promising hyperparameters across related tasks to inform the TPE sampler's prior distributions for new tasks.
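To make the idea concrete, here is a minimal, hypothetical sketch of a task-similarity measure in the spirit of Watanabe et al.: two tasks are considered similar when they agree on which configurations are promising (the top-gamma fraction by objective value, mirroring TPE's good/bad split). The function names and the Jaccard-overlap choice are illustrative assumptions, not the paper's exact method.

```python
def top_gamma_set(observations, gamma=0.25):
    """Return the ids of the best gamma-fraction of configurations.

    `observations` maps a hashable config id to its objective value
    (lower is better), mirroring how TPE splits trials into "good"
    and "bad" groups.
    """
    ranked = sorted(observations, key=observations.get)
    n_top = max(1, int(len(ranked) * gamma))
    return set(ranked[:n_top])


def task_similarity(obs_a, obs_b, gamma=0.25):
    """Jaccard overlap of the promising regions of two tasks.

    A crude stand-in for a task-similarity measure: tasks that agree
    on which configurations are promising score close to 1, and
    unrelated tasks score close to 0. Such a score could weight how
    strongly each previous study influences the TPE prior for a new task.
    """
    top_a = top_gamma_set(obs_a, gamma)
    top_b = top_gamma_set(obs_b, gamma)
    return len(top_a & top_b) / len(top_a | top_b)
```

A study of a closely related material would then contribute more to the new task's prior than a dissimilar one, which is the core intuition behind the meta-learning approach.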
Alternatives (optional)
While other approaches like transfer learning could be considered, meta-learning appears particularly well-suited for this scenario, given its ability to generalize across tasks with similar search spaces. Direct warm-starting, using the best hyperparameters from previous studies as a starting point, could be a simpler but potentially less effective alternative for complex search spaces.
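The direct warm-starting alternative can be sketched in a few lines: pool the finished trials from previous studies, deduplicate, and take the best parameter sets as the first configurations to evaluate in the new study. The helper below is a hypothetical illustration in plain Python; in Optuna, each returned dict could be passed to `study.enqueue_trial(params)` so the new study evaluates these configurations before the TPE sampler takes over.

```python
def warm_start_params(previous_trials, n_seeds=5):
    """Pick the best parameter sets from earlier studies to evaluate first.

    `previous_trials` is a list of (params_dict, objective_value) pairs
    pooled from finished studies; lower objective is better.
    """
    ranked = sorted(previous_trials, key=lambda t: t[1])
    seeds, seen = [], set()
    for params, _ in ranked:
        key = tuple(sorted(params.items()))
        if key not in seen:  # skip duplicate configurations
            seen.add(key)
            seeds.append(params)
        if len(seeds) == n_seeds:
            break
    return seeds
```

As the issue notes, this is simpler than meta-learning but may transfer poorly when the search space is complex and the best regions differ across tasks.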
Additional context (optional)
No response