Hi, I'm now trying some simple experiments with Ray + Dragonfly, following the instructions here: https://wood-b.github.io/post/a-novices-guide-to-hyperparameter-optimization-at-scale. My example code is pasted below:

```python
import numpy as np

from ray import tune
from ray.tune.search.dragonfly import DragonflySearch
from ray.tune.schedulers import AsyncHyperBandScheduler

from dragonfly import load_config
from dragonfly.exd.experiment_caller import CPFunctionCaller
from dragonfly.opt.gp_bandit import CPGPBandit


def evaluation_fn(width, height):
    return (0.1 + width / 100) ** (-1) + height * 0.1


def my_func(config):
    # Hyperparameters suggested by Dragonfly arrive as a single "point"
    width, height = config["point"]
    value = evaluation_fn(width, height)
    return {"mean_loss": value}


np.random.seed(12345)

param_list = [
    {"name": "width",
     "type": "discrete_numeric",
     "items": "-".join(f"{x}" for x in range(0, 21, 2))},
    {"name": "height",
     "type": "discrete_numeric",
     "items": list(range(-100, 101, 5))},
]
param_dict = {"name": "BO_CGCNN", "domain": param_list}
domain_config = load_config(param_dict)
domain, domain_orderings = domain_config.domain, domain_config.domain_orderings

# Define the HPO search algorithm (Bayesian optimization via Dragonfly)
func_caller = CPFunctionCaller(None, domain, domain_orderings=domain_orderings)
optimizer = CPGPBandit(func_caller, ask_tell_mode=True)
bo_search_alg = DragonflySearch(optimizer, metric="mean_loss", mode="min")
scheduler = AsyncHyperBandScheduler(stop_last_trials=False)

tuner = tune.Tuner(
    my_func,
    tune_config=tune.TuneConfig(
        search_alg=bo_search_alg,
        scheduler=scheduler,
        num_samples=50,
        metric="mean_loss",
        mode="min",
    ),
)
results = tuner.fit()
print(results)

best_result = results.get_best_result()
print(best_result)
```

One thing I notice is that it generates a bunch of duplicated points, especially around the optimal point, e.g. [18.0, -100] and [20.0, -100]. Maybe re-evaluation of the same points is the expected behavior (although it adds no additional information), but could anyone give a more detailed explanation? There seems to be a related Q&A on MATLAB's website: https://www.mathworks.com/matlabcentral/answers/477377-duplicate-points-evaluated-in-bayesian-optimization, but I'm not sure whether the same reason applies to Dragonfly.
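For reference, here is a small helper I use to quantify the duplication (the helper name and the example points are mine, not part of Ray or Dragonfly): it counts how often each suggested `(width, height)` point was evaluated more than once.

```python
from collections import Counter

def count_duplicates(points):
    """Return the points that were evaluated more than once, with their counts.

    `points` is a list of (width, height) tuples, e.g. collected from the
    trial configs after tuning finishes.
    """
    counts = Counter(tuple(p) for p in points)
    return {p: n for p, n in counts.items() if n > 1}

# Hypothetical points resembling what I see near the optimum
observed = [(18.0, -100), (20.0, -100), (20.0, -100), (18.0, -100), (16.0, -95)]
print(count_duplicates(observed))
```

With 50 samples over a discrete 11 x 41 grid, some collisions are possible by chance, but the ones I see cluster tightly around the optimum, which is why I suspect the acquisition function rather than randomness.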
Wrong Dragonfly 🤷🏼
Please open the issue here instead: https://github.com/dragonfly/dragonfly