Acquisition function for Exploration only #331

Open
jdridder opened this issue Dec 28, 2023 · 4 comments

@jdridder
Contributor

For many real-world applications, exploration-only strategies are very useful to obtain precisely trained GP models over the whole design space. So far this can be done by using the UCB acquisition function with a high exploration parameter β, but that is limited to the single-output case. The situation for multi-output models is trickier, though.
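For illustration, something along these lines in vanilla BoTorch works for the single-output case (a minimal sketch; the toy data, bounds, and optimizer settings are just placeholders, not from a real application):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import UpperConfidenceBound
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Hypothetical single-output toy data on the unit square, purely illustrative.
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = (train_X ** 2).sum(dim=-1, keepdim=True)

model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# A very large beta makes UCB behave almost purely exploratory.
ucb = UpperConfidenceBound(model=model, beta=1000.0)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, _ = optimize_acqf(ucb, bounds=bounds, q=1, num_restarts=8, raw_samples=128)
```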

In this case, the acquisition function only needs to focus on the posterior variance, which should be minimized for a new candidate point, as implemented in qNegIntegratedPosteriorVariance, which supports multi-output models.
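A rough sketch of what I mean in vanilla BoTorch (toy two-output data; the equal-weight ScalarizedPosteriorTransform is just one way to aggregate the per-output variances and may not be the intended usage):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition.active_learning import qNegIntegratedPosteriorVariance
from botorch.acquisition.objective import ScalarizedPosteriorTransform
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Hypothetical two-output toy problem on the unit square, purely illustrative.
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = torch.cat(
    [(train_X ** 2).sum(-1, keepdim=True), train_X.sin().sum(-1, keepdim=True)], dim=-1
)

model = SingleTaskGP(train_X, train_Y)  # two independent outputs
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# MC points over which the posterior variance is integrated (here a random set).
mc_points = torch.rand(256, 2, dtype=torch.double)

# Equal-weight scalarization so both outputs contribute to the variance criterion.
transform = ScalarizedPosteriorTransform(weights=torch.ones(2, dtype=torch.double))
qnipv = qNegIntegratedPosteriorVariance(
    model=model, mc_points=mc_points, posterior_transform=transform
)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, _ = optimize_acqf(qnipv, bounds=bounds, q=1, num_restarts=8, raw_samples=128)
```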

This would be truly useful for all kinds of accurate surrogate modeling of multi-output scenarios where optimization is not the goal in the first place.

@jduerholt
Contributor

Hi JD,

thanks for pointing this out. We are planning to add active learning acqfs soon. I need to think a bit more about the MO case ;) I just saw there is a large PR in botorch with new active learning acqfs: pytorch/botorch#2163. As soon as they are available there, we will also integrate them in bofire.

Best,

Johannes

@jdridder
Contributor Author

Hi Johannes,

thanks for your answer. I'm glad to hear that you are about to implement the active learning acqfs. If you want to outsource some work, I will eagerly help you, as I am currently experimenting with active learning acqfs in vanilla botorch myself. Would love to see this work in bofire.

Best regards,
JD

@jduerholt
Contributor

jduerholt commented Feb 23, 2024

This is great news! I indeed started a branch last week to implement an ActiveLearningStrategy, which is not yet finished. I will clean it up and document it a bit so that you can fork it and finish it. Thank you very much!

PS: Kudos for using vanilla botorch, that is an achievement, but it would be really nice to have it in BoFire to reduce the overhead and boilerplate code.

@jdridder
Contributor Author

For the sake of documentation
To analyze the performance of the new active learning acquisition function qNegIntegratedPosteriorVariance, I benchmarked it against the RandomStrategy and against qUCB with a high exploration parameter of 1000. For each strategy, the mean squared error between the trained model and the true solution of the Himmelblau benchmark function was computed over five separate trial rounds; a sketch of the evaluation is shown below.
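Roughly, the MSE was computed along these lines (a sketch only; `model` stands for the fitted surrogate of the respective strategy, and the grid size and domain bounds are illustrative assumptions):

```python
import torch

def himmelblau(x: torch.Tensor) -> torch.Tensor:
    # True Himmelblau function (domain assumed here to be [-6, 6]^2).
    x1, x2 = x[..., 0], x[..., 1]
    return (x1 ** 2 + x2 - 11) ** 2 + (x1 + x2 ** 2 - 7) ** 2

# Dense test grid over the design space.
xs = torch.linspace(-6, 6, 100, dtype=torch.double)
grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1).reshape(-1, 2)

# `model` is assumed to be the fitted BoTorch surrogate of a given strategy.
with torch.no_grad():
    pred = model.posterior(grid).mean.squeeze(-1)
mse = torch.mean((pred - himmelblau(grid)) ** 2)
```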

The graph shows the mean MSE and its standard deviation over the five rounds. It illustrates that qNegIntegratedPosteriorVariance is clearly superior to the RandomStrategy and even to qUCB, producing the lowest MSE relative to the true solution. This makes the new ActiveLearningStrategy a good option for pure exploration of the objective function.
[Figure: active_learning_performance, mean MSE and standard deviation over five trials for each strategy]
