Implement a Bayesian optimization algorithm for CADET-Process based on Ax #34
Conversation
@schmoelder: Right now, multi-objective optimization works.

To-dos:
1. Finalize unit tests for …, because these follow different implementations. I will see if I can implement single-objective as a special case of multi-objective, or if I have to use different implementations depending on the case.
2. Ax provides a runner class in the developer API. After a short interlude of experimenting with the …
3. Start writing docstrings for the classes and methods, and clean up the code.
4. Implement different surrogate models / acquisition functions in Ax. I'm currently trying to understand the part of the Ax framework that deals with models. It is again complicated and not easy to disentangle the different components, so I'm just writing things down here to track the evolution of my knowledge.

A number of ready-made models are already available in the registry:
```python
MODEL_KEY_TO_MODEL_SETUP = {
    "MOO": ModelSetup(
        bridge_class=TorchModelBridge,
        model_class=MultiObjectiveBotorchModel,
        transforms=Cont_X_trans + Y_trans,
        standard_bridge_kwargs=STANDARD_TORCH_BRIDGE_KWARGS,
    ),
    "BoTorch": ModelSetup(
        bridge_class=TorchModelBridge,
        model_class=ModularBoTorchModel,
        transforms=Cont_X_trans + Y_trans,
        standard_bridge_kwargs=STANDARD_TORCH_BRIDGE_KWARGS,
    ),
}
```

For single objective, a … So the best way to go currently, I think, is to: …
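To make the registry pattern above concrete without depending on Ax itself, here is a self-contained sketch of the same idea: string keys map to a `ModelSetup` record bundling a bridge class and a model class, and a factory function instantiates both. The class names are stand-ins, not the real Ax classes.

```python
from dataclasses import dataclass, field
from typing import Sequence

# Stand-ins for Ax's real classes -- purely illustrative, not the Ax API.
class TorchModelBridge: ...
class MultiObjectiveBotorchModel: ...
class ModularBoTorchModel: ...

@dataclass(frozen=True)
class ModelSetup:
    """Mirrors the shape of the registry entries above: each key bundles a
    bridge class, a model class, and the transforms applied to the data."""
    bridge_class: type
    model_class: type
    transforms: Sequence[str] = field(default_factory=tuple)

MODEL_KEY_TO_MODEL_SETUP = {
    "MOO": ModelSetup(TorchModelBridge, MultiObjectiveBotorchModel, ("Cont_X", "Y")),
    "BoTorch": ModelSetup(TorchModelBridge, ModularBoTorchModel, ("Cont_X", "Y")),
}

def build_model(key: str) -> tuple:
    """Look up a setup by key and instantiate bridge and model -- a
    simplified analogue of what a registry-backed factory does."""
    setup = MODEL_KEY_TO_MODEL_SETUP[key]
    return setup.bridge_class(), setup.model_class()

bridge, model = build_model("MOO")
```

The benefit of this layout is that adding a new surrogate/acquisition combination only requires a new registry entry, not new factory code.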
Questions:
1. Do you think you can work on making intermediate results of the evaluation toolchain available, or give me a short idea of how to do this? This would improve the efficiency of my callbacks, unless I'm missing something again 😄 (Can be solved by adding the flag ….)
2. Post-processing is inefficient. Hypothesis (Jo): try profiling in VS Code.
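VS Code's Python profiling views ultimately build on the standard-library profilers, so the hot spots can also be found without the IDE. A minimal stand-alone sketch (the `post_process` function here is a made-up placeholder for the real post-processing step):

```python
import cProfile
import io
import pstats

def post_process(n: int) -> float:
    # Hypothetical stand-in for the expensive post-processing step.
    return sum(i ** 0.5 for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
result = post_process(100_000)
profiler.disable()

# Collect a text report sorted by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # top 5 entries
report = stream.getvalue()
```

Reading the `cumulative` column of the report usually points directly at the function worth optimizing.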
Tasks for this week (priority): …

If I can add: …
@schmoelder, is there an assertion in your test suite that checks the near-equality of two arrays? If not, should I write one?
I mostly just use np.testing.assert_almost_equal.
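For array comparisons with an explicit tolerance, `np.testing.assert_allclose` is a common alternative; a minimal sketch with made-up data:

```python
import numpy as np

found = np.array([1.0000001, 2.0, 3.0])
expected = np.array([1.0, 2.0, 3.0])

# Passes: elementwise |found - expected| <= atol + rtol * |expected|.
np.testing.assert_allclose(found, expected, rtol=1e-5, atol=1e-8)

# A clear mismatch raises AssertionError with a per-element report.
try:
    np.testing.assert_allclose(found, expected + 1.0, rtol=1e-5)
    raised = False
except AssertionError:
    raised = True
```

The relative/absolute tolerance split makes it easier to tune than a fixed decimal count.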
@schmoelder, do you know a good, simple multi-objective optimization problem with linear constraints and a known Pareto front, ideally implemented in Python (not in Ax, so I can use it for other optimization packages as well)? If not, do you have experience deriving the Pareto front for simple problems, or do you know whom to ask? I asked ChatGPT for a problem, but I suspect it got the Pareto front wrong. Maybe it's also me; we can have a look together tomorrow.
Maybe some of pymoo's test problems would work (see: https://pymoo.org/problems/test_problems.html#Multi-Objective).
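Independent of the library used, a reference Pareto front for small problems can be approximated by brute force: evaluate a dense sample of candidates and keep only the non-dominated objective vectors. A NumPy sketch (all objectives minimized; the sampled objective values are synthetic):

```python
import numpy as np

def pareto_mask(F: np.ndarray) -> np.ndarray:
    """Boolean mask selecting the non-dominated rows of F (minimization).
    A row is dominated if some other row is <= in every objective and
    strictly < in at least one."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(F, i, axis=0)
        dominated = np.any(
            np.all(others <= F[i], axis=1) & np.any(others < F[i], axis=1)
        )
        mask[i] = not dominated
    return mask

# Synthetic sample of 200 bi-objective evaluations in [0, 1]^2.
rng = np.random.default_rng(0)
F = rng.random((200, 2))
front = F[pareto_mask(F)]
```

This O(n²) filter is too slow for large archives, but as a ground truth for unit tests of small problems it is simple and hard to get wrong.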
@schmoelder, I have written a test suite for Ax. It is not completely finalized, but I think it's a good start and can also be used for other optimizers. I wouldn't mind if you took a look and reviewed whether you're happy with the implementation; I don't mind changing parts of it.
Thanks a lot! I will definitely have a look and review! Btw, I took the liberty to rebase again; you might have to …
I used …
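The exact commands were elided above; after an upstream rebase/force-push, a common way to re-sync a local branch (discarding local divergence) is `git fetch` followed by `git reset --hard`. A self-contained simulation in throwaway repositories (all paths and names are made up for the demo):

```shell
# Simulate a maintainer force-pushing a rebased branch, then re-sync.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q --bare remote.git

# Our working clone with one commit pushed to 'main'.
git clone -q remote.git work
cd work
git symbolic-ref HEAD refs/heads/main   # pin the branch name for the demo
git config user.email "dev@example.com"
git config user.name "dev"
echo v1 > file.txt
git add file.txt
git commit -qm "initial"
git push -q origin main

# Maintainer rewrites history in a second clone and force-pushes.
cd "$tmp"
git clone -q -b main remote.git maintainer
cd maintainer
git config user.email "m@example.com"
git config user.name "m"
git commit -q --amend -m "initial (rebased)"
git push -q --force origin main

# Back in our clone: fetch the rewritten branch and hard-reset onto it.
cd "$tmp/work"
git fetch -q origin
git reset -q --hard origin/main
```

Note that `reset --hard` throws away local commits not on the remote; anything worth keeping should be rebased or cherry-picked instead.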
This comment might give us some idea of how to address custom initialization:
This was necessary because NEHVI does not support single-objective optimization.
This is done to reduce iterations, because the MOO algorithms need a long time to remove dominated solutions from the Pareto set. If mismatch_tol is large enough, a fraction of non-Pareto-optimal solutions is accepted.
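`mismatch_tol` is a project-specific test parameter; purely as an illustration of the acceptance rule described above (not the CADET-Process implementation), a test helper could accept a result set whenever the fraction of found points not lying on the known Pareto front stays below the tolerance:

```python
import numpy as np

def frac_off_front(found: np.ndarray, front: np.ndarray, atol: float = 1e-6) -> float:
    """Fraction of rows in `found` that are not within `atol` of any row of
    the reference Pareto front (hypothetical helper)."""
    # Pairwise distances between found points and reference front points.
    dists = np.linalg.norm(found[:, None, :] - front[None, :, :], axis=2)
    on_front = dists.min(axis=1) <= atol
    return 1.0 - on_front.mean()

def accept(found: np.ndarray, front: np.ndarray, mismatch_tol: float = 0.1) -> bool:
    """Accept if at most `mismatch_tol` of the found points miss the front."""
    return frac_off_front(found, front) <= mismatch_tol

# Toy data: three points on the front, one stray dominated point.
front = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
found = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0], [0.9, 0.9]])
```

With the toy data, one of four points is off the front (25%), so the set is accepted at `mismatch_tol=0.3` but rejected at `mismatch_tol=0.1`.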
…ntime of NEHVI in testing
`SLSQP`, `TrustConstr`, `GPEI`, and `NEHVI` are passing; `UNSGA3` is still open.
All tests are passing
Locally all tests are passing (11 min).
Other than that, everything should be okay for now and ready for merge 🎉
```python
# print(
#     f"Iteration: Best in iteration {new_value:.3f}, "
#     f"Best so far: {self._data.df['mean'].min():.3f}"
# )
```
If we don't need this anymore, please remove.
Great news, @flo-schu! Thanks for all your efforts! I will try to run some problems locally to test performance on some "real" problems. Really excited! Other than some minor remarks in the code base (see review comments), please make sure to clean up the git history a bit. I can also help you with that if you want.
TODOs have been transferred to https://github.com/fau-advanced-separations/CADET-Process/pull/34/files
@schmoelder: Everything looks good. Tests are passing. IMO we can merge 🚀
Implements the developer API of Ax (https://ax.dev/tutorials/gpei_hartmann_developer.html) for CADET-Process.
New features:
- termination criteria (nice to have)
- multi-objective (can be singular in the case of minimization)
- hyperparameters: forum post on exploration vs. exploitation; what are good parameters for different algorithms?

Improvements: …

Infra: …