Remove custom optimizer_argparse case for qKG #2997
Conversation
This pull request was exported from Phabricator. Differential Revision: D65227420
Summary:

Context: It appears that using `qKnowledgeGradient` with MBM doesn't work, since [this line](https://github.com/facebook/Ax/blob/535af4edff70cbf20a49c676377f5c8945560d03/ax/models/torch/botorch_modular/acquisition.py#L339) passes the argument `optimizer` to `_argparse_kg`, which errors [here](https://github.com/facebook/Ax/blob/535af4edff70cbf20a49c676377f5c8945560d03/ax/models/torch/botorch_modular/optimizer_argparse.py#L169) because it has now received the argument "optimizer" twice.

We don't really need the `optimizer_argparse` special case for qKG anymore. It existed for two reasons:
- to construct initial conditions, which can be handled by `optimize_acqf`, and
- to ensure that the optimizer is `optimize_acqf`, because others are not supported.

This diff:
* Modifies the `optimizer_argparse` case for qKG to do nothing except update the optimizer to `optimize_acqf` and then call the base case.

Implementation notes:
* Isn't it nonintuitive to set the optimizer and then override it? Yes, a little, but the user can't choose the optimizer, so we're not overriding a user-specified choice. Also, many arguments to the `optimizer_argparse` functions get ignored. The "right" thing might be to put the choice of optimizer inside a dispatcher so that it can depend on the acquisition class, but that would be a bigger change.
* Do we really need this dispatcher anymore, if it is doing so little? Yes, third parties may wish to use it to accommodate acquisition functions that are not in Ax.
* Do the `optimizer_argparse` functions still need to support so many arguments, given that some of them seemed to just be there for constructing initial conditions? Probably not; I mean to look into that in a follow-up.

Differential Revision: D65227420
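For illustration only, here is a minimal, hypothetical sketch of how a function can end up receiving the same keyword argument twice: the caller passes `optimizer` explicitly while the same key is also present in an options dict that gets `**`-expanded. The names below are made up and are not Ax's actual code.

```python
# Hypothetical sketch of the failure mode: `optimizer` arrives both as an
# explicit keyword and inside an expanded options dict, so Python raises a
# TypeError before the function body even runs.
def _argparse_kg_sketch(acqf, optimizer, **kwargs):
    return {"optimizer": optimizer, **kwargs}


options = {"optimizer": "optimize_acqf", "num_restarts": 20}
_argparse_kg_sketch(acqf=None, optimizer="optimize_acqf", **options)
# TypeError: _argparse_kg_sketch() got multiple values for keyword argument 'optimizer'
```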
Force-pushed from 5192a2d to 48fb646
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files (Coverage Diff):

|          | main   | #2997  | +/-    |
|----------|--------|--------|--------|
| Coverage | 95.61% | 95.61% | -0.01% |
| Files    | 487    | 487    |        |
| Lines    | 49027  | 49023  | -4     |
| Hits     | 46879  | 46873  | -6     |
| Misses   | 2148   | 2150   | +2     |

☔ View full report in Codecov by Sentry.
Force-pushed from 48fb646 to 5c1cb72
Force-pushed from 5c1cb72 to 3257639
This pull request was exported from Phabricator. Differential Revision: D65227420
Summary:

Context: It appears that using `qKnowledgeGradient` with MBM doesn't work, since [this line](https://github.com/facebook/Ax/blob/535af4edff70cbf20a49c676377f5c8945560d03/ax/models/torch/botorch_modular/acquisition.py#L339) passes the argument `optimizer` to `_argparse_kg`, which errors [here](https://github.com/facebook/Ax/blob/535af4edff70cbf20a49c676377f5c8945560d03/ax/models/torch/botorch_modular/optimizer_argparse.py#L169) because it has now received the argument "optimizer" twice.

We don't really need the `optimizer_argparse` special case for qKG anymore. It existed for two reasons:
- to construct initial conditions, which can be handled by `optimize_acqf`, and
- to ensure that the optimizer is `optimize_acqf`, because others are not supported.

This diff:
* Modifies the `optimizer_argparse` case for qKG to do nothing except update the optimizer to `optimize_acqf` and then call the base case.

Implementation notes:
* Isn't it nonintuitive to set the optimizer and then override it? Yes, a little, but the user can't choose the optimizer, so we're not overriding a user-specified choice. Also, many arguments to the `optimizer_argparse` functions get ignored. The "right" thing might be to put the choice of optimizer inside a dispatcher so that it can depend on the acquisition class, but that would be a bigger change.
* Do we really need this dispatcher anymore, if it is doing so little? Maybe. Third parties may wish to use it to accommodate acquisition functions that are not in Ax. On the other hand, this dispatcher is currently not doing much of anything.
* Do the `optimizer_argparse` functions still need to support so many arguments, given that some of them seemed to just be there for constructing initial conditions? Probably not; I mean to look into that in a follow-up.

Reviewed By: saitcakmak

Differential Revision: D65227420
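As a rough sketch of the simplified shape described above, the snippet below uses `functools.singledispatch` as a stand-in for Ax's dispatcher; the function names and signatures are illustrative assumptions, not Ax's actual `optimizer_argparse` API.

```python
# Illustrative sketch only: singledispatch stands in for Ax's Dispatcher, and
# the signatures are assumptions rather than the real optimizer_argparse API.
from functools import singledispatch

from botorch.acquisition import AcquisitionFunction
from botorch.acquisition.knowledge_gradient import qKnowledgeGradient


@singledispatch
def optimizer_argparse_sketch(
    acqf: AcquisitionFunction, *, optimizer: str = "optimize_acqf", **kwargs
) -> dict:
    """Base case: assemble optimizer kwargs for a generic acquisition function."""
    return {"optimizer": optimizer, **kwargs}


@optimizer_argparse_sketch.register
def _(acqf: qKnowledgeGradient, *, optimizer: str = "optimize_acqf", **kwargs) -> dict:
    # qKG only supports optimize_acqf, so ignore whatever optimizer was passed,
    # force optimize_acqf, and delegate everything else to the base case rather
    # than duplicating its argument handling.
    return optimizer_argparse_sketch.dispatch(object)(
        acqf, optimizer="optimize_acqf", **kwargs
    )
```

Under this shape, any `optimizer` value the caller supplies for qKG is simply discarded in favor of `optimize_acqf`, rather than colliding with a second `optimizer` argument.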
This pull request has been merged in 56c91ea.