Remove custom optimizer_argparse case for qKG #2997

Closed
wants to merge 1 commit

Conversation

esantorella
Contributor

Summary:
Context: It appears that using `qKnowledgeGradient` with MBM doesn't work, since [this line](https://github.com/facebook/Ax/blob/535af4edff70cbf20a49c676377f5c8945560d03/ax/models/torch/botorch_modular/acquisition.py#L339) passes the argument `optimizer` to `_argparse_kg`, which errors [here](https://github.com/facebook/Ax/blob/535af4edff70cbf20a49c676377f5c8945560d03/ax/models/torch/botorch_modular/optimizer_argparse.py#L169) because it has now received the argument "optimizer" twice.
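To make the failure mode concrete, here is a minimal sketch with simplified, hypothetical signatures (not the actual Ax code): the qKG handler forces `optimizer="optimize_acqf"` while also forwarding the caller's keyword arguments, so if the caller already supplied `optimizer`, the downstream call receives that keyword twice.

```python
# Hypothetical, simplified stand-ins -- not the actual Ax signatures.
def _argparse_base(acqf, *, optimizer="optimize_acqf", **kwargs):
    return {"optimizer": optimizer, **kwargs}


def _argparse_kg(acqf, **kwargs):
    # Forces optimizer="optimize_acqf" while also forwarding **kwargs; if the
    # caller already supplied "optimizer", the base case receives it twice.
    return _argparse_base(acqf, optimizer="optimize_acqf", **kwargs)


try:
    _argparse_kg(acqf=None, optimizer="optimize_acqf")
except TypeError as e:
    print(e)  # got multiple values for keyword argument 'optimizer'
```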

We don't really need the `optimizer_argparse` special case for qKG anymore. This existed for two reasons:

  • in order to construct initial conditions, which can be handled by `optimize_acqf`, and
  • to ensure that the optimizer is `optimize_acqf`, because others are not supported

This diff:

  • Modifies the `optimizer_argparse` case for qKG to do nothing except update the optimizer to `optimize_acqf` and then call the base case (see the sketch below)
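
To illustrate the shape of the change, here is a minimal sketch using `functools.singledispatch` as a stand-in for the actual dispatcher; the class and function names are simplified and hypothetical, not Ax's real implementation:

```python
from functools import singledispatch


class AcquisitionFunction:  # stand-in for the BoTorch base class
    pass


class qKnowledgeGradient(AcquisitionFunction):  # stand-in for qKG
    pass


# Base case: assemble options for the requested optimizer.
def _argparse_base(acqf, *, optimizer="optimize_acqf", **kwargs):
    return {"optimizer": optimizer, **kwargs}


optimizer_argparse = singledispatch(_argparse_base)


@optimizer_argparse.register
def _argparse_kg(acqf: qKnowledgeGradient, **kwargs):
    # qKG only works with optimize_acqf: drop whatever optimizer the caller
    # supplied and defer everything else to the base case.
    kwargs.pop("optimizer", None)
    return _argparse_base(acqf, optimizer="optimize_acqf", **kwargs)
```

With this shape, calling `optimizer_argparse(qKnowledgeGradient(), optimizer="optimize_acqf")` no longer raises; it simply returns options with `optimizer` set to `optimize_acqf`.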

Implementation notes:

  • Isn't it nonintuitive to set the optimizer then override it? Yes, a little, but the user can't choose the optimizer, so we're not overriding a user-specified choice. Also, lots of arguments to the `optimizer_argparse` functions get ignored. The "right" thing might be to put the choice of optimizer inside a dispatcher so that it can depend on the acquisition class, but that would be a bigger change.
  • Do we really need this dispatcher anymore, if it is doing so little? Yes, third parties may wish to use it to accommodate acquisition functions that are not in Ax (a hypothetical registration of that kind is sketched after this list).
  • Do the `optimizer_argparse` functions still need to support so many arguments, given that some of them seemed to just be there for constructing initial conditions? Probably not; I mean to look into that in a follow-up.
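
Continuing the hypothetical sketch above, the kind of third-party registration that motivates keeping the dispatcher might look like this (the custom class and its default option are made up for illustration):

```python
class MyCustomAcqf(AcquisitionFunction):  # hypothetical third-party acquisition function
    pass


@optimizer_argparse.register
def _argparse_custom(acqf: MyCustomAcqf, **kwargs):
    # A third party can attach its own optimizer defaults without changing Ax.
    kwargs.setdefault("num_restarts", 8)
    return _argparse_base(acqf, **kwargs)
```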

Differential Revision: D65227420

@facebook-github-bot facebook-github-bot added the CLA Signed and fb-exported labels Oct 30, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D65227420

@esantorella esantorella self-assigned this Oct 30, 2024
@esantorella esantorella linked an issue Oct 30, 2024 that may be closed by this pull request
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 30, 2024
@codecov-commenter

codecov-commenter commented Oct 30, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.61%. Comparing base (535af4e) to head (5c1cb72).

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2997      +/-   ##
==========================================
- Coverage   95.61%   95.61%   -0.01%     
==========================================
  Files         487      487              
  Lines       49027    49023       -4     
==========================================
- Hits        46879    46873       -6     
- Misses       2148     2150       +2     


esantorella added a commit to esantorella/Ax that referenced this pull request Oct 30, 2024
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 30, 2024
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 30, 2024
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 31, 2024
esantorella added a commit to esantorella/Ax that referenced this pull request Oct 31, 2024
@facebook-github-bot
Contributor

This pull request has been merged in 56c91ea.

Labels
CLA Signed, fb-exported, Merged
Development

Successfully merging this pull request may close these issues.

[Bug]: Using botorch_acqf_class=qKnowledgeGradient with botorch modular
3 participants