Move outcome_names from BenchmarkRunner to BenchmarkTestFunction #3021

Open · wants to merge 1 commit into main
Conversation

esantorella (Contributor) commented:
Summary:

**Context**: This will enable constructing the `BenchmarkRunner` based on the `BenchmarkProblem` and `BenchmarkMethod` rather than asking the user to provide it. In addition to making things simpler (it's weird that a runner is part of a problem!), that will enable the Runner to be aware of aspects of the method, such as parallelism.

This will also enable us to return metrics in a dict format (`{outcome_name: value}`) if we choose to do so in the future. That may be simpler, since the data already gets processed into dicts by the runner.
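For illustration, a hypothetical sketch of that dict format; the flow and names here are assumptions for illustration, not the current Ax API:

```python
import torch

# Stand-ins for illustration only; not the actual Ax objects.
outcome_names = ["objective", "constraint_slack_0"]
values = torch.tensor([0.12, 3.4], dtype=torch.double)  # what a test function returns today

# Today (roughly): the runner zips the returned tensor with outcome_names.
records = dict(zip(outcome_names, values.tolist()))

# The possible future format mentioned above would return this dict directly
# from evaluation, skipping the zipping step.
assert records == {"objective": 0.12, "constraint_slack_0": 3.4}
```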

Note that for problems based on BoTorch test problems, outcome names are usually already set programmatically, so that logic moves to the test function.

**This diff**:

* Requires `outcome_names` on `BenchmarkTestFunction`
* Removes `outcome_names` as an argument from `BenchmarkRunner`
* Sets outcome names automatically on `BoTorchTestFunction` when they are not provided, following the convention used elsewhere (a sketch follows this list).
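A minimal sketch of the shape of this change, assuming dataclass-style definitions; the exact signatures in Ax may differ, and the naming convention shown is illustrative only:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any

import torch
from botorch.test_functions.base import BaseTestProblem


@dataclass(kw_only=True)
class BenchmarkTestFunction(ABC):
    # outcome_names is now a required field on the test function rather than
    # an argument to BenchmarkRunner.
    outcome_names: list[str]

    @abstractmethod
    def evaluate_true(self, params: dict[str, Any]) -> torch.Tensor:
        ...


@dataclass(kw_only=True)
class BoTorchTestFunction(BenchmarkTestFunction):
    # Wraps a BoTorch test problem and fills in outcome names when the caller
    # does not provide them.
    botorch_problem: BaseTestProblem
    outcome_names: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        if not self.outcome_names:
            # Illustrative convention only: one objective plus one name per
            # constraint. The real code follows the convention Ax uses elsewhere.
            n_constraints = getattr(self.botorch_problem, "num_constraints", 0)
            self.outcome_names = ["objective"] + [
                f"constraint_slack_{i}" for i in range(n_constraints)
            ]

    def evaluate_true(self, params: dict[str, Any]) -> torch.Tensor:
        x = torch.tensor([list(params.values())], dtype=torch.double)
        return self.botorch_problem.evaluate_true(x)
```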

Update usages:

* Remove `outcome_names` from calls to `BenchmarkRunner`
* Add `outcome_names` to calls to `BenchmarkTestFunction`, where needed; they are generally already present on surrogate test functions and can be constructed automatically for BoTorch-based problems (see the before/after sketch below).
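A before/after sketch of a typical call site, using the classes sketched above and assuming a `BenchmarkRunner(test_function=...)` constructor; argument names are illustrative, not necessarily the exact Ax signatures:

```python
from botorch.test_functions.synthetic import Branin

# Before (no longer supported): outcome names were passed to the runner.
# runner = BenchmarkRunner(
#     test_function=BoTorchTestFunction(botorch_problem=Branin()),
#     outcome_names=["branin"],
# )

# After: the test function owns its outcome names; the runner takes none.
test_function = BoTorchTestFunction(
    botorch_problem=Branin(),
    outcome_names=["branin"],  # optional: auto-generated for BoTorch problems if omitted
)
runner = BenchmarkRunner(test_function=test_function)
```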

Differential Revision: D65497700

facebook-github-bot added the CLA Signed label on Nov 5, 2024
facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D65497700

codecov-commenter commented:

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 95.60%. Comparing base (93c236e) to head (aef50f0).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #3021   +/-   ##
=======================================
  Coverage   95.60%   95.60%           
=======================================
  Files         483      483           
  Lines       49012    49024   +12     
=======================================
+ Hits        46859    46871   +12     
  Misses       2153     2153           


esantorella added a commit to esantorella/Ax that referenced this pull request on Nov 5, 2024: Move outcome_names from BenchmarkRunner to BenchmarkTestFunction (facebook#3021)

esantorella added a commit to esantorella/Ax that referenced this pull request on Nov 5, 2024: Move outcome_names from BenchmarkRunner to BenchmarkTestFunction (facebook#3021)
