
add hpobench wrapper #993

Open
wants to merge 5 commits into base: develop
Conversation

donglinjy (Collaborator)

Description

Add an Orion benchmark task wrapper over HPOBench.

Tests

  • I added corresponding tests for bug fixes and new features. If possible, the tests fail without the changes
  • All new and existing tests are passing ($ tox -e py38; replace 38 by your Python version if necessary)

Documentation

  • I have updated the relevant documentation related to my changes

Quality

  • I have read the CONTRIBUTING doc
  • My commit messages follow this format
  • My code follows the style guidelines ($ tox -e lint)

@donglinjy (Collaborator Author) commented Sep 21, 2022

This could be a potential way to reuse the singularity container in HPOBench: create the container when the Orion benchmark task is created and save its socket id, then recreate the client with this socket_id when the task runs.

But this may not work in a multiple-worker scenario, as we could create the Orion benchmark task on the local client node and run the task on a different node.
https://github.com/automl/HPOBench/blob/d8b45b1eca9a61c63fe79cdfbe509f77d3f5c779/hpobench/container/client_abstract_benchmark.py#L106
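Conceptually, the reuse pattern described above might look like the sketch below. `ContainerBenchmark` and its `socket_id` keyword are stand-ins for HPOBench's container client API, not the real classes:

```python
# Hypothetical sketch of the reuse pattern: create the container once at
# task-creation time, record its socket id, then re-attach by passing the
# same id when the task later runs. `ContainerBenchmark` is a stand-in
# for HPOBench's container client, not the real class.
class ContainerBenchmark:
    def __init__(self, socket_id=None):
        # A real client would launch a singularity container when
        # socket_id is None, or re-attach to a running one otherwise.
        self.socket_id = socket_id or "new-container-socket"


def create_task():
    bench = ContainerBenchmark()           # container started here
    return {"socket_id": bench.socket_id}  # saved in the task config


def run_task(config):
    # Re-attach instead of launching a second container.
    return ContainerBenchmark(socket_id=config["socket_id"])


config = create_task()
bench = run_task(config)  # reuses the container created above
```

As noted, this breaks down if `run_task` executes on a different node, since the saved socket id only identifies a container on the node that created it.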

Local benchmarks can require totally different extra dependencies. With ``pip install orion[hpobench]``,
you will be able to run the local benchmark ``benchmarks.ml.tabular_benchmark``.
If you want to run other benchmarks locally, refer to HPOBench's `Run a Benchmark Locally`_. To run
containerized benchmarks, you will need to install `singularity`_.
Member

Maybe there should be a quick word on how to use it with singularity once singularity is installed. Even if the usage is exactly the same, it should be mentioned.

@@ -185,6 +187,13 @@ def _from_categorical(dim: CategoricalHyperparameter) -> Categorical:
return Categorical(dim.name, choices)


@to_oriondim.register(OrdinalHyperparameter)
def _from_ordinal(dim: OrdinalHyperparameter) -> Categorical:
Member

Is ordinal best mapped to categorical or integer? Categorical loses the importance of the ordering.
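One alternative, sketched below in plain Python (no Orion or ConfigSpace imports, and the helper names are hypothetical), is to map an ordinal's ordered sequence onto integer indices so that an integer dimension can preserve the order a plain categorical would lose:

```python
# Hypothetical sketch: map an ordinal hyperparameter's ordered sequence
# onto integer indices, so that an integer dimension preserves the
# ordering that an unordered categorical would discard.
def ordinal_bounds(sequence):
    """Integer bounds (low, high) covering every position in the sequence."""
    return 0, len(sequence) - 1


def decode(sequence, index):
    """Recover the original ordinal value from a sampled index."""
    return sequence[index]


sizes = ["small", "medium", "large"]  # an ordered sequence
low, high = ordinal_bounds(sizes)     # integer search bounds
value = decode(sizes, 1)              # back to the ordinal value
```

With this encoding, "small" < "medium" < "large" survives as 0 < 1 < 2, so algorithms that exploit ordering can use it.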

Collaborator Author

def __init__(
self,
max_trials: int,
hpo_benchmark_class: Union[str, None] = None,
Member

Should it support None as default? Or maybe '' by default to simplify typing.

Collaborator Author

wow, I forgot what I was thinking ^^

Comment on lines 39 to 40
benchmark_kwargs: dict = dict(),
objective_function_kwargs: dict = dict(),
Member

Having mutable objects as defaults is a bit risky.
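The pitfall being pointed at: a default like `dict()` is evaluated once at function definition, so every call shares the same object. A minimal illustration, with the usual `None`-sentinel fix:

```python
def risky(kwargs=dict()):
    # The default dict is created once; every call mutates the same object.
    kwargs.setdefault("calls", 0)
    kwargs["calls"] += 1
    return kwargs


def safe(kwargs=None):
    # A fresh dict is created on each call instead.
    kwargs = {} if kwargs is None else kwargs
    kwargs.setdefault("calls", 0)
    kwargs["calls"] += 1
    return kwargs
```

Two bare calls to `risky()` return the very same dict with `calls == 2`, while `safe()` starts from a clean dict every time.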

self.objective_function_kwargs = objective_function_kwargs

def call(self, **kwargs) -> List[Dict]:
hpo_benchmark = self.hpo_benchmark_cls(**self.benchmark_kwargs)
Member

@donglinjy Did you figure out why the singularity container is destroyed at the end of the subprocess call? If we could avoid this then we could only pay the price of building it during task instantiation.

Collaborator Author

I think this is caused by how we now run the trials: if we create the containerized benchmark during task init, we serialize the task instance and then deserialize it as a new task instance to run. After the run completes, this new instance is destroyed, which causes the singularity container to shut down too. I mentioned another possible solution at #993 (comment), but it would require some additional changes to work in the remote-workers scenario.
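The lifecycle issue described above can be reproduced in miniature: pickling a task and running the deserialized copy means the copy's destructor fires when the run finishes, which is where a container shutdown hook would trigger. A toy sketch (the `Task` class and its shutdown log are hypothetical, standing in for a task that owns a container):

```python
import pickle

shutdown_log = []


class Task:
    """Toy stand-in for a benchmark task that owns a container."""

    def __init__(self):
        self.container = "singularity-container"

    def __del__(self):
        # A real task would shut its container down here.
        shutdown_log.append(self.container)


task = Task()
copy = pickle.loads(pickle.dumps(task))  # the worker's deserialized task
del copy  # run finished: the copy is destroyed, tearing the container down
```

After `del copy`, the shutdown hook has run once even though the original `task` still exists, mirroring how the container dies at the end of the worker's run.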

from orion.algo.space import Space
from orion.benchmark.task import Branin, CarromTable, EggHolder, HPOBench, RosenBrock

try:
Member

Not really necessary, but you could use the class ImportOptional like here and here.

@bouthilx (Member) commented Sep 26, 2022

And HPOBench will likely require additional tests for singularity, and so would best be in a separate test module.

@donglinjy (Collaborator Author) commented Oct 9, 2022

Sure, let me see if I can change it to something similar to ImportOptional. I actually saw the ImportOptional code and was just trying to keep things simple in the test cases, as with profet and configspace.

I have not figured out an easy way to install singularity in our test env; maybe I can experiment with it a bit.


def test_create_with_container_benchmark(self):
"""Test to create HPOBench container benchmark"""
with pytest.raises(AttributeError) as ex:
Member

With message 'Can not run conterized benchmark without Singularity'?

code, message = subprocess.getstatusoutput("singularity -h")
if code != 0:
raise AttributeError(
"Can not run conterized benchmark without Singularity: {}".format(
Member

Suggested change
"Can not run conterized benchmark without Singularity: {}".format(
"Can not run containerized benchmark without Singularity: {}".format(

)
class TestHPOBench:
"""Test benchmark task HPOBenchWrapper"""

Member

If we would like to test invalid benchmark names, we would need to support singularity in the test suite so that _verify_benchmark does not fail and the container fails during call, am I wrong?

Collaborator Author

This is true if we have singularity installed. But we can also test invalid benchmark names without singularity, because an invalid name will cause benchmark creation to fail before it reaches the point of using singularity.

@donglinjy (Collaborator Author)

@bouthilx I already added the work-around for the runner test case to succeed in 3afad9a. But the build will fail in the recently added python 3.10 test, since HPOBench only supports running up to python version 3.10.0. Any suggestions?

ERROR: Package 'hpobench' requires a different Python: 3.10.8 not in '<=3.10,>=3.6'
