Connect experimental/alp autotuning with harness' tuning hooks #109
Comments
Hi @shabalind , I don't think #93, #94, #95 are blockers, because those are only needed to make our lives easier when analyzing performance (assembly, intermediate IRs, etc.). OpenTuner is agnostic of how we collect performance. Basically, the loop is:
We compile, run, and feed the result back to the framework. Compile and run can be obtained in any way, as long as we are able to "extract" flops or time from the execution to implement the feedback loop. If you could show me how to do that with the current Python harness, then I can come up with a very simple autotuner example. Then it's mostly about how to structure the folder within the repo. Thanks,
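A minimal sketch of that compile/run/feedback loop in OpenTuner terms. Only the OpenTuner side (`MeasurementInterface`, `ConfigurationManipulator`, `Result`) is the library's real API; `compile_and_run` and the parameter names are placeholders for whatever hook the sandbox harness ends up exposing.

```python
import opentuner
from opentuner import ConfigurationManipulator, IntegerParameter
from opentuner import MeasurementInterface, Result


def compile_and_run(cfg):
    # Placeholder: compile the kernel with the chosen parameters, run it,
    # and return the measured execution time (or derive GFLOP/s from it).
    # This is exactly the hook into the harness this issue is asking for.
    raise NotImplementedError("hook into the sandbox harness here")


class MatmulTuner(MeasurementInterface):
    def manipulator(self):
        # The search space: each tunable knob becomes an OpenTuner parameter
        # with an explicit domain of valid values (names here are made up).
        manipulator = ConfigurationManipulator()
        manipulator.add_parameter(IntegerParameter('tile_size', 1, 128))
        manipulator.add_parameter(IntegerParameter('unroll_factor', 1, 8))
        return manipulator

    def run(self, desired_result, input, limit):
        # Compile and run one candidate configuration, then feed the
        # measurement back to the search framework.
        cfg = desired_result.configuration.data
        elapsed = compile_and_run(cfg)
        return Result(time=elapsed)


if __name__ == '__main__':
    argparser = opentuner.default_argparser()
    MatmulTuner.main(argparser.parse_args())
```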
@giuseros If you look at examples/matmul/bench.py you'll see a few key things that you need to build an equivalent of your snippet.
The configuration is defined by an expert, which is a composition of a number of transformations (see experts.py). Each transformation can be parametrized by variables (see transforms.py). Variables are roughly equivalent to OpenTuner parameters: they are typed and have a precisely defined domain of all valid values. The harness accepts an instantiation of an expert to some concrete values and runs it on a given problem definition (i.e. matmul vs conv vs reduction vs ...), the problem sizes, and information on whether sizes should be passed statically or dynamically. Once the harness can be invoked programmatically to obtain performance numbers, it should be easy to make an equivalent of the code snippet you provided above.
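A rough sketch of the variable-to-parameter mapping described above. The `variables` dict shape is a made-up stand-in for the harness's real variable objects; only the OpenTuner side (`ConfigurationManipulator`, `EnumParameter`) is the library's actual API.

```python
from opentuner import ConfigurationManipulator, EnumParameter


def variables_to_manipulator(variables):
    # Turn each typed tuning variable (name -> finite domain of valid values)
    # into an OpenTuner parameter over the same domain, so an expert's knobs
    # become the search space.
    manipulator = ConfigurationManipulator()
    for name, domain in variables.items():
        manipulator.add_parameter(EnumParameter(name, list(domain)))
    return manipulator


# Example usage with an invented search space for a matmul expert:
# manipulator = variables_to_manipulator({
#     'tile_sizes': [(8, 8, 8), (16, 16, 16), (32, 32, 32)],
#     'vectorize': [True, False],
# })
```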
* Make harness return performance results
As discussed in #109 there is an opportunity to connect OpenTuner to tune parameters for existing experts. This change makes test_harness return the performance result as a nested dictionary that contains the same information as what's currently printed to stdout. As a result, it can be invoked programmatically as part of a tuning loop.
* Address feedback
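Assuming test_harness can now be called from Python and returns its measurements as a nested dictionary, a tuner's figure of merit could be extracted along these lines. The key names and nesting below are illustrative guesses, not the actual dictionary layout, and `run_harness` stands in for a programmatic call to test_harness.

```python
def figure_of_merit(run_harness, expert, sizes):
    """Run one candidate configuration and return a scalar to maximize."""
    results = run_harness(expert, sizes)  # nested dict from test_harness
    # Hypothetical keys, mirroring what the harness prints to stdout today.
    return results['gflop_per_s']['median']
```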
Hi @shabalind , thanks!
@giuseros Great! Let me know if there is anything else that blocks your progress -- I'd be happy to help push this forward.
Earlier, we experimented with simple randomised sampling of the harness's tuning variables (see variables.py and their use in transforms.py), but eventually we decided against rolling our own tuning framework in the sandbox, and that support was removed in #76.
The longer-term idea was to use an externally provided search framework such as OpenTuner, which is now used in experimental/alp. Both rely on the same concept of tuning variables with fixed domains, so there seems to be an opportunity for a common codebase.
This issue is meant to be a discussion ground for whether that's a good idea, and what the incremental steps toward that goal would be.