LinuxPerf, @profile, and other experiments #377
Some discussion from Slack:

> @gbaraldi I remember you being interested in this in the past.
> I think a few questions I have remaining are:

@vchuravy says:

> It's starting to seem to me that BenchmarkTools really ought to define separate "samplers" which can measure different metrics using different tools and experiment loops, and provide infrastructure to run different samplers across suites of benchmarks.

@vchuravy I think we should probably move forward with a short-term, straightforward LinuxPerf PR like #375 (assuming we can get a few reviews on it). We would mark the feature as experimental so we can make breaking changes to it. Later, we can work towards a BenchmarkTools interface that allows for more ergonomic custom benchmarking extensions (with
This issue is to document the various PRs surrounding LinuxPerf and other extensible benchmarking in BenchmarkTools. I've seen many great approaches, with various differences in semantics and interfaces. It seems that #375 profiles each eval loop (toggling on and off with a boolean), #347 is a generalized version of the same (it is unclear whether it can handle more than one extension at a time, such as profiling and perfing simultaneously), and #325 only perfs a single execution.

I recognize that different experiments require different setups. A sampling profiler requires a warmup and a minimum runtime, but probably doesn't need fancy tuning. A wall-clock time benchmark requires a warmup and a fancy eval loop where the number of evaluations is tuned, and maybe a GC scrub beforehand. What does LinuxPerf actually need? Are there any other experiments we also want to run (beyond LinuxPerf)? Do we need metaprogramming to inline the LinuxPerf calls, or are function calls wrapping the `samplefunc` sufficient here?
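To make the "separate samplers" idea a bit more concrete, here is a minimal sketch of what a shared sampler interface could look like. All names here (`Sampler`, `WallClockSampler`, `PerfSampler`, `measure`) are hypothetical illustrations, not an existing BenchmarkTools or LinuxPerf.jl API; the point is only that each experiment owns its measurement loop behind a common function-call boundary, with no metaprogramming required:

```julia
# Hypothetical sketch: each experiment is a "sampler" that knows how to
# run its own measurement loop over a zero-argument benchmark closure.
abstract type Sampler end

struct WallClockSampler <: Sampler
    evals::Int   # evaluations per sample; a real version would tune this
end

struct PerfSampler <: Sampler end  # stand-in for a LinuxPerf-backed sampler

# A wall-clock sampler runs a tuned eval loop and reports mean ns/eval.
function measure(s::WallClockSampler, f)
    t0 = time_ns()
    for _ in 1:s.evals
        f()
    end
    (time_ns() - t0) / s.evals
end

# A perf sampler would instead wrap the call with counter start/stop
# (e.g. via LinuxPerf); here we just run the closure once as a placeholder.
function measure(::PerfSampler, f)
    f()
    return nothing
end
```

Since each sampler is dispatched through an ordinary function call wrapping the benchmark closure, this is one answer to the metaprogramming question above: as long as the closure itself is already specialized (as BenchmarkTools' generated `samplefunc` is), the wrapper can be a plain function, at the cost of whatever per-sample call overhead the measurement tool cannot tolerate.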
@vchuravy @DilumAluthge @topolarity @Zentrik