lib: add afterEach hook option to each benchmark #35
Conversation
I'm still evaluating how useful it would be, since it's an additional instruction that runs after each function call and, of course, affects the benchmark result. I believe this can simply be replaced by userland code adding those checks.
afterEach is necessary for benchmarking code that can be negatively impacted by GC pressure that wouldn't normally occur, but does in the context of a tight benchmark loop. This usually isn't an issue for small micro-benchmarks, but it's quite common for larger, integration-style benchmarks. However, as currently implemented the hook isn't useful, because it affects the timing. Instead, each iteration should be timed individually, with the timing taken before the afterEach hook runs. You could keep the aggregate timing as-is for cases without afterEach, but I'm not sure what's gained from that. Individual timings also allow extracting percentiles.
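The per-iteration timing idea above can be sketched roughly as follows. This is a minimal illustration, not bench-node's actual implementation; `benchFn`, `afterEach`, and `runIterations` are hypothetical stand-ins:

```javascript
// Time each iteration individually, then run the afterEach hook
// *outside* the measured window so it cannot skew the result.
function runIterations(benchFn, afterEach, iterations) {
  const samples = [];
  for (let i = 0; i < iterations; i++) {
    const start = process.hrtime.bigint();
    benchFn();
    const end = process.hrtime.bigint();
    samples.push(Number(end - start)); // nanoseconds for this call only
    if (afterEach) afterEach();        // e.g. gc(); excluded from the timing
  }
  samples.sort((a, b) => a - b);
  // Having individual samples makes percentiles trivial to extract.
  const p = (q) =>
    samples[Math.min(samples.length - 1, Math.floor(q * samples.length))];
  return { p50: p(0.5), p99: p(0.99), samples };
}
```

With aggregate timing, only a mean is recoverable; with individual samples, p50/p99 and outlier analysis come for free.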
So, your suggestion would be:
I assume the first one, right? I'd need to analyse how it would behave, since making individual measurements would be more affected by external code.
I think we should recommend that people use the managed timer, so they can add before/after hooks without affecting the performance of the benchmark.
Yes; however, deoptimizing the setup/cleanup might not be their intention: https://github.com/RafaelGSS/bench-node?tab=readme-ov-file#setup-and-teardown
@RafaelGSS correct, the first one. Having batched samples is pretty easy to do in the user benchmark by wrapping it in a loop, or is possible with a plugin that does the loop.
```js
suite.add('my benchmark with gc cleanup', (timer) => {
  timer.start();
  myBenchFunction();
  timer.end(1);
  gc();
});
```

or

```js
suite.add('my benchmark with gc cleanup', (timer) => {
  for (let i = 0; i < timer.count; i++) {
    timer.start();
    myBenchFunction();
    timer.end(1);
    gc();
  }
});
```

Wouldn't that solve your problem?
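One caveat about the `gc()` calls in the snippets above: in Node.js the global `gc` function is only defined when the process is started with `--expose-gc`. A small guard like this (a generic sketch, not part of the bench-node API) keeps the benchmark runnable either way:

```javascript
// gc() is only defined when Node is run with --expose-gc.
// Fall back to a no-op so the benchmark still runs without the flag.
const maybeGc = typeof globalThis.gc === 'function'
  ? globalThis.gc
  : () => {};

maybeGc(); // forces a collection when available, otherwise does nothing
```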
Fixes: #31