Unrealistic values of same tests after repeated executions #306
Can you show the full result with …?
I repeatedly ran the following test:

and here are the results for one of the executions: there are significant differences in the time estimates for …

So the only thing that comes to mind is that …

I wonder what influence this could have on the time estimates. Is there any way to ask for a specific number of samples? In some runs there are 10000 executions, in others only about 1500. What is the logic behind this choice? For a smaller number of samples, the measured execution times are usually larger!
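(A minimal sketch of this kind of repeated run; the benchmarked expression is only a placeholder, since the original test is not shown above:)

```julia
using BenchmarkTools, Statistics

x = rand(1000)

# Run the same benchmark several times in one session and compare
# how many samples were collected and the median time per sample.
for i in 1:3
    t = @benchmark sum($x)
    println("run $i: ", length(t.times), " samples, median ",
            median(t.times), " ns")
end
```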
BenchmarkTools has a time limit of about 5 s and an upper bound on the number of samples it will take; sampling stops at whichever limit is reached first. See https://juliaci.github.io/BenchmarkTools.jl/stable/manual/#Benchmark-Parameters
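Both limits are exposed as benchmark parameters and can be overridden per call or globally; a minimal sketch (the `sum` call is just a placeholder, not the original test):

```julia
using BenchmarkTools

x = rand(1000)

# Ask for up to 10_000 samples within a 20 s time budget; sampling
# stops at whichever limit is hit first.
@benchmark sum($x) samples=10_000 seconds=20

# The same knobs can also be changed globally for the session:
BenchmarkTools.DEFAULT_PARAMETERS.samples = 10_000
BenchmarkTools.DEFAULT_PARAMETERS.seconds = 20.0
```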
I think this is related to mutation, so it should be fixed if you provide a setup for each benchmark and set …
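(The parameter name is cut off above; a plausible reading is `evals=1`, as in this sketch, where `sort!` is only an illustrative mutating function, not the original test:)

```julia
using BenchmarkTools

# If the benchmarked function mutates its input, later evaluations
# within a sample time a different problem (e.g. sorting an
# already-sorted vector). A fresh input per sample via `setup`,
# combined with `evals=1`, keeps every measurement comparable.
@benchmark sort!(v) setup=(v = rand(1000)) evals=1
```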
At least once we solve #24
Can you check whether this still happens on the master branch? Our latest PR #318 should have fixed this
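(A sketch of one way to try the unreleased master branch, assuming a standard Pkg setup:)

```julia
using Pkg
Pkg.add(name="BenchmarkTools", rev="master")
# or, at the REPL's pkg> prompt:  add BenchmarkTools#master
```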
The following simple tests behave differently in a second execution.
These are the results following a fresh start of Julia (other executions produce different numbers):

Some values show a more-than-10-fold slowdown. Is there any explanation for this behaviour?