Replies: 2 comments
You should not need to set the number of invocations or iterations unless you have a good understanding of what they mean: https://benchmarkdotnet.org/articles/guides/how-it-works.html
https://github.com/dotnet/performance/blob/main/docs/microbenchmark-design-guidelines.md#loops
Just create a new collection inside the benchmark, add all the items, and return the collection. If you want the results to be scaled, you can use https://github.com/dotnet/performance/blob/main/docs/microbenchmark-design-guidelines.md#operationsperinvoke You can also take a look at the collection benchmarks used by the .NET team: https://github.com/dotnet/performance/tree/main/src/benchmarks/micro/libraries/System.Collections
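The pattern from the guidelines can be sketched roughly like this (the class and member names are illustrative, not from the original discussion):

```csharp
// Minimal sketch, assuming a dictionary-population benchmark: build a
// fresh collection inside the benchmark and return it, so no stale state
// survives between warm-up, pilot, and measurement runs, and the JIT
// cannot eliminate the work.
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;

public class AddBenchmarks
{
    private const int N = 1000;

    // OperationsPerInvoke scales the reported time down to per-Add cost.
    [Benchmark(OperationsPerInvoke = N)]
    public Dictionary<int, int> PopulateDictionary()
    {
        var dict = new Dictionary<int, int>();
        for (int i = 0; i < N; i++)
            dict.Add(i, i);
        return dict;
    }
}
```

Because a new dictionary is created on every invocation, the duplicate-key problem described in the question cannot occur, and the allocation cost is amortized across the N adds that `OperationsPerInvoke` accounts for.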
So I incorrectly assumed that
[GlobalSetup]
was run once before each benchmark. While that is partially true, the benchmark method itself is invoked multiple times before it actually gets run for real, which means the state is all messed up. For example, I have a simple test checking a built-in .NET type against a custom type and comparing the results: the tests never run, because the benchmark blows up saying the key has already been added. If I move to an
[IterationSetup]
approach that works, but as warned in the docs, these tests complete in under 1 ms, so the iteration setup would skew the results. I am fine with refactoring the benchmark into another format, but I don't want to new up and populate the objects as part of the benchmark logic, as that would skew the results with allocations and time spent. I just want to go in with an empty object for the first two benchmarks and a pre-populated object for the last four.
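The failure mode can be sketched roughly like this (the original code sample is not reproduced in this thread; all names here are illustrative):

```csharp
// Hedged reconstruction of the problem described above: [GlobalSetup]
// runs once per benchmark case, but the [Benchmark] method runs many
// times (warm-up, pilot, then measurement), so mutating shared state
// inside it fails on the second invocation.
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;

public class SharedStateBenchmarks
{
    private Dictionary<int, int> _dict;

    [GlobalSetup]
    public void Setup() => _dict = new Dictionary<int, int>();

    [Benchmark]
    public void AddKey()
    {
        // First invocation succeeds; every later invocation throws
        // ArgumentException: an item with the same key has already been added.
        _dict.Add(42, 42);
    }
}
```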
Any advice?