ORM Benchmarks #15
-
Thanks a lot for the feedback! Regarding the benchmarking itself: BenchmarkDotNet already runs each benchmark several times before it starts measuring (warmup). On top of that, I explicitly make sure that all ORMs get to build their caches independently of what BenchmarkDotNet is doing; see this: Nonetheless, I'd be more than happy to switch to raw SQL for RepoDb as well, since I'm assuming the best-case scenario for every ORM. So if you could help me with that, it would be great! On another note, I am not quite sure what RawDataAccessBencher is actually measuring, as I have never looked into it. Still, it is quite interesting to see such a huge difference between the RepoDb results and the ones from Dapper, especially for single queries. Any ideas?
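For context, a minimal sketch of what such a per-ORM warm-up in a BenchmarkDotNet `[GlobalSetup]` method can look like; the connection string, the warm-up statement, and the class layout are assumptions for illustration, not the actual Venflow benchmark code:

```csharp
// Minimal sketch (not the real benchmark): issue one un-measured query per ORM
// during [GlobalSetup] so each library builds its internal caches (compiled
// materializers, mapping metadata) before the measured iterations start.
// BenchmarkDotNet's own warmup phase then runs on already-warm caches.
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using Dapper;
using Npgsql;

public class QueryBatchAsyncBenchmark
{
    private NpgsqlConnection _connection = default!;

    [GlobalSetup]
    public async Task Setup()
    {
        // Hypothetical connection string.
        _connection = new NpgsqlConnection("Host=localhost;Database=benchmarks");
        await _connection.OpenAsync();

        // Dapper warm-up; equivalent throwaway calls would follow for
        // Venflow, RepoDB, EF Core, etc.
        await _connection.QueryFirstAsync<int>("SELECT 1");
    }

    [GlobalCleanup]
    public async Task Cleanup() => await _connection.DisposeAsync();
}
```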
-
Hey mate, nice ORM and congrats on it.
Regarding the benchmark: RepoDB requires an initial materialization (or startup), because the first compilation incurs a noticeable performance hit, and that drags the average down a lot. We also mention this in our README.
On this link: Venflow/src/Venflow/Venflow.Benchmarks/Benchmarks/QueryBenchmarks/QueryBatchAsyncBenchmark.cs, line 90 in 54d81e6
The first execution is very slow, while the second execution is fast. RepoDB has a special mechanism that caches everything based on the projected columns, so I expect it will never be slower than Dapper from the second execution onwards (since they are not doing those null checks). On top of that, we have added extra compilation for the passed parameters, which the others are not doing, and that is what brought RepoDB to the top of Frans Bouma's bencher (RawDataAccessBencher).
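To illustrate the first-vs-second execution point, here is a rough sketch with a hypothetical `Person` entity and connection string; the bootstrap call and naming depend on the RepoDb version and are assumptions, not code from either project:

```csharp
// Rough illustration only: the first QueryAllAsync<T> for a given entity and
// column projection compiles and caches the materializer, so only the second
// and later calls reflect RepoDB's steady-state performance.
using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Npgsql;
using RepoDb;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; } = string.Empty;
}

public static class RepoDbWarmupDemo
{
    public static async Task RunAsync()
    {
        // Bootstrap and connection string depend on the RepoDb version/setup in use.
        PostgreSqlBootstrap.Initialize();
        await using var connection = new NpgsqlConnection("Host=localhost;Database=benchmarks");

        var sw = Stopwatch.StartNew();
        var first = await connection.QueryAllAsync<Person>();   // compiles + caches the materializer
        Console.WriteLine($"1st run: {sw.Elapsed}");

        sw.Restart();
        var second = await connection.QueryAllAsync<Person>();  // served from the compiled cache
        Console.WriteLine($"2nd run: {sw.Elapsed}");
    }
}
```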
Lastly, RepoDB supports raw SQL execution just like Dapper, so if you care about showing performance and efficiency, you can use that for this materialization.
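For reference, a small sketch of that Dapper-style raw-SQL path, reusing the hypothetical `Person` entity from the sketch above; the table, columns, and connection string are invented for illustration:

```csharp
// Sketch of RepoDB's raw-SQL execution, analogous to Dapper's Query<T>:
// the SQL text and parameters are passed straight through, while RepoDB
// still uses its compiled materializer to map the result set.
using System.Collections.Generic;
using System.Threading.Tasks;
using Npgsql;
using RepoDb;

public static class RawSqlDemo
{
    public static async Task<IEnumerable<Person>> QueryByIdAsync(int id)
    {
        await using var connection = new NpgsqlConnection("Host=localhost;Database=benchmarks");

        // Hand-written SQL with parameters, exactly as one would do with Dapper.
        return await connection.ExecuteQueryAsync<Person>(
            "SELECT id, name FROM people WHERE id = @Id",   // hypothetical table/columns
            new { Id = id });
    }
}
```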
Looking forward to the further development of this ORM. 🚀