Benchmarks
The work of a mapper is divided into two steps: initialization and execution. While the initialization step is performed only once, the generated code is executed many times.
Of course, mappers try to generate the minimal code needed to achieve the result: the less code, the better the performance.
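The two-step split can be sketched as follows. This is a hypothetical mapper in Python, not UltraMapper's actual API: the initialization step builds a specialized mapping function once, and the execution step runs that function many times.

```python
class Source:
    def __init__(self, name, age):
        self.name = name
        self.age = age

class Target:
    def __init__(self):
        self.name = None
        self.age = None

def initialize_mapper(fields):
    """Initialization step: performed only once per type pair;
    builds a specialized mapping function for the given members."""
    def execute(source, target):
        # Execution step: the generated code, run many times.
        for f in fields:
            setattr(target, f, getattr(source, f))
        return target
    return execute

mapper = initialize_mapper(["name", "age"])  # initialization: once
t = mapper(Source("Alice", 30), Target())    # execution: repeated
```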
But is the most performant mapper the one that performs the execution step fastest? Not really.
A mapper able to handle complex scenarios will have a more complex, and thus slower, initialization step. The more features a mapper provides, the slower its initialization step will be.
In fact, the initialization step is so complex that the amount of time it takes is not negligible even when running the generated code millions of times.
So, clearly, the most performant mapper is the one that can generate and execute the minimal code needed to map in the shortest time. It does not make sense to benchmark only the execution step.
Even if benchmarking the execution time alone did make sense, it would not be fair to UltraMapper, since it generates the mappings on demand, while executing the mapping, as soon as they are needed, with no explicit initialization/registration step.
Mainly for the above reasons, the following benchmarks show the sum of initialization time and execution time.
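A minimal sketch of this measurement approach (illustrative only; the actual benchmark suite is ExpressMapper's, written in C#): the one-off initialization and the repeated executions are timed together, so mappers with a heavy setup phase cannot hide it.

```python
import time

def benchmark(init, execute, iterations=1_000_000):
    """Return the total wall-clock time of one initialization
    plus many executions, as the benchmarks below do."""
    start = time.perf_counter()
    state = init()            # one-off initialization step
    for _ in range(iterations):
        execute(state)        # repeated execution step
    return time.perf_counter() - start
```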
I did not write the benchmarks: it is way too easy to craft specific tests in order to show off as the best. These benchmarks are provided by ExpressMapper, which has already compared the most relevant mappers around. I only added UltraMapper to the list of compared mappers and included the initialization time in the bill.
Here AutoMapper and ValueInjecter (which are really slow) and TinyMapper (which can handle only unrealistically naive scenarios) have been removed to make room for the other mappers.
The purpose of these benchmarks is just to show that UltraMapper has good performance. This is verified.
UltraMapper turns out to be even faster than native code in complex scenarios (called L, XL, XXL). How is that possible? Because UltraMapper does things right.
It is not actually faster than native code, but it is smart: if an object is referenced twice, it is mapped once, cached, and the cached result is referenced twice. This is called reference tracking, and it really pays off in real-world scenarios.
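Reference tracking, as a general technique, can be sketched in a few lines of Python (illustrative only, not UltraMapper's code): each distinct source object is mapped once, cached by identity, and the cached copy is reused wherever the original was shared, so the copy preserves the source's reference scheme.

```python
def deep_map(source, tracked=None):
    """Copy a nested list structure while preserving its reference scheme:
    a source object mapped once is cached and reused on later encounters."""
    if tracked is None:
        tracked = {}                    # id(source object) -> mapped copy
    if id(source) in tracked:
        return tracked[id(source)]      # already mapped: reuse the copy
    if isinstance(source, list):
        copy = []
        tracked[id(source)] = copy      # register before recursing (handles cycles)
        copy.extend(deep_map(item, tracked) for item in source)
        return copy
    return source                       # leaf value: returned as-is

shared = [1, 2]
original = [shared, shared]   # the same list referenced twice
result = deep_map(original)   # mapped once, referenced twice in the copy
```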
In my opinion, UltraMapper and AutoMapper (with PreserveReferences enabled, which slows it down even further) are currently the only two mappers capable of doing things right: the reference scheme of the object being mapped must be preserved!