Random insert-and-erase at `begin()` is quite an interesting benchmark. I think it shows why some hash maps can be very slow while others are quite efficient.
```cpp
#include "Map.h"
#include "bench.h"
#include "hex.h"
#include "sfc64.h"

#include <algorithm>
#include <sstream>

BENCHMARK(RandomInsertEraseBegin) {
    size_t max_n = 10000;
    using M = Map<uint64_t, uint32_t>;
    for (int i = 0; i < 6; ++i) {
#ifdef USE_POOL_ALLOCATOR
        Resource<uint64_t, uint32_t> resource;
        M map{0, M::hasher{}, M::key_equal{}, &resource};
#else
        M map;
#endif
        std::stringstream ss;
        ss << (max_n / 1000000.) << " M cycles";
        sfc64 rng(999 + i);

        // benchmark randomly inserting & erasing begin()
        bench.beginMeasure(ss.str().c_str());
        for (size_t n = 0; n < max_n; ++n)
            map.emplace(rng(), 0);
        for (size_t n = 0; n < 2 * max_n; ++n) {
            map.erase(map.begin());
            map.emplace(rng(), 0);
        }
        bench.endMeasure(map.size(), map.size());
        max_n *= 3;
    }
}
```
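For anyone who wants to try the pattern without the benchmark harness, here is a minimal standalone sketch of the same `begin()`-erase churn. It substitutes `std::unordered_map` for `Map` and `std::mt19937_64` for `sfc64`, and the function name `churn_begin` is my own; the point is only the access pattern. Open-addressing maps that leave tombstones tend to degrade here, because `begin()` has to skip an ever-growing run of dead slots near the start of the table.

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <unordered_map>

// Fill a map with max_n random keys, then repeatedly erase whatever
// begin() points at and insert a fresh random key. Returns the final size.
size_t churn_begin(size_t max_n) {
    std::unordered_map<uint64_t, uint32_t> map;
    std::mt19937_64 rng(999);  // fixed seed, stand-in for sfc64

    for (size_t n = 0; n < max_n; ++n)
        map.emplace(rng(), 0);

    // The churn loop: each iteration erases the first element and
    // inserts a new random one, so the size stays roughly constant
    // while begin() is hammered 2 * max_n times.
    for (size_t n = 0; n < 2 * max_n; ++n) {
        map.erase(map.begin());
        map.emplace(rng(), 0);
    }
    return map.size();
}
```

With 64-bit random keys, duplicate insertions are astronomically unlikely, so the size stays at `max_n` throughout.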
Random insert-and-erase continuing from the next key is another benchmark, even though it is not often used by other benchmark suites.
```cpp
#include "Map.h"
#include "bench.h"
#include "hex.h"
#include "sfc64.h"

#include <algorithm>
#include <iterator>
#include <sstream>
#include <type_traits>

BENCHMARK(RandomInsertEraseContinue) {
    size_t max_n = 100000;
    using M = Map<uint32_t, uint32_t>;
    for (int i = 0; i < 4; ++i) {
#ifdef USE_POOL_ALLOCATOR
        Resource<uint32_t, uint32_t> resource;
        M map{0, M::hasher{}, M::key_equal{}, &resource};
#else
        M map;
#endif
        std::stringstream ss;
        ss << (max_n / 1000000.) << " M cycles";
        sfc64 rng(2023 + i);

        // benchmark randomly inserting & erasing, continuing from the next key
        bench.beginMeasure(ss.str().c_str());
        for (size_t n = 0; n < max_n / 2; ++n)
            map.emplace((uint32_t)rng(), 0);
        auto key = map.begin()->first;
        for (size_t n = max_n; n > 0; n--) {
            auto it = map.find(key);
            if (it == map.end())
                it = map.begin();  // key was displaced by a rehash
            if constexpr (std::is_void_v<decltype(map.erase(it))>) {
                // erase() returns void: advance before erasing, since
                // erasing invalidates the iterator
                auto next = std::next(it);
                map.erase(it);
                it = next;
            } else {
                it = map.erase(it);
            }
            if (it == map.end())  // wrap around when the last element was erased
                it = map.begin();
            key = it->first;
            map.emplace((uint32_t)rng(), 0);
        }
        bench.endMeasure(map.size(), map.size());
        max_n *= 7;
    }
}
```
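Here is a minimal standalone sketch of the "continue from the next key" walk, again with `std::unordered_map` and a standard RNG in place of the framework's `Map` and `sfc64` (the function name `churn_continue` is mine). For `std::unordered_map`, `erase(it)` returns the iterator following the erased element, so the void-returning branch from the benchmark is unnecessary here; the `end()` wrap-around is still needed.

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <unordered_map>

// Seed the map with max_n / 2 random keys, then for max_n iterations:
// find the remembered key, erase it, remember the key of the element
// that followed it (wrapping to begin() at the end), and insert a
// fresh random key. Returns the final size.
size_t churn_continue(size_t max_n) {
    std::unordered_map<uint32_t, uint32_t> map;
    std::mt19937 rng(2023);  // fixed seed, stand-in for sfc64

    for (size_t n = 0; n < max_n / 2; ++n)
        map.emplace((uint32_t)rng(), 0);

    uint32_t key = map.begin()->first;
    for (size_t n = max_n; n > 0; --n) {
        auto it = map.find(key);
        if (it == map.end())
            it = map.begin();       // key vanished (duplicate insert earlier)
        it = map.erase(it);         // returns the iterator after the erased one
        if (it == map.end())
            it = map.begin();       // wrap around at the end of the table
        key = it->first;
        map.emplace((uint32_t)rng(), 0);
    }
    return map.size();
}
```

Because the keys are 32-bit, occasional duplicate insertions are possible, so the final size can fall slightly below `max_n / 2`.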