
Dynamic growth of slab pool for concurrent_map #22

Open
wants to merge 61 commits into base: master

Conversation

Nicolas-Iskos

This PR modifies SlabHash's concurrent_map, as well as SlabAlloc, to support dynamic growth of the device-side pool allocator used to extend slab lists.

In the SlabHash implementation currently on the master branch, the number of slabs in the device-side slab pool is fixed (at 1 GB), so the slab hash's total memory footprint is fixed at run time. The code in this PR allows the number of slabs in the pool to grow dynamically at run time. Specifically, ConcurrentMap's buildBulk and buildBulkWithUniqueKeys determine whether the upcoming insertion batch would push the structure past a user-configurable threshold load factor. If so, enough new super blocks are added to the slab pool via cudaMalloc that the structure's total memory footprint (the sum of the sizes of the base slabs and the slab pool) doubles; insertion then proceeds as normal. This lets the slab hash keep growing its total memory footprint until device memory is exhausted.
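The growth decision described above can be sketched as the following host-side helper. The constants and parameter names (`kSuperBlockBytes`, `superBlocksToAdd`, the byte sizes) are illustrative assumptions, not the PR's actual identifiers; the point is the check against the threshold load factor and the doubling of the total footprint in whole super blocks.

```cpp
#include <cstddef>

// Assumed super-block size; the real value comes from the SlabAlloc configuration.
constexpr std::size_t kSuperBlockBytes = 32 * 1024 * 1024;  // 32 MiB (assumption)

// Decide whether an upcoming batch of insertions pushes the table past the
// threshold load factor, and if so, how many super blocks must be cudaMalloc'd
// so that the total footprint (base slabs + slab pool) at least doubles.
std::size_t superBlocksToAdd(std::size_t numStoredPairs,
                             std::size_t batchSize,
                             std::size_t pairBytes,
                             std::size_t baseSlabBytes,
                             std::size_t poolBytes,
                             double thresholdLoadFactor) {
  const std::size_t footprint = baseSlabBytes + poolBytes;
  const double projectedLoad =
      static_cast<double>((numStoredPairs + batchSize) * pairBytes) /
      static_cast<double>(footprint);
  if (projectedLoad <= thresholdLoadFactor) {
    return 0;  // batch fits under the threshold; no growth needed
  }
  // Grow the pool by the current total footprint, rounded up to whole
  // super blocks, so that the footprint (at least) doubles.
  return (footprint + kSuperBlockBytes - 1) / kSuperBlockBytes;
}
```

In the actual PR this check runs inside buildBulk / buildBulkWithUniqueKeys before launching the insertion kernels, with the new super blocks allocated via cudaMalloc.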

gpu_hash_table has three new parameters, all with default values. First, two template parameters set the number of memory blocks and the number of super blocks with which to initialize the SlabAlloc, letting the user configure the initial size of the slab pool. Second, a threshold load factor argument lets the user configure how much of the SlabHash's total memory footprint may be occupied by key-value pairs before the slab pool is grown to double that footprint.
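A hypothetical instantiation might look like the following stub. The template-parameter order, the names, and the constructor shape are assumptions based on this description, not the PR's actual signature; this only illustrates where the three new knobs sit.

```cpp
#include <cstdint>

// Stub mirroring the described interface; names and parameter order are
// assumptions, not the PR's actual signature.
template <typename KeyT, typename ValueT,
          uint32_t kNumMemBlocks,    // initial memory blocks per super block
          uint32_t kNumSuperBlocks>  // initial super blocks in the slab pool
struct gpu_hash_table_sketch {
  uint32_t num_buckets;
  double threshold_load_factor;  // grow the pool once this occupancy is exceeded

  gpu_hash_table_sketch(uint32_t buckets, double load_factor)
      : num_buckets(buckets), threshold_load_factor(load_factor) {}
};
```

With defaults on all three parameters, existing callers compile unchanged; a user who expects heavy growth can start with a small pool (few super blocks) and rely on the doubling policy, while a user with a known working set can size the pool up front.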
