This repository was archived by the owner on Feb 2, 2024. It is now read-only.
WIP: interface for map-reduce style kernels #284
This PR adds new APIs for pandas function implementers to help parallelize their kernels:

- `map_reduce(arg, init_val, map_func, reduce_func)`
- `map_reduce_chunked(arg, init_val, map_func, reduce_func)`

Parameters:

- `arg` - a list-like object (a Python list, a NumPy array, or any other object with a similar interface)
- `init_val` - the initial value
- `map_func` - the map function, applied in parallel to each element (or range of elements), on different processes or on different nodes
- `reduce_func` - the reduction function that combines the initial value and the results from different processes/nodes

The difference between the two functions:

- `map_reduce` applies the map function to each element in the range (the map function must take a single element and return a single element) and then applies the reduce function pairwise (the reduce function must take two elements and return a single element).
- `map_reduce_chunked` applies the map function to the range of elements belonging to the current thread/node (the map function must take a range of elements as a parameter and return a list/array) and then applies the reduce function to entire ranges (the reduce function must take two ranges as parameters and return a list/array).

You can also call either of these functions from inside a map or reduce function to support nested parallelism.
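The semantics of the two functions can be sketched sequentially in plain Python. The names `map_reduce` and `map_reduce_chunked` come from this PR; the chunking logic and the `n_chunks` parameter here are illustrative assumptions, not HPAT's actual parallel implementation:

```python
from functools import reduce

def map_reduce(arg, init_val, map_func, reduce_func):
    # Sequential reference: apply map_func to each element,
    # then fold the results pairwise, starting from init_val.
    return reduce(reduce_func, map(map_func, arg), init_val)

def map_reduce_chunked(arg, init_val, map_func, reduce_func, n_chunks=4):
    # Sequential reference: split arg into chunks (one per thread/node
    # in the real implementation), map each whole chunk, then reduce
    # entire chunks pairwise.
    size = len(arg)
    step = max(1, -(-size // n_chunks))  # ceiling division
    chunks = [arg[i:i + step] for i in range(0, size, step)]
    mapped = [map_func(c) for c in chunks]
    return reduce(reduce_func, mapped, init_val)

# Sum of squares via per-element map/reduce:
total = map_reduce([1, 2, 3, 4], 0, lambda x: x * x, lambda a, b: a + b)
# → 30
```

Note how the chunked variant's callbacks operate on whole ranges: for example, `map_func` can be `sorted` and `reduce_func` a merge of two sorted lists.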
These functions are usable with both thread and MPI parallelism:

- If you call them from a Numba `@njit` function, they will be parallelized by Numba's built-in parallelization machinery.
- If you call them from an `@hpat.jit` function, they will be distributed by the HPAT parallelization pass (doesn't work currently).

Parallel series sorting (`numpy.sort` plus a hand-written merge) is included as an example.
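A sequential sketch of that sorting example, under the map-reduce-chunked pattern described above (`merge` and `parallel_sort` are hypothetical names; the real version would sort the chunks on different threads/nodes rather than in a loop):

```python
import numpy as np

def merge(a, b):
    # Hand-written merge of two sorted arrays (the "reduce" step).
    out = np.empty(len(a) + len(b), dtype=np.result_type(a, b))
    i = j = k = 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out[k] = a[i]; i += 1
        else:
            out[k] = b[j]; j += 1
        k += 1
    # Copy whichever input still has a tail.
    out[k:] = a[i:] if i < len(a) else b[j:]
    return out

def parallel_sort(arr, n_chunks=4):
    # The "map" step: each chunk is sorted with np.sort (in parallel
    # in the real implementation, sequentially in this sketch), then
    # the sorted chunks are merged pairwise.
    chunks = np.array_split(arr, n_chunks)
    sorted_chunks = [np.sort(c) for c in chunks]
    result = sorted_chunks[0]
    for c in sorted_chunks[1:]:
        result = merge(result, c)
    return result
```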
Current issues:

- `numpy.sort`, need to fix
- `map_reduce_chunked` chunk count is hardcoded as 4, will fix

The second part of this PR is a distribution depth knob to (not-so-)fine-tune nested parallelism between distribution and threading:

- `SDC_DISTRIBUTION_DEPTH` controls how many nested parallel loops will be distributed by the DistributionPass, whether `map_reduce*` functions or manually written `prange` loops.
- The default value is `1`, which means that only the outermost loop will be distributed by MPI; the next loop will be parallelized by Numba, and all deeper loops will be executed sequentially (as Numba doesn't support nested parallelization).
- Set `SDC_DISTRIBUTION_DEPTH` to `0` to disable distribution.
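A sketch of how the knob might be consumed. The environment variable name comes from this PR; the default of `1` and the `should_distribute` helper are assumptions for illustration:

```python
import os

# Read the distribution depth from the environment (default assumed: 1).
distribution_depth = int(os.environ.get("SDC_DISTRIBUTION_DEPTH", "1"))

def should_distribute(loop_nesting_level, depth=distribution_depth):
    # Level 0 is the outermost parallel loop. Loops at levels below
    # `depth` are distributed by MPI; deeper loops fall through to
    # Numba threading or sequential execution. A depth of 0 disables
    # distribution entirely.
    return loop_nesting_level < depth
```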