Dlejeune/ck tile 2d multiple reductions #3147
base: develop
Conversation
Pull Request Overview
This PR adds support for multi-reduction kernels to the CK_TILE library, enabling multiple reduction operations to be performed simultaneously on tensors. The implementation includes both threadwise and multiblock reduction variants, with supporting infrastructure for code generation, testing, and examples.
- Implements MultiReduceThreadWise and MultiReduceMultiblock kernels for GPU reduction operations (see the plain-C++ sketch below for what a 2D multi-reduction computes)
- Adds a CMake-based code generation system that creates test instances from JSON configurations
- Provides comprehensive test coverage with both threadwise and multiblock implementations
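To make the term concrete, here is an illustration in plain host-side C++ (not the CK_TILE API) of what a 2D multi-reduction computes: several reduction operators, e.g. a row-wise sum and a row-wise max, applied to the same input in a single pass.

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Illustration only: a 2D multi-reduction reduces along the N (row) dimension
// with two different operators at once.
void multi_reduce2d(const std::vector<std::vector<float>>& x,
                    std::vector<float>& row_sum,
                    std::vector<float>& row_max)
{
    row_sum.assign(x.size(), 0.0f);
    row_max.assign(x.size(), -std::numeric_limits<float>::infinity());
    for(std::size_t m = 0; m < x.size(); ++m)
        for(float v : x[m])
        {
            row_sum[m] += v;                      // reduction 1: add
            row_max[m] = std::max(row_max[m], v); // reduction 2: max
        }
}
```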
Reviewed Changes
Copilot reviewed 22 out of 22 changed files in this pull request and generated 7 comments.
Summary per file:
| File | Description |
|---|---|
| `tile_engine/ops/reduce/reduce_instance_builder.py` | Code generator for test instances using a configuration-driven approach |
| `tile_engine/ops/reduce/reduce_config.py` | Configuration loader for reduction kernel parameters |
| `tile_engine/ops/reduce/CMakeLists.txt` | Build system integration with Python code generation |
| `tile_engine/ops/CMakeLists.txt` | Added reduce subdirectory to the build |
| `test/ck_tile/reduce/test_multi_reduce2d_*` | Test infrastructure for both threadwise and multiblock kernels |
| `include/ck_tile/ops/reduce/kernel/multi_reduce2d_*` | Core kernel implementations |
| `include/ck_tile/host/reference/reference_reduce.hpp` | Reference implementations for validation |
| `include/ck_tile/ops/reduce.hpp` | Updated API surface with new kernel includes |
| `example/ck_tile/05_reduce/multiple_reduce_*.cpp` | Example applications demonstrating usage |
aosewski left a comment
Let's have a bit more offline discussion about the overall design.
```cpp
static_assert(std::is_same_v<XDataType, typename XDistributedTensor_::DataType>, "wrong!");
```
What problems did you have with that?
```cpp
    if constexpr(std::is_lvalue_reference_v<Y&&> && !std::is_const_v<raw_t<Y>>)
    {
        y = ck_tile::type_convert<raw_t<Y>>(x);
    }
    /* otherwise (r-value or const) → do nothing */
}

// ...

template <typename Y, typename X>
CK_TILE_HOST_DEVICE void operator()(Y& y, const X& x) const
{
    y = ck_tile::type_convert<raw_t<Y>>(x);
}
```
This is a bit confusing in the context of the universal-reference overload above. Is the above version correct at all? @ThruptiRajLakshmanaGowda @ThomasNing
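For reference, a minimal sketch of one way the two overloads could be consolidated (a suggestion, not code from the PR; it assumes the forwarding-reference overload quoted above has the same body):

```cpp
// Sketch: a single forwarding-reference operator() covering both cases. It only
// assigns when the destination is a non-const lvalue, which is exactly what the
// constexpr branch checks; the separate `Y&` overload would then be redundant.
template <typename Y, typename X>
CK_TILE_HOST_DEVICE void operator()(Y&& y, const X& x) const
{
    if constexpr(std::is_lvalue_reference_v<Y&&> && !std::is_const_v<raw_t<Y>>)
    {
        y = ck_tile::type_convert<raw_t<Y>>(x);
    }
    // otherwise (r-value or const destination): silently do nothing
}
```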
```cpp
}

// ...

CK_TILE_HOST_DEVICE static void CalculateBlockGroupParams(const int reduce_total_length,
                                                          [[maybe_unused]] int K_BlockTileSize,
```
If it's unused, why add it here?
```cpp
if(num_block_tile_iterations == 0)
{
    num_block_tile_iterations = 1;
}
```
num_block_tile_iterations would equal zero only if reduce_total_length is zero. In that case you could set block_group_size to 0 and return early; it would evaluate to zero anyway from the expression below.
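A minimal sketch of that suggestion (the output parameters are hypothetical, inferred only from the snippet above):

```cpp
// Sketch: return early when there is nothing to reduce, instead of clamping the
// iteration count to 1.
CK_TILE_HOST_DEVICE static void CalculateBlockGroupParams(const int reduce_total_length,
                                                          const int K_BlockTileSize,
                                                          int& block_group_size,          // hypothetical out-param
                                                          int& num_block_tile_iterations) // hypothetical out-param
{
    if(reduce_total_length == 0)
    {
        num_block_tile_iterations = 0;
        block_group_size          = 0;
        return;
    }

    // ceiling division; guaranteed non-zero here
    num_block_tile_iterations = (reduce_total_length + K_BlockTileSize - 1) / K_BlockTileSize;
    // ... block_group_size derived from num_block_tile_iterations as in the original code
}
```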
```cpp
static constexpr index_t CalculateInputVectorSize()
{
    using S = typename Problem::BlockShape;
```
I'm wondering whether we want to expose ThreadTile_N through Problem and Shape so it can be used as the vector size. Maybe we could instead deduce ThreadTile_N, i.e. the number of elements per thread in the N dimension, from the other parameters.
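A rough sketch of such a deduction (the BlockShape member names below are assumptions for illustration, not the actual API):

```cpp
// Sketch: derive the per-thread element count in N from the block tile and the
// thread layout, then clamp it to the widest load the data type supports.
static constexpr index_t CalculateInputVectorSize()
{
    using S = typename Problem::BlockShape;
    // hypothetical members: how many threads of the block span the N dimension
    constexpr index_t threads_in_n  = S::BlockSize / S::ThreadPerBlock_M;
    constexpr index_t thread_tile_n = S::Block_N / threads_in_n;
    // 16 bytes is a common upper bound for a single vectorized load
    constexpr index_t max_vector = 16 / sizeof(typename Problem::XDataType);
    return thread_tile_n < max_vector ? thread_tile_n : max_vector;
}
```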
```cpp
if(pass_op)
{
    std::cout << "✅" << std::endl;
```
Please add some more information ;)
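For example, something along these lines (the variables for kernel name, sizes, and the reduce op are hypothetical):

```cpp
// Sketch: report what was tested, not just whether it passed.
if(pass_op)
{
    std::cout << "✅ PASS: " << kernel_name << " M=" << M << " N=" << N
              << " reduce_op=" << reduce_op_name << std::endl;
}
else
{
    std::cout << "❌ FAIL: " << kernel_name << " M=" << M << " N=" << N
              << " reduce_op=" << reduce_op_name << std::endl;
}
```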
```cpp
using BlockWarps = ck_tile::sequence<4, 1>;
using BlockTile  = ck_tile::sequence<128, 128>;
using WarpTile   = ck_tile::sequence<32, 128>;
using Vector     = ck_tile::sequence<8, 8>;
```
This is the ThreadTile (per-thread element counts), not a vector size.
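i.e. a rename along these lines (assuming only the alias name changes):

```cpp
using BlockWarps = ck_tile::sequence<4, 1>;
using BlockTile  = ck_tile::sequence<128, 128>;
using WarpTile   = ck_tile::sequence<32, 128>;
using ThreadTile = ck_tile::sequence<8, 8>; // per-thread element counts, previously named "Vector"
```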
```cpp
for(int iN = __builtin_amdgcn_readfirstlane(0); iN < num_n_tile_iteration; ++iN)
{
    auto x = load_tile(x_window);

    // Apply the elementwise operation before the reduction
    auto x_compute = cast_tile<ComputeDataType>(x);

    tile_elementwise_inout(elementwise_ops.get(number<i>{}), x_compute, x_compute);

    block_reduce2d(x_compute, y_compute, reduce_ops.get(number<i>{}));

    move_tile_window(x_window, {0, S::Block_N});
}
```
Is it possible to run multiple reductions at the same time? In the current version I see you're loading the data multiple times.
I assume that running multiple reductions in parallel might need a multiple of all resources. Maybe we could have a heuristic with a maximum number of parallel reductions, after which we fall back to sequential execution (as it is right now).
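A rough sketch of the idea, written against the loop quoted above (the static_for over reductions and the per-reduction accumulator tuple are assumptions for illustration):

```cpp
// Sketch: load the input tile once per N-tile iteration and feed it to every
// reduction, instead of re-reading the input once per reduction.
for(int iN = __builtin_amdgcn_readfirstlane(0); iN < num_n_tile_iteration; ++iN)
{
    auto x = load_tile(x_window); // single load shared by all reductions

    static_for<0, NumReductions, 1>{}([&](auto i) {
        auto x_compute = cast_tile<ComputeDataType>(x);
        tile_elementwise_inout(elementwise_ops.get(number<i>{}), x_compute, x_compute);
        // hypothetical: y_compute kept as a tuple with one accumulator per reduction
        block_reduce2d(x_compute, y_compute.get(number<i>{}), reduce_ops.get(number<i>{}));
    });

    move_tile_window(x_window, {0, S::Block_N});
}
```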
```cpp
// 3. Atomic operation between the register tile and DRAM
auto atomic_ops =
    interblock_reduce_ops.get(number<i>{})
        .template GetAtomic<YDataType, y_thread_buf.N>(); // TODO: check if we
                                                          // need YDataType
atomic_ops(p_y_tile, y_thread_buf);
```
Why not just use the update_tile API?
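i.e. something along these lines (a sketch only; it assumes the output tile window `y_window` is created with an atomic-add memory operation so that update_tile performs the inter-block accumulation):

```cpp
// Sketch: let the tile window's memory operation do the atomic read-modify-write.
update_tile(y_window, y_thread_buf);
```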
```cpp
    return false;
}

if(input_strides.at(number<input_strides.size() - 1>{}) != 1)
```
Why's that? You don't have to do vectorized reads only on the rightmost (innermost) dimension.
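For example, a more general guard could look for a unit-stride dimension anywhere instead of only at the innermost position (a sketch, reusing the input_strides tuple from the snippet above):

```cpp
// Sketch: allow vectorized access along whichever input dimension is contiguous.
bool has_unit_stride_dim = false;
static_for<0, input_strides.size(), 1>{}([&](auto d) {
    if(input_strides.at(d) == 1)
        has_unit_stride_dim = true;
});
if(!has_unit_stride_dim)
{
    return false; // no dimension supports vectorized reads
}
```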
Proposed changes
Implementation of multi-reduce ops, in both a threadwise and a multiblock (aka blockwise) fashion. It migrates most of the features already present in old CK, with a few exceptions:
Some notable limitations should also be noted:
Checklist
Please put an `x` into the boxes that apply. You can also fill these out after creating the PR. If you're not sure, please don't hesitate to ask.
- `clang-format` on all changed files