[GPU][Codegen] Allowing mfma for narrow problem config sizes #19615
The motivation for this PR is convolution performance for the resnet50 configs. With this PR (and a few pending ones), conv performance with the igemm pipeline gets a decent speedup in situations where a standalone dimension size is smaller than the intrinsic size. (Take dispatch 69 as an example: the selected tile m:7, n:512, k:4608 was rejected from mfma because the m tile is smaller than the intrinsic size of 16.) This happened because we were previously too defensive about when to use the intrinsic: even when alignment was not required, we still allowed mfma to be picked only when the m/n/k tiles were all larger than the intrinsic size.
With @nirvedhmeshram's #19271 and #19484, padding is now allowed in the tile-and-fuse matmul and igemm tile-and-fuse pipelines, so it is no longer necessary to be as conservative as before. This PR therefore removes the conditional check that blocked mfma from being picked up.
This will impact a few pipelines that use `canTargetIntrinsic()`:

- LLVMGPUPadAndVectorDistribute will allow narrow m/n/k dimension sizes for batch matmul.
- iree-codegen-rocdl-configuration-pipeline will allow narrow m/n/k dimension sizes for matmul (instead of warp reduction).