
[Dispatch] Don't sink collapse_shape through k2 dims #19379

Merged
1 commit merged into iree-org:main on Dec 6, 2024

Conversation

@IanWood1 (Contributor) commented Dec 5, 2024

This change adds linalgExtExpansionFn so that collapse_shape ops are sunk through iree_linalg_ext.attention only when the k2 dimensions are not expanded by the reshape fusion. Currently, GPU codegen cannot support unit dims on the k2 dimensions, so any collapse_shape sinking that expands out unit dimensions on those dims causes compilation errors.

This fixes the unit dim error in #19263, but it uncovered further, unrelated compilation errors that are tracked in #19377.
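
Roughly, the control function is a reshape-fusion control callback. Below is a minimal sketch of the idea, assuming MLIR's ControlFusionFn signature (bool(OpOperand *)); it is not the exact code in this change, and getIndexingMapForOperand / getK2LoopDims are hypothetical stand-ins for the corresponding LinalgExt helpers:

// Sketch only: include paths are approximate and depend on the source layout.
#include "iree/compiler/Dialect/LinalgExt/IR/LinalgExtOps.h"
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/IR/AffineExpr.h"
#include "mlir/IR/AffineMap.h"
#include "llvm/ADT/DenseSet.h"
#include "llvm/ADT/STLExtras.h"

using namespace mlir;
using namespace mlir::iree_compiler;

// Blocks reshape propagation when sinking a collapse_shape through
// iree_linalg_ext.attention would expand one of its k2 loop dimensions.
static bool linalgExtExpansionFn(OpOperand *fusedOperand) {
  auto attentionOp =
      dyn_cast<IREE::LinalgExt::AttentionOp>(fusedOperand->getOwner());
  if (!attentionOp)
    return true; // Not attention: this callback imposes no restriction.

  // The producer collapse_shape that the pattern would sink below attention.
  auto collapseOp =
      fusedOperand->get().getDefiningOp<tensor::CollapseShapeOp>();
  if (!collapseOp)
    return true;

  // Hypothetical helpers: the indexing map for this operand and the set of
  // loop dimensions classified as k2 for the attention op.
  AffineMap operandMap = getIndexingMapForOperand(attentionOp, fusedOperand);
  llvm::SmallDenseSet<int64_t> k2Dims = getK2LoopDims(attentionOp);

  // A reassociation group with more than one source dim means the matching
  // operand dim would be expanded. If that operand dim maps to a k2 loop
  // dim, refuse the fusion so no (unit) k2 dims are created on the op.
  for (auto [operandDim, group] :
       llvm::enumerate(collapseOp.getReassociationIndices())) {
    if (group.size() == 1)
      continue;
    auto dimExpr = dyn_cast<AffineDimExpr>(operandMap.getResult(operandDim));
    if (dimExpr && k2Dims.contains(dimExpr.getPosition()))
      return false;
  }
  return true;
}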

@IanWood1 (Contributor, Author) commented Dec 5, 2024

Merging with #19381 resolves the compilation issue in #19263

IanWood1 linked an issue Dec 5, 2024 that may be closed by this pull request

@MaheshRavishankar (Contributor) left a comment


Thanks!

@Groverkss (Contributor) left a comment


Can you explain why we need this? I don't understand why it's not good to do this.

@IanWood1 (Contributor, Author) commented Dec 6, 2024

It's to deal with the fact that GPU codegen can't handle unit-length dimensions on the reduction dims. For some more context:

#19263 (comment)

> On the GPU side, this looks like it is coming because of inner unit dims for the K2 dimension of attention. We could either collapse those unit dims to make it work, or I can send a patch tomorrow to add support for multiple M/N dimensions for intrinsic targeting.

It's just a workaround for now until either can be implemented (or the unit dims get folded out during DropUnitExtentDims)

IanWood1 requested a review from manupak December 6, 2024 21:08

@Groverkss (Contributor) commented

> It's to deal with the fact that GPU codegen can't handle unit-length dimensions on the reduction dims. For some more context:
>
> #19263 (comment)
>
> > On the GPU side, this looks like it is coming because of inner unit dims for the K2 dimension of attention. We could either collapse those unit dims to make it work, or I can send a patch tomorrow to add support for multiple M/N dimensions for intrinsic targeting.
>
> It's just a workaround for now until either can be implemented (or the unit dims get folded out during DropUnitExtentDims)

Right, I think passing inner unit dims to a dispatch is always bad. Shouldn't this be handled by the CollapseDimensions pass, though? My understanding was that reshapes are propagated to enable fusion, and then CollapseDimensions collapses the extra reshapes that weren't required for the dispatch.

I’ll unblock, but this seems like a hack because CollapseDimensions doesn’t understand reduction dimensions well. Can you confirm if my thinking is right?

Groverkss dismissed their stale review December 6, 2024 21:10

Comment above

@IanWood1 (Contributor, Author) commented Dec 6, 2024

> I’ll unblock, but this seems like a hack because CollapseDimensions doesn’t understand reduction dimensions well. Can you confirm if my thinking is right?

Yes, this is very hacky and should be handled by CollapseDimensions. The problem is that the attention op has dynamic dims, which need some work to support.

We also need to figure out how to handle bitcast-like ops. This is where the unit dims are coming from. For example:

// The expand_shape introduces the inner unit dim (the `1` in 4x?x1x128).
%expanded_24 = tensor.expand_shape %214 [[0, 1, 2], [3]] output_shape [4, %21, 1, 128] : tensor<?x128xf16> into tensor<4x?x1x128xf16>
// The flow.tensor.bitcast sits between the expand/collapse pair, so the
// tensor-level reshape folding never sees through it and the unit dim survives.
%323 = flow.tensor.bitcast %expanded_24 : tensor<4x?x1x128xf16>{%21} -> tensor<4x?x1x64xcomplex<f16>>{%21}
%collapsed_40 = tensor.collapse_shape %323 [[0], [1, 2], [3]] : tensor<4x?x1x64xcomplex<f16>> into tensor<4x?x64xcomplex<f16>>

But since the tensor and flow dialects are kept separate, the unit dims never get folded away, and then they get propagated throughout the IR again.

@IanWood1 (Contributor, Author) commented Dec 6, 2024

Okay, I'll merge this so that Llama compilation is unblocked after Mahesh's PR, but I'll start working on a non-hack solution for this.

IanWood1 merged commit d150a80 into iree-org:main Dec 6, 2024
38 checks passed
Development

Successfully merging this pull request may close these issues.

Llama 3.1 8B fp16 TP8 sharded fails to compile for CPU and GPU
3 participants