Handle tcp.bind_symbolic_shape ops in fusion algorithm #82
Conversation
Could you please add some lit tests?
@@ -58,6 +65,10 @@ GenericBottomUpFuser::matchAndRewrite(Operation *op,
        uses.push_back(use.getOwner());
      }

      // All its uses are tcp.bind_symbolic_shpae ops.
nit: typo
// CHECK: %[[V5:.+]] = tcp.add %[[V4]], %[[V4]] : tensor<?x?xf32>, tensor<?x?xf32> -> tensor<?x?xf32>
// CHECK: tcp.yield %[[V5]] : tensor<?x?xf32>
// CHECK: } : tensor<?x?xf32>
// CHECK: tcp.bind_symbolic_shape %[[V2]], [%[[V0]], %[[V1]]], affine_map<()[s0, s1] -> (s0, s1)> : tensor<?x?xf32>
Does this work the same way when the tcp.group returns more than one output, i.e. when the tcp.bind_symbolic_shape ops are outside the groups?
Thanks! LGTM besides one question.
The current fusion algorithm does not handle tcp.bind_symbolic_shape ops well because these ops give an op multiple downstream uses. This change handles them specially: we collect all uses of an op that only specify the shape of its output into a separate use category, and we move those bind-shape ops along with the original op whenever it is moved into another region.
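The core idea above — separating uses that only bind symbolic shapes from ordinary consumers, so the bindings can travel with the op during fusion — can be sketched as follows. This is a hedged illustration, not the actual MLIR C++ implementation; the `Op` class and `partition_uses` helper are hypothetical stand-ins for MLIR's use-list machinery.

```python
# Sketch: partition an op's users into "real" consumers vs. uses that only
# bind symbolic shapes (tcp.bind_symbolic_shape), mirroring the separate use
# category described in the PR. All names here are illustrative.

from dataclasses import dataclass, field


@dataclass
class Op:
    name: str
    operands: list = field(default_factory=list)
    users: list = field(default_factory=list)


def partition_uses(op):
    """Split op.users into ordinary consumers and shape-only bindings."""
    consumers, shape_bindings = [], []
    for user in op.users:
        if user.name == "tcp.bind_symbolic_shape":
            shape_bindings.append(user)
        else:
            consumers.append(user)
    return consumers, shape_bindings


# Toy example: one value with a real consumer and a shape binding. With this
# split, fusion can treat the op as having a single "real" use, and the
# binding op is simply moved alongside it into the new region.
v = Op("tcp.add")
consumer = Op("tcp.mul", operands=[v])
binding = Op("tcp.bind_symbolic_shape", operands=[v])
v.users = [consumer, binding]

consumers, bindings = partition_uses(v)
print([u.name for u in consumers])  # ['tcp.mul']
print([u.name for u in bindings])   # ['tcp.bind_symbolic_shape']
```

With this partition, the fusion decision ("does this op have exactly one consumer?") ignores the shape bindings, which are then relocated together with the fused op.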