[Relax][PyTorch] Add Logaddexp op support for exported program #17803
Conversation
include/tvm/tir/expr.h (outdated diff)
/*!
 * \brief LogAddExp operation, computes log(exp(a) + exp(b)).
 */
class LogAddExpNode : public BinaryOpNode<LogAddExpNode> {
Why is such an operator needed in TIR? In my opinion, it's not a common intrinsic.
If you still need this one, for example because there is a hardware intrinsic for it, please move it to a call intrinsic, i.e. "tir.log_add_exp", similar to "tir.log" and "tir.exp".
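For reference, a minimal sketch of the call_intrin pattern being suggested here, using the already registered "tir.exp" intrinsic; a "tir.log_add_exp" op would be invoked the same way once registered (that name is the reviewer's proposal, not an existing op):

```python
from tvm import tir

# Build a scalar expression that calls the registered "tir.exp"
# intrinsic by name via call_intrin.
x = tir.Var("x", "float32")
y = tir.call_intrin("float32", "tir.exp", x)
print(y)  # prints the resulting call expression
```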
Could you please give more context on what hardware intrinsics are in TVM?
Also, is it correct that if a hardware intrinsic exists, I can map the op directly in TIR, and if not, I should decompose the op at the Relax or TOPI level?
You could decompose it at the TIR level :)
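For illustration, a hedged sketch of one possible TIR-level decomposition built from existing intrinsics, based on the numerically stable identity logaddexp(a, b) = max(a, b) + log1p(exp(-|a - b|)); the helper name below is hypothetical:

```python
from tvm import tir

def log_add_exp_decomposed(a, b):
    # Stable form: factoring out max(a, b) avoids overflow of exp()
    # for large-magnitude inputs.
    m = tir.max(a, b)
    return m + tir.log1p(tir.exp(-tir.abs(a - b)))

a = tir.Var("a", "float32")
b = tir.Var("b", "float32")
print(log_add_exp_decomposed(a, b))
```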
So, I can decompose my op in src/tir/op/op.cc rather than including it in include/tvm/tir/expr.h.
Is that right?
The op has been decomposed at the TIR level, hope that's in line with what's expected!
Please also test tir log_add_exp on the CPU and GPU backends :) (Note: not only an e2e test, but also a TIR op unit test.)
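As a rough idea of such a unit test on the CPU (LLVM) backend, here is a hedged sketch that compiles the decomposition above and checks it against numpy.logaddexp; it assumes te.create_prim_func and tvm.build behave as in the TensorIR tutorials, and a GPU variant would additionally bind thread axes and use target="cuda":

```python
import numpy as np
import tvm
from tvm import te, tir

n = 128
A = te.placeholder((n,), name="A", dtype="float32")
B = te.placeholder((n,), name="B", dtype="float32")
# Elementwise logaddexp via the stable decomposition sketched above.
C = te.compute(
    (n,),
    lambda i: tir.max(A[i], B[i]) + tir.log1p(tir.exp(-tir.abs(A[i] - B[i]))),
    name="C",
)

lib = tvm.build(tvm.IRModule({"main": te.create_prim_func([A, B, C])}), target="llvm")
dev = tvm.cpu()
a = tvm.nd.array(np.random.randn(n).astype("float32"), dev)
b = tvm.nd.array(np.random.randn(n).astype("float32"), dev)
c = tvm.nd.array(np.zeros(n, dtype="float32"), dev)
lib["main"](a, b, c)
np.testing.assert_allclose(c.numpy(), np.logaddexp(a.numpy(), b.numpy()), rtol=1e-5)
```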
This PR adds support for the logaddexp operation in the Relax and TIR pipeline for PyTorch on the main branch of TVM. It enables parsing and conversion of torch.logaddexp to the corresponding Relax IR and its lower-level intrinsic (core op).
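As a minimal illustration of what this enables (a sketch, assuming a TVM build that includes this change), torch.logaddexp can be captured with torch.export and converted through the exported-program frontend:

```python
import torch
from torch.export import export
from tvm.relax.frontend.torch import from_exported_program

class LogAddExp(torch.nn.Module):
    def forward(self, a, b):
        return torch.logaddexp(a, b)

# Trace the module with example inputs and convert it to a Relax IRModule.
exported = export(LogAddExp(), (torch.randn(4, 4), torch.randn(4, 4)))
mod = from_exported_program(exported)
mod.show()  # the printed Relax IR should contain the converted logaddexp
```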