[Relax][PyTorch] Add Logaddexp op support for exported program #17803


Merged

Conversation

AishwaryaElango (Contributor)

This PR adds support for the logaddexp operation in the Relax and TIR pipeline for PyTorch, targeting the main branch of TVM. It enables parsing and conversion of torch.logaddexp to the corresponding Relax IR and lower-level intrinsic (core op).
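For reference, torch.logaddexp computes log(exp(a) + exp(b)) elementwise. A minimal NumPy sketch of the operator's semantics (using np.logaddexp as a stand-in for the PyTorch op, since the two agree on real inputs):

```python
import numpy as np

# logaddexp(a, b) = log(exp(a) + exp(b)), computed elementwise.
a = np.array([0.0, 1.0, -3.0])
b = np.array([0.0, 2.0, 4.0])

naive = np.log(np.exp(a) + np.exp(b))   # direct formula, fine for small inputs
stable = np.logaddexp(a, b)             # library version, stable for large inputs

print(np.allclose(naive, stable))  # True
```

The direct formula overflows once exp(a) or exp(b) exceeds the float range, which is why implementations (including the one discussed below) use a rearranged form.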

@AishwaryaElango AishwaryaElango marked this pull request as ready for review April 3, 2025 12:36
/*!
* \brief LogAddExp operation, computes log(exp(a) + exp(b)).
*/
class LogAddExpNode : public BinaryOpNode<LogAddExpNode> {
Member

Why is such an operator needed in TIR? In my opinion, it's not a common intrinsic.

Member

If you still need this one (for example, because a hardware intrinsic exists for it), please implement it as a call intrinsic, i.e. "tir.log_add_exp", similar to "tir.log" and "tir.exp".

Contributor Author

Could you please give more context on what hardware intrinsics are in TVM?
Also, is it correct that if a hardware intrinsic exists, I can map the op directly in TIR, and if not, I should decompose the op at the Relax or TOPI level?

Member

You could decompose it at the TIR level :)

Contributor Author (AishwaryaElango, Apr 7, 2025)

So I can decompose my op in src/tir/op/op.cc rather than including it in include/tvm/tir/expr.h.
Is that right?

Contributor Author

The op has been decomposed at the TIR level, hope that's in line with what's expected!
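[Editorial note: the PR does not show the decomposition itself; the sketch below illustrates the standard numerically stable form that log(exp(a) + exp(b)) is usually rewritten into when lowered to log/exp primitives. It is an illustration of the math, not the actual TVM code.]

```python
import numpy as np

def log_add_exp(a, b):
    # Stable rewrite in terms of max, abs, exp, and log1p:
    #   log(exp(a) + exp(b)) = max(a, b) + log1p(exp(-|a - b|))
    # The exponent is always <= 0, so exp never overflows.
    return np.maximum(a, b) + np.log1p(np.exp(-np.abs(a - b)))

# The naive formula overflows here (exp(1000) is inf in float64),
# but the decomposed form stays finite: result is 1000 + log(2).
print(log_add_exp(1000.0, 1000.0))
```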

Hzfengsy (Member) commented Apr 4, 2025

Please also test tir log_add_exp on the CPU and GPU backends :) (Note that this should be a TIR op unit test, not just an e2e test.)

@Hzfengsy Hzfengsy merged commit 1bb7833 into apache:main Apr 17, 2025
10 checks passed