
[torchlib] Implement various upsample functions #1159

Open · 8 tasks done
justinchuby opened this issue Nov 16, 2023 · 5 comments · Fixed by #1254
Labels: topic: torch_lib (Related to the torch/aten function lib in development)
justinchuby (Collaborator) commented Nov 16, 2023

Reference:

```python
@torch_op("aten::upsample_bilinear2d", trace_only=True)
def aten_upsample_bilinear2d(
    self: TReal,
    output_size: Optional[INT64] = None,
    scales_h: Optional[float] = None,
    scales_w: Optional[float] = None,
    align_corners: bool = True,  # pylint: disable=unused-argument
) -> TReal:
    """upsample_bilinear2d(Tensor self, SymInt[2] output_size, bool align_corners, float? scales_h=None, float? scales_w=None) -> Tensor"""
    if output_size is not None:
        result = _aten_upsample_bilinear2d_output_size(self, output_size)
    else:
        assert scales_h is not None
        assert scales_h == scales_w
        result = _aten_upsample_bilinear2d_scales(self, scales_h, scales_w)
    return result
```
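
For orientation, here is a hedged sketch of what the `output_size` branch could lower to in ONNX terms. The helper name comes from the snippet above; the body is an illustrative assumption, not the actual torchlib implementation:

```python
from onnxscript import opset18 as op

# Hedged sketch (assumed implementation): lower to ONNX Resize with explicit sizes.
def _aten_upsample_bilinear2d_output_size(self, output_size):
    batch_channel = op.Shape(self, start=0, end=2)  # N and C stay unchanged
    sizes = op.Concat(batch_channel, output_size, axis=0)  # full NCHW target sizes
    return op.Resize(
        self,
        None,  # roi: unused here
        None,  # scales: explicit sizes are given instead
        sizes,
        mode="linear",
        # A faithful implementation must derive this from align_corners;
        # see the discussion below.
        coordinate_transformation_mode="align_corners",
    )
```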

justinchuby added the labels "topic: torch_lib" and "contribution welcome" Nov 16, 2023
justinchuby changed the title from "Implement various upsample functions." to "Implement various upsample functions" Nov 16, 2023
justinchuby changed the title from "Implement various upsample functions" to "[torchlib] Implement various upsample functions" Nov 16, 2023
xiaowuhu self-assigned this Dec 5, 2023
justinchuby (Collaborator, Author) commented Jan 3, 2024

> @BowenBao (Contributor): How are we validating this works e2e for pytorch export?

I think there might be an issue inside pytorch where decompositions are forced for these ops, similar to the einsum situation: pytorch/pytorch#116684
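
For reference, a minimal e2e check would be to export a module that hits this op and inspect the resulting graph. A sketch, assuming the dynamo-based exporter entry point available at the time (`torch.onnx.dynamo_export`):

```python
import torch

class Upsample(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.interpolate(
            x, scale_factor=2.0, mode="bilinear", align_corners=True
        )

# If pytorch forces a decomposition for this op, the exported graph will show
# the decomposed primitives instead of a single upsample/Resize function.
onnx_program = torch.onnx.dynamo_export(Upsample(), torch.randn(1, 3, 8, 8))
print(onnx_program.model_proto.graph)
```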

BowenBao (Contributor) commented

> `align_corners: bool = True,  # pylint: disable=unused-argument`

Why is align_corners an unused argument?
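
For context: in ONNX, `align_corners` normally surfaces as Resize's `coordinate_transformation_mode` attribute, so dropping it silently changes numerics whenever a caller passes the non-default value. A hedged fragment, reusing names from the snippet above:

```python
# Hedged sketch of the expected mapping; both attribute values are standard
# ONNX Resize coordinate_transformation_mode settings.
mode = "align_corners" if align_corners else "pytorch_half_pixel"
result = op.Resize(
    self, None, None, sizes,
    mode="linear",
    coordinate_transformation_mode=mode,
)
```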

BowenBao (Contributor) commented Jan 11, 2024

Regarding Resize's nearest_mode: the torchscript exporter set it to "floor", while torchlib leaves it at the ONNX default "round_prefer_floor".

Is the change intentional?
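
The two settings round fractional source coordinates differently and can therefore select different source pixels. A hedged fragment showing the attributes the torchscript exporter emitted for nearest upsampling:

```python
# Hedged sketch: the torchscript exporter emitted nearest upsampling roughly as
# below; omitting nearest_mode falls back to the ONNX default "round_prefer_floor".
result = op.Resize(
    self, None, scales, None,
    mode="nearest",
    coordinate_transformation_mode="asymmetric",
    nearest_mode="floor",
)
```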

justinchuby (Collaborator, Author) commented

We do need to implement them and match the behaviors. @xiaowuhu, please correct me if I'm missing anything.

BowenBao added a commit that referenced this issue Jan 16, 2024

Fixes #1159 (comment), which indeed turns out to be a problem uncovered by PyTorch CI:
https://github.com/pytorch/pytorch/actions/runs/7508784822/job/20445196351?pr=117314

> Fixes the `align_corners` default value. The default value in the pytorch signature is `False`
> (https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html#torch.nn.Upsample).
> That said, it shouldn't matter, since `align_corners` in the aten signature in
> `native_functions.yaml` is a required argument, so in practice this function will never
> be invoked without `align_corners`.

The above is outdated; the case turned out to be more complicated. See #1254 (comment).

In short, this PR fixes the torchlib op signature to match the aten spec, and updates the input wrangler for the test case to bridge from the sample test inputs for `torch.nn.functional.upsample_bilinear`.

---------

Co-authored-by: Justin Chu <[email protected]>
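
For illustration, the input-wrangler pattern mentioned above looks roughly like this; the function name and exact bridging logic are assumptions, not the PR's code:

```python
# Hedged sketch of a test input wrangler. torch.nn.functional.upsample_bilinear
# implies align_corners=True and takes `size` as a keyword argument, while the
# aten op takes output_size positionally and requires align_corners explicitly.
def _upsample_bilinear2d_input_wrangler(args, kwargs):
    args.append(kwargs.pop("size"))  # size -> positional output_size
    args.append(True)                # align_corners=True, per upsample_bilinear semantics
    return args, kwargs
```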
justinchuby reopened this Jan 16, 2024
justinchuby removed the "contribution welcome" label Feb 8, 2024