
Completed Max_unpool1d implementation #22938

Closed · wants to merge 0 commits

Conversation

@mohame54 (Contributor) commented Sep 1, 2023

PR Description

Completed the max_unpool1d function implementation. I didn't add the paddle backend implementation because paddle.max_unpool1d had some errors related to paddle itself.

@mohame54 requested a review from AnnaTz September 1, 2023 22:14
@github-actions bot commented Sep 1, 2023

Thanks for contributing to Ivy! 😊👏
Here are some of the important points from our Contributing Guidelines 📝:
1. Feel free to ignore the run_tests (1), run_tests (2), … jobs, and only look at the display_test_results job. 👀 It contains the following two sections:
- Combined Test Results: This shows the results of all the ivy tests that ran on the PR. ✔️
- New Failures Introduced: This lists the tests that are passing on main, but fail on the PR Fork. Please try to make sure that there are no such tests. 💪
2. The lint / Check formatting / check-formatting tests check for the formatting of your code. 📜 If it fails, please check the exact error message in the logs and fix the same. ⚠️🔧
3. Finally, the test-docstrings / run-docstring-tests check for the changes made in docstrings of the functions. This may be skipped, as well. 📚
Happy coding! 🎉👨‍💻

@ivy-leaves added labels Sep 1, 2023: Array API (Conform to the Array API Standard, created by The Consortium for Python Data API Standards), Ivy API Experimental (Run CI for testing API experimental/New feature or request), PaddlePaddle Backend (Developing the Paddle Paddle Backend), Ivy Functional API
@ivy-leaves commented

If you are working on an open task, please edit the PR description to link to the issue you've created.

For more information, please check ToDo List Issues Guide.

Thank you 🤗

@mohame54 (Contributor, Author) commented Sep 2, 2023

My implementation is close to the torch 2.0.1+cu118 version. I don't know why the GPU version of torch.max_unpool1d is different from the CPU version. @AnnaTz

@AnnaTz (Contributor) commented Sep 5, 2023

> My implementation is close to the torch 2.0.1+cu118 version. I don't know why the GPU version of torch.max_unpool1d is different from the CPU version. @AnnaTz

I get this error when I run the test locally:

E       AssertionError:  the results from backend torch and ground truth framework jax do not match
E        [[[-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]]
E       
E        [[-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]]
E       
E        [[-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1.  0.  1.  0.]]]!=[[[-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]]
E       
E        [[-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]]
E       
E        [[-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1. -1. -1.  0.]
E         [-1.  0.  2.  0.]]] 
E       
E       
E       Falsifying example: test_max_unpool1d(
E           backend_fw='torch',
E           on_device='cpu',
E           x_k_s_p=(['float32', 'int64'], array([[[-1., -1., -1.],
E                    [-1., -1., -1.],
E                    [-1., -1., -1.],
E                    [-1., -1., -1.]],
E            
E                   [[-1., -1., -1.],
E                    [-1., -1., -1.],
E                    [-1., -1., -1.],
E                    [-1., -1., -1.]],
E            
E                   [[-1., -1., -1.],
E                    [-1., -1., -1.],
E                    [-1., -1., -1.],
E                    [-1.,  1.,  2.]]], dtype=float32), array([[[0, 1, 2],
E                    [0, 1, 2],
E                    [0, 1, 2],
E                    [0, 1, 2]],
E            
E                   [[0, 1, 2],
E                    [0, 1, 2],
E                    [0, 1, 2],
E                    [0, 1, 2]],
E            
E                   [[0, 1, 2],
E                    [0, 1, 2],
E                    [0, 1, 2],
E                    [0, 2, 2]]]), (2,), (1,), 0),

It seems the failure happens when running on cpu, so I don't understand what you mean exactly.
Is there a hands-on example that shows the original torch function is inconsistent across versions?

@mohame54 (Contributor, Author) commented Sep 5, 2023

Yeah, I will show you one example.

@mohame54 (Contributor, Author) commented Sep 5, 2023

[screenshot: torch CUDA max_unpool1d output for the input above]

@mohame54 (Contributor, Author) commented Sep 5, 2023

You can see here the torch CUDA max_unpool1d output for the same input above, which is different from the CPU version. @AnnaTz

@AnnaTz (Contributor) commented Sep 6, 2023

Ohh I get what's happening 😅
If you run the specific example many times it will sometimes return the correct value. The result is non-deterministic because the indices contain repeated values.
They actually mention it in their documentation
[screenshot: the note in the torch documentation about non-determinism with repeated indices]
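
For the record, the ambiguity is easy to reproduce without torch: unpooling scatters each pooled value back to its recorded argmax position, and when two windows record the same index, the result at that position depends on the scatter order. A minimal pure-Python sketch (the helper name is made up for illustration, it is not from the PR):

```python
def unpool1d_scatter(values, indices, out_len, order):
    # Scatter pooled values back to their recorded argmax positions.
    # With repeated indices, the last write wins, so `order` decides
    # the output value at the shared position.
    out = [0.0] * out_len
    for i in order:
        out[indices[i]] = values[i]
    return out

values = [1.0, 2.0]   # pooled maxima
indices = [2, 2]      # both windows picked position 2 (repeated index)
forward = unpool1d_scatter(values, indices, 4, order=[0, 1])
reverse = unpool1d_scatter(values, indices, 4, order=[1, 0])
print(forward)  # [0.0, 0.0, 2.0, 0.0] -- value 2.0 written last
print(reverse)  # [0.0, 0.0, 1.0, 0.0] -- value 1.0 written last
```

This mirrors the test failure above, where the backends disagreed only at the position of the repeated index (1. vs 2.).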

@mohame54 (Contributor, Author) commented Sep 6, 2023

Yeah, that's the catch. I tried before to remove the repeated indices, but it was not useful; the tests were still giving me errors.

@AnnaTz (Contributor) commented Sep 6, 2023

> Yeah, that's the catch. I tried before to remove the repeated indices, but it was not useful; the tests were still giving me errors.

What kind of errors? I think you should be able to force the indices to not include identical values. We should at least know if that's the only problem (if the test passes for all other cases) before we decide what to do about it.

@mohame54 (Contributor, Author) commented Sep 6, 2023

Output mismatch errors with the torch backend when I tried to exclude the repeated indices, as you can see, so I didn't add the excluding part to the implementation.

@mohame54 (Contributor, Author) commented Sep 6, 2023

> What kind of errors? I think you should be able to force the indices to not include identical values. We should at least know if that's the only problem (if the test passes for all other cases) before we decide what to do about it.

I tried as many tests as I could, but the only errors that appeared were output mismatch errors, with the torch backend of course.

@mohame54 (Contributor, Author) commented Sep 6, 2023

Also, I didn't complete the paddle backend implementation because the paddle function was giving me errors as well. When I tried it on Colab with paddle 2.4.0 installed, it didn't work for me; it seems that the function has not been implemented yet.

@AnnaTz (Contributor) commented Sep 6, 2023

I'm inclined to say we will need a compositional implementation for this 💀

@mohame54 (Contributor, Author) commented Sep 6, 2023

> I'm inclined to say we will need a compositional implementation for this 💀

But why?

@mohame54 (Contributor, Author) commented Sep 6, 2023

Do you mean a compositional implementation using the ivy API instead of backend-specific APIs? Wouldn't that affect the performance? 😕

@AnnaTz (Contributor) commented Sep 6, 2023

> I'm inclined to say we will need a compositional implementation for this 💀
> But why?

We need the ivy function to work consistently for all backends in all cases. The native torch function does not work as expected in the repeated indices case, and the paddle function is not implemented in some versions and giving errors in others. So, we need an ivy function implementation to cover these.
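
For context, the compositional version would express the scatter logic once, in backend-agnostic terms: allocate a zero output and write each pooled value at its recorded index. A rough pure-Python sketch of that logic (illustrative only; the real ivy composition would use ivy array ops such as zeros and scatter rather than Python lists, and would also handle batch dims and padding):

```python
def max_unpool1d(values, indices, kernel_size, stride=None, out_len=None):
    """Sketch of a compositional max_unpool1d over per-channel rows.

    values/indices: pooled maxima and their flat positions, one list per row.
    """
    stride = kernel_size if stride is None else stride
    if out_len is None:
        # Default output length, matching torch's formula with no padding:
        # L_out = (L_in - 1) * stride + kernel_size
        out_len = (len(values[0]) - 1) * stride + kernel_size
    out = []
    for row_vals, row_idx in zip(values, indices):
        row = [0.0] * out_len
        for v, i in zip(row_vals, row_idx):
            row[i] = v  # scatter each max back to its recorded position
        out.append(row)
    return out

# Example: maxima [2., 4.] recorded at positions 1 and 3, kernel 2, stride 2
print(max_unpool1d([[2.0, 4.0]], [[1, 3]], 2))  # [[0.0, 2.0, 0.0, 4.0]]
```

Because the scatter is written once, every backend that routes through the composition agrees on the result for non-repeated indices.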

@AnnaTz (Contributor) commented Sep 6, 2023

> Do you mean a compositional implementation using the ivy API instead of backend-specific APIs? Wouldn't that affect the performance? 😕

Also, about that, according to what @vedpatwardhan has told me, I think the implementation we have for the numpy, jax and tensorflow backends could be replaced by an ivy composition. It seems that we are doing the exact same operations in all these backends, so it would be better to avoid code repetition.

@mohame54 (Contributor, Author) commented Sep 6, 2023

> Also, about that, according to what @vedpatwardhan has told me, I think the implementation we have for the numpy, jax and tensorflow backends could be replaced by an ivy composition. It seems that we are doing the exact same operations in all these backends, so it would be better to avoid code repetition.

If we want, we can ignore the torch max_unpool1d backend implementation and consider a custom torch implementation, like in the jax and numpy case.

@mohame54 (Contributor, Author) commented Sep 6, 2023

But I agree with you: in order to reduce code repetition we need a compositional implementation.

@AnnaTz (Contributor) commented Sep 6, 2023

> Also, about that, according to what @vedpatwardhan has told me, I think the implementation we have for the numpy, jax and tensorflow backends could be replaced by an ivy composition. It seems that we are doing the exact same operations in all these backends, so it would be better to avoid code repetition.

> If we want, we can ignore the torch max_unpool1d backend implementation and consider a custom torch implementation, like the jax and numpy implementations.

We should not completely ignore the native torch function, because in the cases it does work it will be faster than a composition. Hence, we need to convert ivy.max_unpool1d into a partial mixed function.
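
To sketch the idea of a partial mixed function (in ivy this is typically done by attaching a handler such as `partial_mixed_handler` to the backend function; check the current ivy source for the exact convention): the native torch kernel is used only when a per-call condition holds, and everything else falls through to the composition. The function names below are illustrative, not ivy's actual internals:

```python
def has_unique_indices(indices):
    # The native torch max_unpool1d is only deterministic when each
    # row's indices contain no repeats, so gate the native path on that.
    return all(len(row) == len(set(row)) for row in indices)

def dispatch_max_unpool1d(values, indices, native_fn, compositional_fn):
    # Schematic partial-mixed dispatch: prefer the fast native function
    # when the handler accepts the inputs, otherwise fall back to the
    # backend-agnostic composition.
    if has_unique_indices(indices):
        return native_fn(values, indices)
    return compositional_fn(values, indices)

# A duplicate index inside a row forces the compositional fallback:
# has_unique_indices([[0, 2, 2]]) -> False
```

This keeps the native speed for the common case while guaranteeing a consistent result in the repeated-indices case.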

@AnnaTz (Contributor) commented Sep 6, 2023

I think it will be quite simple since you have already implemented the backend functions compositionally. I think you will mainly copy that and replace native calls with ivy calls.

@mohame54 (Contributor, Author) commented Sep 6, 2023

Sorry, I didn't mean to edit your comment. So sorry, @AnnaTz 😄

@mohame54 (Contributor, Author) commented Sep 6, 2023

It's done. @AnnaTz

Review thread on ivy/data_classes/array/experimental/layers.py (outdated):
indices: ivy.Array,
kernel_size: Union[Tuple[int], int],
strides: Union[int, Tuple[int]] = None,
padding: Union[int, Tuple[int]] = 0,
Contributor:

Also, please remove the docs/demos changes.

Contributor Author:

If I remove them, there will be conflicts with the main branch.

if padding > (kernel_size[0] // 2):
    padding = 0

values, indices = torch.nn.functional.max_pool1d(
Contributor:

@sherry30 are we allowed to use native functions in testing?

output[indices] = values
if revert:
    output = output.permute_dims([0, 2, 1])
return output

Contributor:

The code looks good overall. The only thing remaining is to add the partial torch backend.

Contributor Author:

What do you mean by a partial torch backend? I did not get what you meant here.


@mohame54 requested a review from AnnaTz September 8, 2023 12:00

4 participants