Enable autograd graph to propagate after multi-device syncing (for loss functions in ddp) #2754
base: master
Conversation
That sounds good to me, but can we add a test for this enhancement?
Thanks for the prompt response @Borda. I'm thinking that I can make an additional unittest in …
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@           Coverage Diff           @@
##           master   #2754   +/-   ##
=======================================
- Coverage      69%     69%     -0%
=======================================
  Files         334     320     -14
  Lines       18153   17992    -161
=======================================
- Hits        12570   12400    -170
- Misses       5583    5592      +9
yeah, that sounds good to me :)
force-pushed from 6c926d7 to 1d0dabe
Update: to accommodate both cases where tensors from different ranks have the same/different shape, the line to put the original tensor (holding the AD graph) back into the gathered list was added in two places in the code. Because of the two cases, I wrote two unittests to account for each. Interestingly, both pass.
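For reference, a minimal sketch of what such a gradient-preservation check can assert; the function name and setup below are illustrative (not the exact tests added in this PR) and assume a process group has already been initialized on every rank, e.g. via torch.multiprocessing.spawn.

```python
import torch
import torch.distributed as dist


def _check_all_gather_preserves_grad(rank: int, world_size: int) -> None:
    # hypothetical per-rank worker; assumes dist.init_process_group was already called
    tensor = torch.ones(2, requires_grad=True)
    gathered = [torch.zeros_like(tensor) for _ in range(world_size)]
    dist.all_gather(gathered, tensor)
    # re-insert the local tensor so its autograd graph survives the sync;
    # all_gather itself returns detached copies
    gathered[rank] = tensor
    # backprop through the "synced" result; without the re-insertion above,
    # tensor.grad would stay None
    torch.stack(gathered).sum().backward()
    assert tensor.grad is not None
```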
That is strange and warrants some more investigation...
I looked briefly into why the tests do not pass on older versions of PyTorch but could not find a reason.
I think we should only support this for PyTorch > 2.0 and then add this to the documentation.
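One possible way to express such a version gate, sketched here with packaging against torch.__version__ (not necessarily how torchmetrics implements its internal version checks):

```python
import torch
from packaging.version import Version

# feature flag: only keep the autograd-carrying tensor on PyTorch >= 2.0,
# where the behaviour is tested and supported
_TORCH_GREATER_EQUAL_2_0 = Version(torch.__version__) >= Version("2.0.0")


def maybe_keep_local_grad(gathered, tensor, rank):
    # hypothetical helper: restore the local entry only on supported versions
    if _TORCH_GREATER_EQUAL_2_0:
        gathered[rank] = tensor
    return gathered
```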
force-pushed from dc35370 to e693ace
force-pushed from ce5dca1 to ffc67f6
Seems the two test functions are now included twice in the test_ddp.py file, please check.
ah yes, sorry about that -- probably left behind when I tried rebasing and force-pushing with the pre-commit CI commits. Thanks for the review and changes @SkafteNicki. Unfortunately, it looks like the 2.X stable tests are failing now. This may suggest that something more subtle was happening with the failure of the torch < 2.0 tests earlier?
That is strange, but yes @cw-tan, it may very well be the case that this is also what caused the error in the older PyTorch versions.
@cw-tan this is really strange, I am trying to debug this locally and I am seeing that the tests are failing at random. E.g. if I run them 10 times in a row, I get an output from pytest like this, with 4/10 of the "same shape" tests failing and 5/10 of the "different shape" tests failing. But I cannot see that there is any randomization going on in the tests?
@SkafteNicki indeed this is an odd one. Though adding the …
@SkafteNicki sorry for the mess, I'm just trying to use the CI tests on all torch versions again, but hopefully incorporating several trials (to check for nondeterminism) and with the …
@cw-tan That is completely okay, whatever it takes to debug the issue. If you want to run tests multiple times locally, I recommend installing: …
What does this PR do?
Fixes #2745
Single-line enhancement proposed in #2745, that is, to enable the propagation of the autograd graph after the all_gather operation. This is useful for syncing loss functions in a ddp setting.
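A minimal sketch of the pattern this enables, assuming an already-initialized process group; the helper name below is illustrative rather than the actual torchmetrics API:

```python
import torch
import torch.distributed as dist


def sync_loss(loss: torch.Tensor) -> torch.Tensor:
    """Average a per-rank loss over all processes while keeping it differentiable
    on the local rank."""
    world_size = dist.get_world_size()
    gathered = [torch.zeros_like(loss) for _ in range(world_size)]
    dist.all_gather(gathered, loss)
    # all_gather returns detached tensors; restore the local entry so the
    # autograd graph survives the sync (the single-line change described above)
    gathered[dist.get_rank()] = loss
    return torch.stack(gathered).mean()
```

Calling `sync_loss(loss).backward()` on each rank then produces gradients with respect to the local rank's computation, which plain `all_gather` alone would not.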
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃
📚 Documentation preview 📚: https://torchmetrics--2754.org.readthedocs.build/en/2754/