Add unbatcher node #1416
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/data/1416
Note: Links to docs will display an error until the docs builds have been completed.
❌ 13 new failures as of commit b771bea with merge base 62092dd.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
```python
        self._batch_idx = 0

    def next(self) -> T:
        while self._batch_idx >= len(self._batch):
```
Should this be `if` instead of `while`? i.e. if `_batch_idx` overshoots the current `_batch`, get a new `_batch` and reset `_batch_idx` to 0.

EDIT: `while` also works though, in case the next batch is of size 0 and we want to skip that too.
@divyanshk Yes, I was worried about the next-batch-of-size-0 case. I'm assuming that's unlikely, but it's an edge case nonetheless.
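For context, here is a minimal runnable sketch of the unbatching pattern under discussion; the class name `Unbatcher`, the `source` attribute, and the `StopIteration` propagation are illustrative assumptions, not the PR's actual code.

```python
# A minimal sketch of the unbatching logic discussed above, assuming the
# node wraps an iterator of batches. Names here (Unbatcher, source) are
# hypothetical; the real node lives in torchdata's nodes API.
from typing import Iterator, List, TypeVar

T = TypeVar("T")


class Unbatcher:
    """Flattens an iterator of batches into an iterator of items."""

    def __init__(self, source: Iterator[List[T]]) -> None:
        self.source = source
        self._batch: List[T] = []
        self._batch_idx = 0

    def next(self) -> T:
        # `while` (rather than `if`) keeps pulling batches until a
        # non-empty one arrives, so zero-length batches are skipped.
        while self._batch_idx >= len(self._batch):
            self._batch = next(self.source)  # raises StopIteration at end
            self._batch_idx = 0
        item = self._batch[self._batch_idx]
        self._batch_idx += 1
        return item


# Usage: the empty middle batch is transparently skipped.
un = Unbatcher(iter([[1, 2], [], [3]]))
print(un.next(), un.next(), un.next())  # 1 2 3
```

With `if` instead of `while`, the empty middle batch above would be fetched but never re-checked, and indexing into it would raise an IndexError, which is the edge case being discussed.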
Test failure seems unrelated; not sure what caused this, but it seems weird to rely on a particular implementation of the RNG.
LGTM!
Required for #1415