Describe the bug
I trained llama-3.1-8B from scratch with 4 different distributed training strategies, using the same dataset and hyperparameters for every run, but the training losses I got are inconsistent across the strategies.
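No logs are attached yet. As a minimal sketch for quantifying "inconsistent" (the log file names and the `lm loss:` line format are assumptions based on Megatron-LM-style training output, not taken from the actual runs), the per-iteration losses of the four runs can be compared like this:

```python
# Hypothetical helper: diff per-iteration "lm loss" values parsed from the
# stdout logs of the four runs. Paths and log format are assumptions.
import re

LOG_FILES = {
    "strategy_a": "logs/strategy_a.log",
    "strategy_b": "logs/strategy_b.log",
    "strategy_c": "logs/strategy_c.log",
    "strategy_d": "logs/strategy_d.log",
}

# Matches lines like: " iteration  100/ 1000 | ... | lm loss: 7.123456E+00 | ..."
LOSS_RE = re.compile(r"iteration\s+(\d+)/.*?lm loss:\s*([0-9.Ee+-]+)")

def parse_losses(path):
    """Return {iteration: loss} parsed from one training log."""
    losses = {}
    with open(path) as f:
        for line in f:
            m = LOSS_RE.search(line)
            if m:
                losses[int(m.group(1))] = float(m.group(2))
    return losses

curves = {name: parse_losses(path) for name, path in LOG_FILES.items()}
common_iters = sorted(set.intersection(*(set(c) for c in curves.values())))

for it in common_iters:
    values = {name: curves[name][it] for name in LOG_FILES}
    spread = max(values.values()) - min(values.values())
    print(f"iter {it:>6}: {values} | max spread = {spread:.6f}")
```

A diff like this would also show whether the curves diverge from the very first iterations (suggesting initialization or data-ordering differences) or drift apart gradually during training.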
To Reproduce
Steps to reproduce the behavior. The easier it is to reproduce, the faster it will get maintainer attention.
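The exact four strategies are not listed above; purely as a hypothetical illustration of the setup (flag names follow Megatron-LM's pretraining launcher and are an assumption), the runs would differ only in their parallelism arguments while the data path, seed, batch sizes, and optimizer settings stay identical:

```python
# Hypothetical parallelism layouts for the four runs; all other arguments
# (data, seed, batch sizes, optimizer) are held fixed across runs.
# Flag names follow Megatron-LM's pretraining scripts and are assumptions here.
STRATEGIES = {
    "dp_only": ["--tensor-model-parallel-size", "1", "--pipeline-model-parallel-size", "1"],
    "tp2":     ["--tensor-model-parallel-size", "2", "--pipeline-model-parallel-size", "1"],
    "pp2":     ["--tensor-model-parallel-size", "1", "--pipeline-model-parallel-size", "2"],
    "tp2_pp2": ["--tensor-model-parallel-size", "2", "--pipeline-model-parallel-size", "2"],
}

for name, extra_args in STRATEGIES.items():
    print(f"{name}: {' '.join(extra_args)}")
```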
Expected behavior
Regardless of the distributed training strategy used, the training loss should remain consistent.
Stack trace/logs
Environment (please complete the following information):
Proposed fix
If you have a proposal for how to fix the issue, state it here or link to a PR.
Additional context
Add any other context about the problem here.