Replies: 1 comment

Marking as stale. No activity in 60 days.
In the README for the distributed optimizer, it is mentioned that bf16 training uses a combination of bf16 model parameters and fp32 model grads, and that the distributed optimizer's fp32 main gradients are the same as the model's fp32 gradients. However, as far as I know, in PyTorch the gradients produced by the forward and backward passes typically match the data type of the parameters. So bf16 model params should always yield bf16 model grads, and this does appear to be the case for fp16 training, where an extra copy of fp32 main grads is kept in the optimizer.
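For reference, here is a minimal standalone check of that behavior in plain PyTorch (not taken from Megatron-LM):

```python
import torch

# Plain PyTorch autograd: the gradient dtype follows the
# parameter dtype, for both fp16 and bf16.
for dtype in (torch.float16, torch.bfloat16):
    w = torch.nn.Parameter(torch.randn(8, dtype=dtype))
    x = torch.randn(8, dtype=dtype)
    (w * x).sum().backward()
    print(dtype, "->", w.grad.dtype)  # grad dtype matches param dtype
```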
Could you please explain how it is possible to have bf16 parameters with fp32 gradients in the context of bf16 training? I am also wondering why there is a difference between fp16 and bf16 training.
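My current guess (please correct me if this is wrong) is that the gradients are accumulated into a separate fp32 buffer, e.g. from a backward hook, so fp32 "main grads" can exist even though the parameter and its `.grad` are bf16. The sketch below is only my illustration of that idea, and the `main_grad` attribute name is my own placeholder, not a claim about the actual implementation:

```python
import torch

# Hypothetical sketch (my guess, not the actual Megatron-LM code):
# keep the parameter in bf16, but accumulate its gradient into a
# separate fp32 buffer from a backward hook.
w = torch.nn.Parameter(torch.randn(8, dtype=torch.bfloat16))
w.main_grad = torch.zeros(8, dtype=torch.float32)  # illustrative name

def accumulate_in_fp32(grad):
    w.main_grad += grad.float()  # upcast and accumulate in fp32
    return grad                  # .grad itself stays bf16

w.register_hook(accumulate_in_fp32)

x = torch.randn(8, dtype=torch.bfloat16)
(w * x).sum().backward()
print(w.grad.dtype, w.main_grad.dtype)  # torch.bfloat16 torch.float32
```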