If logging is disabled (or very infrequent), memory usage slowly grows because the max and average loss are kept in a list on-device: https://github.com/pytorch/torchtitan/blob/main/train.py#L353-L354

The training loop should offload these tensors to the CPU right after their aggregation is finished, especially because the logging prints will do that under the hood anyway.
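A minimal sketch of the suggested change, using illustrative names rather than the actual train.py variables: each aggregated loss tensor is moved to the CPU as soon as it is recorded, so the on-device list stops growing between log steps.

```python
import torch

# Illustrative accumulator; the real train.py keeps the per-step loss
# tensors on the GPU until the next log step.
losses_since_last_log: list[torch.Tensor] = []

def record_loss(loss: torch.Tensor) -> None:
    # Offload right after aggregation. Note that .cpu() forces a host/device
    # sync at every step, which is the cost discussed in the reply below --
    # the point of the issue is that the logging path pays that cost anyway.
    losses_since_last_log.append(loss.detach().cpu())

def log_metrics(step: int) -> None:
    # The values are already host-side, so logging needs no extra transfer.
    stacked = torch.stack(losses_since_last_log)
    print(f"step {step}: avg_loss={stacked.mean().item():.4f} "
          f"max_loss={stacked.max().item():.4f}")
    losses_since_last_log.clear()
```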
I think the reason we keep it is that we don't want to call .item() (which incurs a synchronization between CPU and GPU) unless we hit a log step. I do agree that if logging is disabled or infrequent, this overhead is unnecessary. That said, may I ask what the use case is where you'd log so infrequently that this overhead becomes unacceptable?
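For context, a rough sketch of the trade-off being described (illustrative code, not the actual train.py loop): keeping the per-step losses as GPU tensors defers the host/device sync to the log step, but the list holds those tensors on-device until then, and it grows without bound if logging never happens.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
losses: list[torch.Tensor] = []  # stays on `device` between log steps

for step in range(1, 101):
    loss = torch.rand((), device=device)  # stand-in for the training loss
    losses.append(loss.detach())          # no sync: just keeps a device tensor alive

    if step % 10 == 0:  # log_freq = 10
        # The only host/device sync happens here, once per log_freq steps.
        avg = torch.stack(losses).mean().item()
        print(f"step {step}: avg_loss={avg:.4f}")
        losses.clear()
    # Between log steps the list holds up to log_freq tensors on-device;
    # if the log branch is never taken, the list keeps growing.
```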
One case would be debugging or testing: you don't want to dump logs to TensorBoard, but then memory grows until OOM, since the code currently assumes that you eventually log.
Another is small, fast models, where the step time is short and logging that often is excessive. There you might notice the ragged look in the memory metrics.
Also, I personally don't like the default behaviour of smoothing the loss over the last log_freq steps; I find it misleading when compared to log_freq=1. My expectation was that log_freq > 1 would simply log the loss at that modulo step. But that might just be me.
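To make the distinction concrete, a toy sketch (plain numbers, not the torchtitan metrics code): the smoothing behaviour logs the mean of the last log_freq losses, while the expectation above would be to log only the single loss at the step where step % log_freq == 0.

```python
losses = [2.0, 1.0, 4.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]  # toy per-step losses
log_freq = 5

for step, loss in enumerate(losses, start=1):
    if step % log_freq == 0:
        smoothed = sum(losses[step - log_freq:step]) / log_freq  # smoothing behaviour
        instantaneous = loss                                     # log-at-modulo-step behaviour
        print(f"step {step}: smoothed={smoothed:.2f} vs instantaneous={instantaneous:.2f}")
```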