I trained the Swin-L model with a larger input (1200×2000) and the batch size set to 1, in DDP mode. The training logs report a max memory of 11256 MB, but the actual GPU memory usage is nearly 26 GB. Is this normal?
I think this large model was trained on a V100, so 26 GB could be possible. The max memory shown in the training logs is always less than the actual cost, but the difference here is indeed large.
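If it helps to see where the gap comes from: the number in the logs is typically PyTorch's peak *allocated* memory, while `nvidia-smi` shows everything the process holds, including the caching allocator's reserved-but-unused blocks and the CUDA context. A minimal sketch (assuming a PyTorch/mmcv-style setup) for comparing the counters:

```python
import torch

# Allocate something so the counters have data to report
# (requires a CUDA-capable GPU).
x = torch.randn(1024, 1024, device="cuda")

# What mmcv-style training logs typically report: peak memory
# allocated to live tensors.
alloc_mb = torch.cuda.max_memory_allocated() / 1024 ** 2

# What the caching allocator actually holds on to; closer to (but
# still below) the nvidia-smi figure, which additionally includes
# the CUDA context (several hundred MB per process) and any
# fragmentation overhead.
reserved_mb = torch.cuda.max_memory_reserved() / 1024 ** 2

print(f"max allocated: {alloc_mb:.0f} MB, max reserved: {reserved_mb:.0f} MB")
```

Comparing `max_memory_reserved()` against the `nvidia-smi` reading for your process should account for most of the 11 GB vs 26 GB discrepancy.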