🚀 The feature
Leverage local disk for async snapshot.
Motivation, pitch
TorchSnapshot supports async snapshot, which allows training to resume before the storage I/O of a snapshot completes. For training workloads that are not storage I/O bound, this results in better resource utilization.
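For context, a minimal sketch of how async snapshot is taken today, assuming the `Snapshot.async_take` / `PendingSnapshot.wait` API shown in the TorchSnapshot README; the model, optimizer, and path below are placeholders:

```python
import torch
import torchsnapshot

# Placeholder model/optimizer so the example is self-contained.
model = torch.nn.Linear(8, 8)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
app_state = {"model": model, "optimizer": optimizer}

# Kick off an async snapshot: control returns to training once all tensors
# are staged (in RAM today), while storage I/O continues in the background.
pending = torchsnapshot.Snapshot.async_take(
    path="/tmp/my_snapshot",  # placeholder path
    app_state=app_state,
)

# Training resumes without waiting for the storage I/O to finish.
for _ in range(10):
    optimizer.zero_grad()
    model(torch.randn(4, 8)).sum().backward()
    optimizer.step()

# Block before taking the next snapshot (or at the end of the job).
snapshot = pending.wait()
```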
Today the feature is implemented roughly as follows:
1. Calculate a RAM budget based on available host resources.
2. Pipeline data from GPU -> RAM -> storage while keeping RAM usage under the budget.
3. Once all data has been moved to either RAM or storage, give control back to training and continue the storage I/O in the background.
This works well when host RAM is abundant. However, the smaller the RAM budget, the smaller the benefit async snapshot offers over sync snapshot. In such cases, if the target storage is slow (e.g. cloud storage), async snapshot can benefit from leveraging local disk as a staging area in addition to RAM.
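To make the proposal concrete, here is a hypothetical sketch of such a staging policy. None of the names below (`RAM_BUDGET_BYTES`, `DISK_STAGING_DIR`, `stage`) are part of TorchSnapshot; they only illustrate the idea of spilling to local disk once the RAM budget is exhausted:

```python
import os
import tempfile

import torch

# Illustrative budget and staging directory; a real implementation would
# derive these from host resources and configuration.
RAM_BUDGET_BYTES = 2 * 1024**3
DISK_STAGING_DIR = tempfile.mkdtemp(prefix="snapshot_staging_")

ram_used = 0
staged = []  # (location, payload) pairs handed to the background uploader


def stage(name: str, tensor: torch.Tensor) -> None:
    """Copy a tensor off the device into RAM if the budget allows, else onto local disk."""
    global ram_used
    nbytes = tensor.element_size() * tensor.numel()
    if ram_used + nbytes <= RAM_BUDGET_BYTES:
        # Fast path: keep the staged copy in host RAM.
        staged.append(("ram", tensor.detach().to("cpu", copy=True)))
        ram_used += nbytes
    else:
        # Spill path: write the copy to local disk so training can proceed
        # even when RAM is scarce; the uploader streams it to storage later.
        path = os.path.join(DISK_STAGING_DIR, f"{name}.pt")
        torch.save(tensor.detach().cpu(), path)
        staged.append(("disk", path))


# Example: stage a model's parameters before returning control to training.
model = torch.nn.Linear(8, 8)
for name, param in model.named_parameters():
    stage(name, param)
```

An actual implementation would presumably also free RAM and disk staging space as the background uploader drains it, and fall back to blocking only when neither staging area can absorb the data.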
Alternatives
No response
Additional context
No response