Update docstring of handle_sharded_tensor_elasticity #153

Open · wants to merge 1 commit into base: main
6 changes: 3 additions & 3 deletions torchsnapshot/manifest_ops.py

@@ -126,8 +126,8 @@ def handle_sharded_tensor_elasticity(
     :class:`ShardedTensor` can be elastic in several ways:
 
     - A rank loads a portion of a sharded tensor different from what it saved
-    - A rank loads a sharded tensor that it did not participate in saving
-    - A rank doesn't load a sharded tensor that it participated in saving
+    - A rank loads a sharded tensor that did not participate in saving
+    - A rank doesn't load a sharded tensor that participated in saving
 
     The first scenario is taken care of by :func:`get_manifest_for_rank`, which
     makes all shards available to all instances of :class:`ShardedTensorEntry`.
@@ -143,7 +143,7 @@ def handle_sharded_tensor_elasticity(
 
     NOTE: this function only takes effect if all sharded tensors are at the
     root of the state dict. This means the elastic behavior is supported for
-    most model but not supported for most optimizers.
+    most models but not supported for most optimizers.
 
     Args:
         manifest: The local manifest for the rank.
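
The first bullet in the diff above (a rank loading a portion of a sharded tensor different from what it saved) reduces to overlapping shard intervals once every saved shard is visible to every rank. The sketch below is illustrative only, with hypothetical names and a one-dimensional layout; it is not torchsnapshot's implementation of :func:`get_manifest_for_rank`, just the interval arithmetic the scenario implies.

# Illustrative sketch, not torchsnapshot code: shards as (offset, length)
# along a single dimension.
from typing import List, Tuple

Shard = Tuple[int, int]  # (offset, length)

def overlapping_saved_shards(saved: List[Shard], requested: Shard) -> List[Shard]:
    """Return the saved shards that intersect the requested region."""
    req_start, req_len = requested
    req_end = req_start + req_len
    hits = []
    for offset, length in saved:
        # Two half-open intervals overlap iff each starts before the other ends.
        if offset < req_end and offset + length > req_start:
            hits.append((offset, length))
    return hits

# Rank 0 saved [0, 50) and rank 1 saved [50, 100); after resharding, a rank
# that now wants [25, 50) only needs rank 0's saved shard.
print(overlapping_saved_shards([(0, 50), (50, 50)], (25, 25)))  # [(0, 50)]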
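
The NOTE in the second hunk can be made concrete with plain state dict shapes. The snippet below is an illustrative sketch, not part of this PR: ordinary tensors stand in for ShardedTensor instances, and the only point is where the tensors sit in each dict.

# Illustrative sketch only; plain tensors stand in for ShardedTensor
# instances to show *where* they live in each state dict.
import torch

# A typical model state dict keys its (sharded) tensors at the root,
# directly by parameter name, so the elastic behavior applies.
model_state_dict = {
    "embedding.weight": torch.zeros(4, 4),  # would be a ShardedTensor
    "linear.weight": torch.zeros(4, 4),     # would be a ShardedTensor
}

# A typical optimizer state dict nests its tensors under "state" -> param id
# -> buffer name, so they are not at the root of the state dict.
optimizer_state_dict = {
    "state": {
        0: {"exp_avg": torch.zeros(4, 4), "exp_avg_sq": torch.zeros(4, 4)},
    },
    "param_groups": [{"lr": 1e-3, "params": [0]}],
}

Because the optimizer's tensors live under "state" rather than at the root, the elastic behavior described in the docstring does not apply to them, which is what the updated wording ("most models but not supported for most optimizers") spells out.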