`docs/source/perf/spmd_advanced.md`: 6 additions & 6 deletions
````diff
@@ -80,14 +80,14 @@ The main use case for `XLAShardedTensor` [[RFC](https://github.com/pytorch/xla/i
 There is also an ongoing effort to integrate <code>XLAShardedTensor</code> into <code>DistributedTensor</code> API to support XLA backend [[RFC](https://github.com/pytorch/pytorch/issues/92909)].
 
 ### DTensor Integration
-PyTorch has prototype-released [DTensor](https://github.com/pytorch/pytorch/blob/main/torch/distributed/_tensor/README.md) in 2.1.
+PyTorch has prototype-released [DTensor](https://github.com/pytorch/pytorch/blob/main/torch/distributed/tensor/README.md) since 2.1.
 We are integrating PyTorch/XLA SPMD into DTensor API [RFC](https://github.com/pytorch/pytorch/issues/92909). We have a proof-of-concept integration for `distribute_tensor`, which calls `mark_sharding` annotation API to shard a tensor and its computation using XLA:
 ```python
 import torch
-from torch.distributed import DeviceMesh, Shard, distribute_tensor
+from torch.distributed.tensor import init_device_mesh, Shard, distribute_tensor
 
 # distribute_tensor now works with `xla` backend using PyTorch/XLA SPMD.
````
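The hunk ends here, before the rest of the example in the docs file. For context, below is a minimal sketch of how the updated imports might be used with the `xla` backend. It assumes `init_device_mesh` accepts the `xla` device type and that SPMD mode is enabled via `torch_xla.runtime.use_spmd()`; the mesh shape, `num_devices`, and tensor sizes are illustrative and not taken from the source file.

```python
import torch
import torch_xla.runtime as xr
from torch.distributed.tensor import init_device_mesh, Shard, distribute_tensor

# Enable PyTorch/XLA's SPMD execution mode (assumed prerequisite for sharding).
xr.use_spmd()

# Build a 1-D logical mesh over all addressable XLA devices (illustrative shape).
num_devices = xr.global_runtime_device_count()
mesh = init_device_mesh("xla", mesh_shape=(num_devices,))

# Shard a large tensor along dim 0. Per the surrounding doc text, the `xla`
# backend routes this through the mark_sharding annotation API so that both
# the tensor and its computation are sharded by XLA.
big_tensor = torch.randn(100000, 88)
my_dtensor = distribute_tensor(big_tensor, mesh, [Shard(0)])
```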