intermediate_source/FSDP1_tutorial.rst (26 additions, 26 deletions)
@@ -4,19 +4,19 @@ Getting Started with Fully Sharded Data Parallel(FSDP)
**Author**: `Hamid Shojanazeri <https://github.com/HamidShojanazeri>`__, `Yanli Zhao <https://github.com/zhaojuanmao>`__, `Shen Li <https://mrshenli.github.io/>`__
.. note::
-   |edit| View and edit this tutorial in `github <https://github.com/pytorch/tutorials/blob/main/intermediate_source/FSDP_tutorial.rst>`__.
+   |edit| FSDP1 is deprecated. Please check out `FSDP2 tutorial <https://docs.pytorch.org/tutorials/intermediate/FSDP_tutorial.html>`_.
Training AI models at a large scale is a challenging task that requires a lot of compute power and resources.
It also comes with considerable engineering complexity to handle the training of these very large models.
`PyTorch FSDP <https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/>`__, released in PyTorch 1.11, makes this easier.
In this tutorial, we show how to use `FSDP APIs <https://pytorch.org/docs/stable/fsdp.html>`__ for simple MNIST models that can be extended to other larger models, such as `HuggingFace BERT models <https://huggingface.co/blog/zero-deepspeed-fairscale>`__ and
`GPT 3 models up to 1T parameters <https://pytorch.medium.com/training-a-1-trillion-parameter-model-with-pytorch-fully-sharded-data-parallel-on-aws-3ac13aa96cff>`__. The sample DDP MNIST code is courtesy of `Patrick Hu <https://github.com/yqhu/>`_.
How FSDP works
--------------

In `DistributedDataParallel <https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html>`__ (DDP) training, each process/worker owns a replica of the model and processes a batch of data; finally, it uses all-reduce to sum up gradients over the different workers. In DDP the model weights and optimizer states are replicated across all workers. FSDP is a type of data parallelism that shards model parameters, optimizer states and gradients across DDP ranks.

When training with FSDP, the GPU memory footprint is smaller than when training with DDP across all workers. This makes the training of some very large models feasible by allowing larger models or batch sizes to fit on device. This comes with the cost of increased communication volume. The communication overhead is reduced by internal optimizations like overlapping communication and computation.
@@ -44,7 +44,7 @@ At a high level FSDP works as follow:
* Run all_gather to collect all shards from all ranks to recover the full parameter in this FSDP unit
* Run backward computation
* Run reduce_scatter to sync gradients
* Discard parameters

One way to view FSDP's sharding is to decompose the DDP gradient all-reduce into reduce-scatter and all-gather. Specifically, during the backward pass, FSDP reduces and scatters gradients, ensuring that each rank possesses a shard of the gradients. Then it updates the corresponding shard of the parameters in the optimizer step. Finally, in the subsequent forward pass, it performs an all-gather operation to collect and combine the updated parameter shards.
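To make this concrete, here is a minimal sketch of wrapping a model with FSDP1's ``FullyShardedDataParallel`` (assuming a single node and a launch via ``torchrun``, so the rendezvous environment variables are already set):

.. code-block:: python

    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    dist.init_process_group("nccl")       # env:// rendezvous set up by torchrun
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(1024, 1024).cuda()
    model = FSDP(model)                   # each rank now holds only a shard of the parameters

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = model(torch.randn(8, 1024, device="cuda")).sum()
    loss.backward()                       # gradients are reduce-scattered across ranks
    optimizer.step()                      # each rank updates only its own shard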
@@ -57,7 +57,7 @@ One way to view FSDP's sharding is to decompose the DDP gradient all-reduce into
How to use FSDP
---------------

Here we use a toy model to run training on the MNIST dataset for demonstration purposes. The APIs and logic can be applied to training larger models as well.

*Setup*
@@ -116,7 +116,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”.
    def cleanup():
        dist.destroy_process_group()
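    # NOTE: the matching setup() helper is cut off in this diff view. A typical
    # counterpart looks like the following sketch (the master address and port
    # values are assumptions; `os` and `torch.distributed as dist` are imported
    # at the top of FSDP_mnist.py):
    def setup(rank, world_size):
        os.environ["MASTER_ADDR"] = "localhost"
        os.environ["MASTER_PORT"] = "12355"
        # initialize the default process group for collective communication
        dist.init_process_group("nccl", rank=rank, world_size=world_size)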
2.1 Define our toy model for handwritten digit classification.
.. code-block:: python
@@ -131,7 +131,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”.
            self.fc2 = nn.Linear(128, 10)

        def forward(self, x):
            x = self.conv1(x)
            x = F.relu(x)
            x = self.conv2(x)
@@ -146,7 +146,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”.
            output = F.log_softmax(x, dim=1)
            return output
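The diff only shows fragments of the model definition. For reference, a self-contained version of this toy network, with the elided middle filled in following the standard PyTorch MNIST example (the dropout layers, pooling, and flatten step are assumptions), looks like:

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 32, 3, 1)
            self.conv2 = nn.Conv2d(32, 64, 3, 1)
            self.dropout1 = nn.Dropout(0.25)
            self.dropout2 = nn.Dropout(0.5)
            self.fc1 = nn.Linear(9216, 128)
            self.fc2 = nn.Linear(128, 10)

        def forward(self, x):
            x = self.conv1(x)
            x = F.relu(x)
            x = self.conv2(x)
            x = F.relu(x)
            x = F.max_pool2d(x, 2)
            x = self.dropout1(x)
            x = torch.flatten(x, 1)          # 64 * 12 * 12 = 9216 features
            x = self.fc1(x)
            x = F.relu(x)
            x = self.dropout2(x)
            x = self.fc2(x)
            output = F.log_softmax(x, dim=1)
            return output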
2.2 Define a train function

.. code-block:: python
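    # NOTE: the body of this code block is cut off in the diff view. Below is a
    # sketch of a typical distributed training loop for this tutorial; the exact
    # signature and the loss bookkeeping are assumptions (torch, F, and dist are
    # imported at the top of FSDP_mnist.py).
    def train(args, model, rank, world_size, train_loader, optimizer, epoch, sampler=None):
        model.train()
        ddp_loss = torch.zeros(2).to(rank)
        if sampler:
            sampler.set_epoch(epoch)
        for data, target in train_loader:
            data, target = data.to(rank), target.to(rank)
            optimizer.zero_grad()
            output = model(data)
            loss = F.nll_loss(output, target, reduction="sum")
            loss.backward()
            optimizer.step()
            ddp_loss[0] += loss.item()
            ddp_loss[1] += len(data)
        # aggregate the loss across ranks for logging
        dist.all_reduce(ddp_loss, op=dist.ReduceOp.SUM)
        if rank == 0:
            print(f"Train Epoch: {epoch} \tLoss: {ddp_loss[0] / ddp_loss[1]:.6f}")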
@@ -169,7 +169,7 @@ We add the following code snippets to a python script “FSDP_mnist.py”.
@@ -336,18 +336,18 @@ The following is the peak memory usage from FSDP MNIST training on g4dn.12.xlarg
FSDP Peak Memory Usage

Apply *auto_wrap_policy*; otherwise, FSDP will put the entire model in one FSDP unit, which will reduce computation efficiency and memory efficiency.
The way it works is this: suppose your model contains 100 Linear layers. If you do FSDP(model), there will be only one FSDP unit, which wraps the entire model.
In that case, the allgather would collect the full parameters for all 100 linear layers, and hence won't save CUDA memory for parameter sharding.
Also, since there is only one blocking allgather call for all 100 linear layers, there will be no overlapping of communication and computation between layers.

To avoid that, you can pass in an auto_wrap_policy, which will seal the current FSDP unit and start a new one automatically when the specified condition is met (e.g., size limit).
In that way you will have multiple FSDP units, and only one FSDP unit needs to collect full parameters at a time. E.g., suppose you have 5 FSDP units, and each wraps 20 linear layers.
Then, in the forward pass, the 1st FSDP unit will allgather parameters for the first 20 linear layers, do computation, discard the parameters and then move on to the next 20 linear layers. So, at any point in time, each rank only materializes parameters/grads for 20 linear layers instead of 100.

To do so, in 2.4 we define the auto_wrap_policy and pass it to the FSDP wrapper. In the following example, my_auto_wrap_policy defines that a layer could be wrapped or sharded by FSDP if the number of parameters in this layer is larger than 100.
If the number of parameters in this layer is smaller than 100, it will be wrapped together with other small layers by FSDP.
Finding an optimal auto wrap policy is challenging; PyTorch will add auto tuning for this config in the future. Without an auto tuning tool, it is good to profile your workflow using different auto wrap policies experimentally and find the optimal one.
.. code-block:: python
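    # NOTE: the original snippet under this directive is cut off in the diff
    # view. A minimal sketch of a size-based auto wrap policy using the
    # size_based_auto_wrap_policy helper could look like this (torch, Net and
    # rank come from the surrounding script):
    import functools
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
    from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy

    my_auto_wrap_policy = functools.partial(
        size_based_auto_wrap_policy, min_num_params=100
    )
    torch.cuda.set_device(rank)
    model = Net().to(rank)
    model = FSDP(model, auto_wrap_policy=my_auto_wrap_policy)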
@@ -388,7 +388,7 @@ Applying the auto_wrap_policy, the model would be as follows:
CUDA event elapsed time on training loop 41.89130859375sec

The following is the peak memory usage from FSDP MNIST training with the auto_wrap policy on a g4dn.12.xlarge AWS EC2 instance with 4 GPUs, captured from PyTorch Profiler.

It can be observed that the peak memory usage on each device is smaller compared to FSDP without the auto wrap policy applied, from ~75 MB to 66 MB.
@@ -398,13 +398,13 @@ It can be observed that the peak memory usage on each device is smaller compared
FSDP Peak Memory Usage using Auto_wrap policy

*CPU Off-loading*: In case the model is so large that even with FSDP it wouldn't fit into GPUs, CPU offloading can be helpful here.

Currently, only parameter and gradient CPU offload is supported. It can be enabled by passing in cpu_offload=CPUOffload(offload_params=True).

Note that this currently implicitly enables gradient offloading to CPU in order for params and grads to be on the same device to work with the optimizer. This API is subject to change. The default is None, in which case there will be no offloading.

Using this feature may slow down training considerably, due to frequent copying of tensors from host to device, but it could help improve memory efficiency and train larger scale models.

In 2.4 we just add it to the FSDP wrapper, as in the sketch below.
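The original snippet is cut off by the diff; a minimal sketch of adding CPU offloading to the FSDP wrapper (reusing the my_auto_wrap_policy defined above) could look like:

.. code-block:: python

    from torch.distributed.fsdp import CPUOffload
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    # Offload parameters (and, implicitly, gradients) to CPU when they are not
    # needed on the GPU; this trades training speed for memory headroom.
    model = FSDP(
        model,
        auto_wrap_policy=my_auto_wrap_policy,
        cpu_offload=CPUOffload(offload_params=True),
    )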
@@ -430,7 +430,7 @@ Compare it with DDP, if in 2.4 we just normally wrap the model in DPP, saving th
CUDA event elapsed time on training loop 39.77766015625sec

The following is the peak memory usage from DDP MNIST training on a g4dn.12.xlarge AWS EC2 instance with 4 GPUs, captured from PyTorch profiler.
@@ -440,9 +440,9 @@ The following is the peak memory usage from DDP MNIST training on g4dn.12.xlarge
DDP Peak Memory Usage using Auto_wrap policy

Considering the toy example and the tiny MNIST model we defined here, we can observe the difference between the peak memory usage of DDP and FSDP.
In DDP each process holds a replica of the model, so the memory footprint is higher compared to FSDP, which shards the model parameters, optimizer states and gradients over DDP ranks.
The peak memory usage using FSDP with the auto_wrap policy is the lowest, followed by FSDP and DDP.

Also, looking at timings, considering the small model and running the training on a single machine, FSDP with and without the auto_wrap policy performed almost as fast as DDP.
This example does not represent most real applications; for a detailed analysis and comparison between DDP and FSDP please refer to this `blog post <https://pytorch.medium.com/6c8da2be180d>`__.
intermediate_source/FSDP_tutorial.rst

**Author**: `Wei Feng <https://github.com/weifengpy>`__, `Will Constable <https://github.com/wconstab>`__, `Yifan Mao <https://github.com/mori360>`__

.. note::
-   |edit| Check out the code in this tutorial from `pytorch/examples <https://github.com/pytorch/examples/tree/main/distributed/FSDP2>`__.
+   |edit| Check out the code in this tutorial from `pytorch/examples <https://github.com/pytorch/examples/tree/main/distributed/FSDP2>`_. FSDP1 will be deprecated. The old tutorial can be found `here <https://docs.pytorch.org/tutorials/intermediate/FSDP1_tutorial.html>`_.

How FSDP2 works
---------------
@@ -166,7 +166,7 @@ Explicit prefetching works well in following situation:
Enabling Mixed Precision
~~~~~~~~~~~~~~~~~~~~~~~~

-   FSDP2 offers a flexible `mixed precision policy <https://docs.pytorch.org/docs/main/distributed.fsdp.fully_shard.html#torch.distributed.fsdp.MixedPrecisionPolicy>`_ to speed up training. One typical use case are
+   FSDP2 offers a flexible `mixed precision policy <https://docs.pytorch.org/docs/main/distributed.fsdp.fully_shard.html#torch.distributed.fsdp.MixedPrecisionPolicy>`_ to speed up training. One typical use case is

* Casting float32 parameters to bfloat16 for forward/backward computation, see ``param_dtype=torch.bfloat16``
* Upcasting gradients to float32 for reduce-scatter to preserve accuracy, see ``reduce_dtype=torch.float32``
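A minimal sketch of combining these two settings with ``fully_shard`` (the ``model`` and its ``layers`` container are placeholders) could look like:

.. code-block:: python

    import torch
    from torch.distributed.fsdp import MixedPrecisionPolicy, fully_shard

    # Run forward/backward in bfloat16, but reduce-scatter gradients in float32
    # to preserve the accuracy of the gradient reduction.
    mp_policy = MixedPrecisionPolicy(
        param_dtype=torch.bfloat16,
        reduce_dtype=torch.float32,
    )

    # Apply fully_shard per layer first, then to the root module.
    for layer in model.layers:
        fully_shard(layer, mp_policy=mp_policy)
    fully_shard(model, mp_policy=mp_policy)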
@@ -399,13 +399,13 @@ sync_module_states=True/False: Moved to DCP. User can broadcast state dicts from
forward_prefetch: Manual control over prefetching is possible with (see the sketch below)

* Manually call ``fsdp_module.unshard()``
-   * Use these APIs to control automatic prefetching, ``set_modules_to_forward_prefetch`` and ``set_modules_to_backward_prefetch``
+   * Use these APIs to control automatic prefetching, `set_modules_to_forward_prefetch <https://docs.pytorch.org/docs/main/distributed.fsdp.fully_shard.html#torch.distributed.fsdp.FSDPModule.set_modules_to_forward_prefetch>`_ and `set_modules_to_backward_prefetch <https://docs.pytorch.org/docs/main/distributed.fsdp.fully_shard.html#torch.distributed.fsdp.FSDPModule.set_modules_to_backward_prefetch>`_
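A brief sketch of this manual control (``fsdp_module`` and the ``layers`` list are placeholders for modules already wrapped with ``fully_shard``) might look like:

.. code-block:: python

    # Explicitly all-gather the parameters of one FSDP module ahead of time.
    fsdp_module.unshard()

    # Or register prefetching hints: while layers[i] runs forward, FSDP2 will
    # prefetch the parameters of the next two layers.
    for i, layer in enumerate(layers):
        layer.set_modules_to_forward_prefetch(layers[i + 1 : i + 3])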
limit_all_gathers: No longer needed, because ``fully_shard`` removed CPU synchronization

use_orig_params: Original params are always used (no more flat parameter)

ignored_params and ignored_states: `ignored_params <https://docs.pytorch.org/docs/main/distributed.fsdp.fully_shard.html#torch.distributed.fsdp.fully_shard>`_