From 6e5fbd0d78a566dd1b705928254270a60a55ed65 Mon Sep 17 00:00:00 2001
From: Tianyu Liu
Date: Tue, 7 Jan 2025 14:44:46 -0800
Subject: [PATCH] Update

[ghstack-poisoned]
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 8f5648d6..0db6afcc 100644
--- a/README.md
+++ b/README.md
@@ -35,7 +35,7 @@ Our guiding principles when building `torchtitan`:
    - [FSDP2](docs/fsdp.md) with per-parameter sharding
    - [Tensor Parallel](https://pytorch.org/docs/stable/distributed.tensor.parallel.html) (including [async TP](https://discuss.pytorch.org/t/distributed-w-torchtitan-introducing-async-tensor-parallelism-in-pytorch/209487))
    - [Pipeline Parallel](https://discuss.pytorch.org/t/distributed-w-torchtitan-training-with-zero-bubble-pipeline-parallelism/214420)
-   - Context Parallel
+   - [Context Parallel](https://discuss.pytorch.org/t/distributed-w-torchtitan-breaking-barriers-training-long-context-llms-with-1m-sequence-length-in-pytorch-using-context-parallel/215082)
 2. Selective layer and operator activation checkpointing
 3. [Distributed checkpointing](https://discuss.pytorch.org/t/distributed-w-torchtitan-optimizing-checkpointing-efficiency-with-pytorch-dcp/211250) (including async checkpointing)
    - [Interoperable checkpoints](docs/checkpoint.md) which can be loaded directly into [`torchtune`](https://github.com/pytorch/torchtune) for fine-tuning