From c26ff2af1d0095403468c125a03f031019e9e035 Mon Sep 17 00:00:00 2001
From: Zhanghao Wu
Date: Thu, 12 Dec 2024 15:53:56 -0800
Subject: [PATCH] Update examples/distributed-pytorch/README.md

Co-authored-by: Romil Bhardwaj
---
 examples/distributed-pytorch/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/distributed-pytorch/README.md b/examples/distributed-pytorch/README.md
index 29ff5194d77..68324b5ba9a 100644
--- a/examples/distributed-pytorch/README.md
+++ b/examples/distributed-pytorch/README.md
@@ -23,7 +23,7 @@ The following command will spawn 2 nodes with 2 L4 GPU each:
 
 `sky launch -c train.yaml`
 
-In the [train.yaml](./train.yaml), we use `torchrun` to launch the training and set the arguments for distributed training using environment variables provided by SkyPilot.
+In [train.yaml](./train.yaml), we use `torchrun` to launch the training and set the arguments for distributed training using [environment variables](https://docs.skypilot.co/en/latest/running-jobs/environment-variables.html#skypilot-environment-variables) provided by SkyPilot.
 
 ```yaml
 run: |
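
The `run: |` block that the hunk is quoting is truncated in this patch. For context, a sketch of what such a `torchrun` invocation typically looks like in a SkyPilot task YAML, using SkyPilot's documented environment variables (`SKYPILOT_NODE_IPS`, `SKYPILOT_NODE_RANK`, `SKYPILOT_NUM_NODES`, `SKYPILOT_NUM_GPUS_PER_NODE`); the script name `train.py` and the rendezvous port are placeholders, not taken from this patch:

```yaml
num_nodes: 2

resources:
  accelerators: L4:2

run: |
  # First IP in SKYPILOT_NODE_IPS serves as the rendezvous master.
  MASTER_ADDR=$(echo "$SKYPILOT_NODE_IPS" | head -n1)
  torchrun \
    --nnodes=$SKYPILOT_NUM_NODES \
    --nproc_per_node=$SKYPILOT_NUM_GPUS_PER_NODE \
    --node_rank=$SKYPILOT_NODE_RANK \
    --master_addr=$MASTER_ADDR \
    --master_port=8008 \
    train.py  # placeholder training script
```

SkyPilot runs the same `run` script on every node, so the per-node differences (`--node_rank`, master address) come entirely from these environment variables.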