diff --git a/docs/source/tutorial/multi_node_multi_gpu_vanilla.rst b/docs/source/tutorial/multi_node_multi_gpu_vanilla.rst
index 2db1a4bccf4e..41f60cb6a4c5 100644
--- a/docs/source/tutorial/multi_node_multi_gpu_vanilla.rst
+++ b/docs/source/tutorial/multi_node_multi_gpu_vanilla.rst
@@ -164,7 +164,7 @@ Then, open another Slurm login terminal, and type:
     squeue -u
     export jobid=
 
-In this step, we are saving the job ID of our slurm job from the first step.
+In this step, we are saving the job ID of our Slurm job from the first step.
 
 Now, we are going to pull a container with a functional :pyg:`PyG` and CUDA environment onto each node:
 
@@ -175,7 +175,7 @@ Now, we are going to pull a container with a functional :pyg:`PyG` and CUDA envi
     --container-mounts=/ogb-papers100m/:/workspace/dataset true
 
 NVIDIA provides a ready-to-use :pyg:`PyG` container that is updated each month with the latest from NVIDIA and :pyg:`PyG`.
-You can sign up for early access at `here `_.
+You can sign up for early access `here `_.
 General availability on `NVIDIA NGC `_ is set for the end of 2023.
 Alternatively, see `docker.com `_ for information on how to create your own container.
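
Not part of the patch itself: a minimal sketch of the step the changed sentence describes, assuming a standard Slurm setup where ``$USER`` is your Slurm account and ``123456`` stands in for whatever job ID ``squeue`` prints:

.. code-block:: console

    squeue -u $USER       # list your pending/running jobs and note the job ID
    export jobid=123456   # replace 123456 with the job ID shown by squeue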