In all of the above cases, `ghcr.io/nvidia/jax:XXX` points to the most recent nightly build of the container for `XXX`. These containers are also tagged as `ghcr.io/nvidia/jax:XXX-YYYY-MM-DD`, if a stable reference is required.
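As a quick illustration of the two tag styles (the `pax` image name comes from the support matrix below; the date is purely hypothetical):

```shell
# Floating tag: always follows the most recent nightly build
NIGHTLY=ghcr.io/nvidia/jax:pax

# Date-pinned tag: stays fixed, suitable for reproducible CI runs
PINNED=ghcr.io/nvidia/jax:pax-2024-01-01

echo "$PINNED"
# docker pull "$PINNED"
```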
This repo currently hosts a public CI for JAX on NVIDIA GPUs and covers several JAX libraries, including T5X, Paxml, Transformer Engine, and Pallas, with more to come.
We currently support the following frameworks and models. More details about each model and the available containers can be found in their respective READMEs.
| Framework | Supported Models | Use-cases | Container |
|---|---|---|---|
| Paxml | GPT, LLaMA, MoE | pre-training, fine-tuning, LoRA | `ghcr.io/nvidia/jax:pax` |
| T5X | T5, ViT | pre-training, fine-tuning | `ghcr.io/nvidia/jax:t5x` |
| T5X | Imagen | pre-training | `ghcr.io/nvidia/t5x:imagen-2023-10-02.v3` |
| Big Vision | PaliGemma | fine-tuning, evaluation | `ghcr.io/nvidia/jax:gemma` |
| levanter | GPT, LLaMA, MPT, Backpacks | pre-training, fine-tuning | `ghcr.io/nvidia/jax:levanter` |
| maxtext | LLaMA, Gemma | pre-training | `ghcr.io/nvidia/jax:maxtext` |
We will update this table as new models become available, so stay tuned.
The JAX image is embedded with the following flags and environment variables for performance tuning:
| XLA Flags | Value | Explanation |
|---|---|---|
| `--xla_gpu_enable_latency_hiding_scheduler` | `true` | allows XLA to move communication collectives to increase overlap with compute kernels |
| `--xla_gpu_enable_triton_gemm` | `false` | use cuBLAS instead of Triton GeMM kernels |
| Environment Variable | Value | Explanation |
|---|---|---|
| `CUDA_DEVICE_MAX_CONNECTIONS` | `1` | use a single queue for GPU work to lower latency of stream operations; OK since XLA already orders launches |
| `NCCL_NVLS_ENABLE` | `0` | disables NVLink SHARP (1). Future releases will re-enable this feature. |
There are various other XLA flags users can set to improve performance. For a detailed explanation of these flags, please refer to the GPU performance doc. XLA flags can be tuned per workflow. For example, each script in contrib/gpu/scripts_gpu sets its own XLA flags.
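As a minimal sketch of per-workflow tuning, flags can be set for a single process through the `XLA_FLAGS` environment variable before JAX is imported (the flag values below mirror the table above):

```python
import os

# XLA reads this variable when the backend initializes, so it must be
# set before `import jax` (or before the first JAX operation runs).
os.environ["XLA_FLAGS"] = " ".join([
    "--xla_gpu_enable_latency_hiding_scheduler=true",
    "--xla_gpu_enable_triton_gemm=false",
])
print(os.environ["XLA_FLAGS"])
```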
See this page for more information about how to profile JAX programs on GPU.
`bus error` when running JAX in a docker container
Solution:

```shell
docker run -it --shm-size=1g ...
```
Explanation:

The `bus error` might occur due to the size limitation of `/dev/shm`. You can address this by increasing the shared memory size using the `--shm-size` option when launching your container.
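To check how much shared memory is actually available inside a container, a short Python snippet (standard library only) can inspect the `/dev/shm` mount; a default Docker container often exposes only 64 MiB, which is too small for multi-GPU communication buffers:

```python
import shutil

# disk_usage reports the size of the filesystem backing /dev/shm,
# which is what the --shm-size docker option controls.
total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm total: {total / 2**20:.0f} MiB, free: {free / 2**20:.0f} MiB")
```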
enroot/pyxis reports error code 404 when importing multi-arch images
Problem description:

```
slurmstepd: error: pyxis: [INFO] Authentication succeeded
slurmstepd: error: pyxis: [INFO] Fetching image manifest list
slurmstepd: error: pyxis: [INFO] Fetching image manifest
slurmstepd: error: pyxis: [ERROR] URL https://ghcr.io/v2/nvidia/jax/manifests/<TAG> returned error code: 404 Not Found
```
Solution: Upgrade enroot or apply a single-file patch as mentioned in the enroot v3.4.0 release note.
Explanation: Docker has traditionally used Docker Schema V2.2 for multi-arch manifest lists but switched to the Open Container Initiative (OCI) format as of version 20.10. Enroot added support for the OCI format in version 3.4.0.
- AWS
- GCP
- Azure
- OCI