Add support for JetPack 6.2 build #3453

Open · wants to merge 6 commits into base: main
31 changes: 15 additions & 16 deletions docsrc/getting_started/jetpack.rst
@@ -1,18 +1,18 @@
-.. _Torch_TensorRT_in_JetPack_6.1
+.. _Torch_TensorRT_in_JetPack_6.2

Overview
##################

-JetPack 6.1
+JetPack 6.2
---------------------
-Nvida JetPack 6.1 is the latest production release ofJetPack 6.
+NVIDIA JetPack 6.2 is the latest production release of JetPack 6.
With this release it incorporates:
CUDA 12.6
TensorRT 10.3
cuDNN 9.3
-DLFW 24.09
+DLFW 24.0

-You can find more details for the JetPack 6.1:
+You can find more details for JetPack 6.2:

* https://docs.nvidia.com/jetson/jetpack/release-notes/index.html
* https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html
@@ -22,7 +22,7 @@ Prerequisites
~~~~~~~~~~~~~~


-Ensure your jetson developer kit has been flashed with the latest JetPack 6.1. You can find more details on how to flash Jetson board via sdk-manager:
+Ensure your Jetson developer kit has been flashed with the latest JetPack 6.2. You can find more details on how to flash the Jetson board via sdk-manager:

* https://developer.nvidia.com/sdk-manager

@@ -57,10 +57,10 @@ Ensure libcusparseLt.so exists at /usr/local/cuda/lib64/:
.. code-block:: sh

# if not exist, download and copy to the directory
-wget https://developer.download.nvidia.com/compute/cusparselt/redist/libcusparse_lt/linux-sbsa/libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
-tar xf libcusparse_lt-linux-sbsa-0.5.2.1-archive.tar.xz
-sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/include/* /usr/local/cuda/include/
-sudo cp -a libcusparse_lt-linux-sbsa-0.5.2.1-archive/lib/* /usr/local/cuda/lib64/
+wget https://developer.download.nvidia.com/compute/cusparselt/redist/libcusparse_lt/linux-aarch64/libcusparse_lt-linux-aarch64-0.7.1.0-archive.tar.xz
+tar xf libcusparse_lt-linux-aarch64-0.7.1.0-archive.tar.xz
+sudo cp -a libcusparse_lt-linux-aarch64-0.7.1.0-archive/include/* /usr/local/cuda/include/
+sudo cp -a libcusparse_lt-linux-aarch64-0.7.1.0-archive/lib/* /usr/local/cuda/lib64/
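After copying the archive contents, it can help to confirm the files actually landed. The following is a minimal sketch; the `check_cusparselt` helper, the `cusparseLt.h` header name, and the default prefix are illustrative assumptions, not part of this PR:

```python
import glob
import os


def check_cusparselt(prefix="/usr/local/cuda"):
    """Return True when the cuSPARSELt header and shared library are present
    under the given CUDA prefix (the same paths the cp commands above target)."""
    header = os.path.join(prefix, "include", "cusparseLt.h")
    libs = glob.glob(os.path.join(prefix, "lib64", "libcusparseLt.so*"))
    return os.path.isfile(header) and len(libs) > 0


# e.g. run check_cusparselt() on the Jetson after the commands above
```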


Build torch_tensorrt
@@ -71,7 +71,7 @@ Install bazel

.. code-block:: sh

-wget -v https://github.com/bazelbuild/bazelisk/releases/download/v1.20.0/bazelisk-linux-arm64
+wget -v https://github.com/bazelbuild/bazelisk/releases/download/v1.25.0/bazelisk-linux-arm64
sudo mv bazelisk-linux-arm64 /usr/bin/bazel
chmod +x /usr/bin/bazel

@@ -86,8 +86,8 @@ Install pip and required python packages:

.. code-block:: sh

-# install pytorch from nvidia jetson distribution: https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch
-python -m pip install torch https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl
+# install pytorch from nvidia jetson distribution: https://pypi.jetson-ai-lab.dev/jp6/cu126/
+pip3 install torch torchvision torchaudio --index-url https://pypi.jetson-ai-lab.dev/jp6/cu126/
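The old install line pinned one specific wheel, whose filename encodes the interpreter and platform it targets (CPython 3.10 on linux_aarch64). A hedged sketch of pulling those PEP 427 compatibility tags apart; the `wheel_tags` helper is illustrative, not from this PR:

```python
def wheel_tags(filename: str):
    """Split a wheel filename into (name, version, python tag, abi tag,
    platform tag). Wheel names follow PEP 427:
    {name}-{version}(-{build})?-{python}-{abi}-{platform}.whl
    This sketch ignores the optional build-tag edge cases."""
    stem = filename[: -len(".whl")]
    parts = stem.split("-")
    py_tag, abi_tag, plat_tag = parts[-3:]  # the tags are always the last three
    return parts[0], parts[1], py_tag, abi_tag, plat_tag


# the wheel pinned by the old install line targets cp310 on linux_aarch64
tags = wheel_tags(
    "torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl"
)
```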

Comment from a collaborator:

    Looking at the index, there are jp62 builds for PyTorch 2.7.0 and CUDA 12.8? What are the rules behind these builds / Jetson compute stack if you know? My understanding was jp62 was CUDA 12.6 / TensorRT 10.3

Reply from the author, @johnnynunez, Apr 10, 2025:

    If you look inside the index you can see the PyTorch 2.6.0 stack; for JetPack 6 it is CUDA 12.6. JetPack 7 is coming with Ubuntu 24.04, which is why everything is moving to Ubuntu 24.04: https://pypi.jetson-ai-lab.dev/jp6/cu126

    PyTorch index for Ubuntu 24.04: https://pypi.jetson-ai-lab.dev/jp6/cu128/+simple/torch/

.. code-block:: sh

@@ -101,10 +101,9 @@ Install pip and required python packages:
Build and Install torch_tensorrt wheel file


-Since torch_tensorrt version has dependencies on torch version. torch version supported by JetPack6.1 is from DLFW 24.08/24.09(torch 2.5.0).
+Since the torch_tensorrt version depends on the torch version, the torch version supported by JetPack 6.2 comes from NVIDIA NGC 24.0 (torch 2.6.0) or the distributed wheels at https://pypi.jetson-ai-lab.dev/jp6/cu126

-Please make sure to build torch_tensorrt wheel file from source release/2.5 branch
-(TODO: lanl to update the branch name once release/ngc branch is available)
+Please make sure to build the torch_tensorrt wheel file from the source release/2.6 branch

Comment from a collaborator:

    We might need to split the toolchain updates from the docs updates since the toolchain needs to land in the release/2.6 branch.

.. code-block:: sh

14 changes: 7 additions & 7 deletions setup.py
@@ -156,14 +156,14 @@ def load_dep_info():
JETPACK_VERSION = "4.6"
elif version == "5.0":
JETPACK_VERSION = "5.0"
-elif version == "6.1":
-JETPACK_VERSION = "6.1"
+elif version == "6.2":
+JETPACK_VERSION = "6.2"

if not JETPACK_VERSION:
warnings.warn(
-"Assuming jetpack version to be 6.1, if not use the --jetpack-version option"
+"Assuming jetpack version to be 6.2, if not use the --jetpack-version option"
)
-JETPACK_VERSION = "6.1"
+JETPACK_VERSION = "6.2"

if PRE_CXX11_ABI:
warnings.warn(
@@ -225,9 +225,9 @@ def build_libtorchtrt_cxx11_abi(
elif JETPACK_VERSION == "5.0":
cmd.append("--platforms=//toolchains:jetpack_5.0")
print("Jetpack version: 5.0")
-elif JETPACK_VERSION == "6.1":
-cmd.append("--platforms=//toolchains:jetpack_6.1")
-print("Jetpack version: 6.1")
+elif JETPACK_VERSION == "6.2":
+cmd.append("--platforms=//toolchains:jetpack_6.2")
+print("Jetpack version: 6.2")

if CI_BUILD:
cmd.append("--platforms=//toolchains:ci_rhel_x86_64_linux")
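The elif chain above maps a JetPack version string to a Bazel `--platforms` flag. A table-driven sketch of the same mapping; the dict and the `platform_flag` helper are illustrative, not code from this PR:

```python
# Illustrative mapping mirroring the elif chain in setup.py; only the 5.0
# and 6.2 entries appear in this diff.
JETPACK_PLATFORMS = {
    "5.0": "//toolchains:jetpack_5.0",
    "6.2": "//toolchains:jetpack_6.2",
}


def platform_flag(jetpack_version: str) -> str:
    """Return the Bazel --platforms flag for a JetPack version string."""
    try:
        return "--platforms=" + JETPACK_PLATFORMS[jetpack_version]
    except KeyError:
        raise ValueError(f"unsupported JetPack version: {jetpack_version}")
```

With such a table, `cmd.append(platform_flag(JETPACK_VERSION))` could replace the branch chain, and adding a JetPack release becomes a one-line change.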
2 changes: 1 addition & 1 deletion toolchains/jetpack/BUILD
@@ -13,6 +13,6 @@ constraint_value(
)

constraint_value(
-name = "6.1",
+name = "6.2",
constraint_setting = ":jetpack",
)
12 changes: 6 additions & 6 deletions toolchains/jp_workspaces/MODULE.bazel.tmpl
@@ -4,18 +4,18 @@ module(
version = "${BUILD_VERSION}"
)

-bazel_dep(name = "googletest", version = "1.14.0")
-bazel_dep(name = "platforms", version = "0.0.10")
-bazel_dep(name = "rules_cc", version = "0.0.9")
-bazel_dep(name = "rules_python", version = "0.34.0")
+bazel_dep(name = "googletest", version = "1.16.0")
+bazel_dep(name = "platforms", version = "0.0.11")
+bazel_dep(name = "rules_cc", version = "0.1.1")
+bazel_dep(name = "rules_python", version = "1.3.0")

python = use_extension("@rules_python//python/extensions:python.bzl", "python")
python.toolchain(
ignore_root_user_error = True,
-python_version = "3.11",
+python_version = "3.10",
)

-bazel_dep(name = "rules_pkg", version = "1.0.1")
+bazel_dep(name = "rules_pkg", version = "1.1.0")
git_override(
module_name = "rules_pkg",
commit = "17c57f4",
5 changes: 3 additions & 2 deletions toolchains/jp_workspaces/requirements.txt
@@ -1,4 +1,5 @@
-setuptools==70.2.0
-numpy<2.0.0
+--index-url https://pypi.jetson-ai-lab.dev/jp6/cu126
+setuptools>=70.2.0
Comment from a collaborator:

    I'd like to add this index to the pyproject.toml to support uv as a build tool, which I think has the best UX, but there are likely some important details we need to think about. cc: @lanluo-nvidia for later

+numpy
packaging
pyyaml
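Both requirements files now lead with a `--index-url` option line mixed in with package specifiers. A hedged sketch of how the two kinds of lines can be told apart; the `split_requirements` helper is illustrative, and pip's real parser handles far more cases:

```python
def split_requirements(lines):
    """Separate pip option lines (starting with '-') from package specifiers.
    Blank lines and comments are dropped; pip's real requirements parser also
    handles line continuations and per-requirement options, ignored here."""
    options, packages = [], []
    for raw in lines:
        line = raw.split("#", 1)[0].strip()  # drop inline comments
        if not line:
            continue
        (options if line.startswith("-") else packages).append(line)
    return options, packages


opts, pkgs = split_requirements([
    "--index-url https://pypi.jetson-ai-lab.dev/jp6/cu126",
    "setuptools>=70.2.0",
    "numpy",
    "# a comment line",
])
```

Because `--index-url` is a global option, it redirects resolution of every package in the file to the Jetson index, which is why a single added line suffices here.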
11 changes: 7 additions & 4 deletions toolchains/jp_workspaces/test_requirements.txt
@@ -1,9 +1,12 @@
-expecttest==0.1.6
-networkx==2.8.8
-numpy<2.0.0
+--index-url https://pypi.jetson-ai-lab.dev/jp6/cu126
+expecttest>=0.1.6
+networkx>=2.8.8
+numpy
parameterized>=0.2.0
pytest>=8.2.1
pytest-xdist>=3.6.1
pyyaml
transformers
-# TODO: currently timm torchvision nvidia-modelopt does not have distributions for jetson
+timm
+torchvision
+# TODO: currently nvidia-modelopt does not have distributions for jetson