Hosted on NVIDIA GPU Cloud (NGC) are the following Docker container images for machine learning on Jetson: l4t-pytorch, l4t-tensorflow, and l4t-ml.
Below are the instructions to build and test the containers using the included Dockerfiles.
To enable access to the CUDA compiler (nvcc) during docker build operations, add "default-runtime": "nvidia" to your /etc/docker/daemon.json configuration file before attempting to build the containers:
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
Then restart the Docker service or reboot your system before proceeding.
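For example, on L4T's Ubuntu base the Docker service is managed by systemd, and you can confirm the setting took effect by checking the default runtime that Docker reports:

$ sudo systemctl restart docker
$ sudo docker info | grep 'Default Runtime'   # should print: Default Runtime: nvidia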
To rebuild the containers from a Jetson device running JetPack 4.4 or newer, first clone this repo:
$ git clone https://github.com/SwanandkEN/enap-containers.git
$ cd enap-containers
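If you are unsure which L4T release your device is running, you can check the version file that JetPack installs (JetPack 4.4 corresponds to L4T R32.4.x):

$ cat /etc/nv_tegra_release   # e.g. "# R32 (release), REVISION: 4.3, ..."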
To build the ML containers (l4t-pytorch, l4t-tensorflow, l4t-ml), use scripts/docker_build_ml.sh along with an optional argument of which container(s) to build:
$ ./scripts/docker_build_ml.sh all # build all: l4t-pytorch, l4t-tensorflow, and l4t-ml
$ ./scripts/docker_build_ml.sh pytorch # build only l4t-pytorch
$ ./scripts/docker_build_ml.sh tensorflow # build only l4t-tensorflow
You have to build l4t-pytorch and l4t-tensorflow in order to build l4t-ml, because it uses those base containers in the multi-stage build.
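In Dockerfile terms, the pattern looks roughly like the sketch below; the image tags and the copied path are illustrative placeholders, not the exact contents of the repo's Dockerfile:

# hypothetical sketch of a multi-stage build combining the two base images
FROM l4t-pytorch:latest AS pytorch
FROM l4t-tensorflow:latest AS tensorflow
# final stage starts from the TensorFlow image and pulls PyTorch in from the first stage
FROM tensorflow
COPY --from=pytorch /usr/local/lib/python3.6/dist-packages/torch /usr/local/lib/python3.6/dist-packages/torch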
Note that the TensorFlow and PyTorch pip wheel installers for aarch64 are automatically downloaded in the Dockerfiles from the Jetson Zoo.
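As a rough sketch of that download step (the URL and wheel filename below are placeholders; the actual Dockerfiles pin specific wheel URLs from the Jetson Zoo):

# hypothetical excerpt: fetch and install a prebuilt aarch64 wheel
ARG PYTORCH_WHL_URL=https://example.com/torch-aarch64.whl
RUN wget ${PYTORCH_WHL_URL} -O /tmp/torch.whl && \
    pip3 install /tmp/torch.whl && \
    rm /tmp/torch.whl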
To run a series of automated tests on the packages installed in the containers, run the following from your enap-containers directory:
$ ./scripts/docker_test_ml.sh all # test all: l4t-pytorch, l4t-tensorflow, and l4t-ml
$ ./scripts/docker_test_ml.sh pytorch # test only l4t-pytorch
$ ./scripts/docker_test_ml.sh tensorflow # test only l4t-tensorflow
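Once the tests pass, you can start an interactive session in one of the built containers; the image tag below is illustrative, so substitute the tag that the build script reports for your L4T version:

$ sudo docker run -it --rm --runtime nvidia l4t-ml:latest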