Autonomous Vehicle Solutions

This folder contains samples for autonomous vehicles on the NVIDIA DRIVE platform, including deployment of SOTA methods with TensorRT and inference application design. More samples are on the way; please stay tuned.

Sparsity in INT8

Sparsity in INT8 contains the PyTorch codebase for sparse INT8 training and TensorRT inference, demonstrating the workflow for leveraging both structured sparsity and quantization for more efficient deployment. Please refer to "Sparsity in INT8: Training Workflow and Best Practices for NVIDIA TensorRT Acceleration" for more details.
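
Below is a minimal sketch, not the sample's actual script, of how a sparse, quantization-aware-trained model might be built into a TensorRT engine with both INT8 and structured-sparsity kernels enabled. The ONNX and engine file names are illustrative.

```python
# Sketch only: build a TensorRT engine from a QAT ONNX model with
# structured sparsity and INT8 enabled ("model_qat.onnx" is hypothetical).
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model_qat.onnx", "rb") as f:
    assert parser.parse(f.read()), parser.get_error(0)

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)            # honor the Q/DQ scales from QAT
config.set_flag(trt.BuilderFlag.SPARSE_WEIGHTS)  # enable 2:4 structured-sparse kernels

engine_bytes = builder.build_serialized_network(network, config)
with open("model_sparse_int8.engine", "wb") as f:
    f.write(engine_bytes)
```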

Multi-task model inference on multiple devices

Multi-task model inference on multiple devices demonstrates the deployment of a multi-task network on the NVIDIA DRIVE Orin platform using both the GPU and the DLA. Please refer to our webinar, Optimizing Multi-task Model Inference for Autonomous Vehicles, for more details.
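
As a rough illustration, not the sample's actual code, a TensorRT builder configuration that targets a DLA core on Orin with GPU fallback might look like the following; the ONNX file name and DLA core index are placeholders.

```python
# Sketch only: direct TensorRT to place layers on the DLA, falling back
# to the GPU for layers the DLA cannot run.
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("multitask_model.onnx", "rb") as f:     # hypothetical file name
    assert parser.parse(f.read()), parser.get_error(0)

config = builder.create_builder_config()
config.default_device_type = trt.DeviceType.DLA   # prefer the DLA for each layer
config.DLA_core = 0                               # Orin exposes DLA cores 0 and 1
config.set_flag(trt.BuilderFlag.FP16)             # DLA requires FP16 or INT8 precision
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)     # unsupported layers run on the GPU

engine_bytes = builder.build_serialized_network(network, config)
```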

StreamPETR-TensorRT

StreamPETR-TensorRT is a sample application demonstrating the deployment of StreamPETR on the NVIDIA DRIVE Orin platform using TensorRT.
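
A minimal runtime sketch, assuming a prebuilt engine with static shapes; the engine file name and input handling are illustrative and not taken from the sample, which manages its own (multi-input) bindings.

```python
# Sketch only: deserialize a TensorRT engine and run one inference
# with the TensorRT 8.x bindings API (as shipped on DRIVE Orin).
import numpy as np
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
with open("streampetr.engine", "rb") as f:        # hypothetical engine file
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# One host array and one device buffer per binding.
host_bufs, dev_bufs = [], []
for i in range(engine.num_bindings):
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.zeros(tuple(engine.get_binding_shape(i)), dtype=dtype)
    host_bufs.append(host)
    dev_bufs.append(cuda.mem_alloc(host.nbytes))

# Fill inputs, copy to the device, execute, and copy outputs back.
for i in range(engine.num_bindings):
    if engine.binding_is_input(i):
        host_bufs[i][...] = 0                     # replace with real preprocessed inputs
        cuda.memcpy_htod(dev_bufs[i], host_bufs[i])
context.execute_v2([int(d) for d in dev_bufs])
for i in range(engine.num_bindings):
    if not engine.binding_is_input(i):
        cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])
```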

UniAD-TensorRT

UniAD is an end-to-end model for autonomous driving. UniAD-TensorRT demonstrates the deployment of UniAD on the NVIDIA DRIVE Orin platform using TensorRT.

DCNv4-TensorRT

DCNv4-TensorRT is a sample application demonstrating the deployment and optimization of Deformable Convolution v4 (DCNv4) on the NVIDIA DRIVE Orin platform using TensorRT, with multiple plugin implementations.
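
As a rough sketch, with hypothetical plugin library and engine file names, an engine containing a custom op is typically loaded by first making the plugin creators visible to TensorRT before deserialization.

```python
# Sketch only: load a custom plugin shared library (its static initializers
# register the plugin creator) and then deserialize an engine that uses it.
import ctypes
import tensorrt as trt

ctypes.CDLL("libdcnv4_plugin.so")                 # hypothetical plugin library
logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")           # register TensorRT's built-in plugins

with open("dcnv4_model.engine", "rb") as f:       # hypothetical engine file
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
```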

BEVFormer: INT8 explicit quantization for TensorRT

BEVFormer-INT8-EQ is an end-to-end example demonstrating the explicit quantization and deployment of BEVFormer on NVIDIA GPUs using TensorRT.
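
A minimal sketch of what explicit quantization implies, assuming NVIDIA's pytorch_quantization toolkit; the model-building helper and input shape below are illustrative, and the sample ships its own workflow. The key idea is that fake-quant nodes are embedded in the graph and exported as ONNX Q/DQ nodes, which TensorRT then honors directly instead of deriving scales from a calibration cache.

```python
# Sketch only: insert quantizers into the model, then export an ONNX
# graph containing explicit QuantizeLinear/DequantizeLinear nodes.
import torch
from pytorch_quantization import quant_modules
from pytorch_quantization import nn as quant_nn

quant_modules.initialize()                  # patch torch layers with quantized versions
model = build_bevformer()                   # hypothetical helper; not part of this sketch
# ... calibrate / fine-tune the quantizers on real data here ...

quant_nn.TensorQuantizer.use_fb_fake_quant = True   # export fake-quant as ONNX Q/DQ nodes
dummy = torch.randn(1, 6, 3, 928, 1600)             # illustrative multi-camera input shape
torch.onnx.export(model, dummy, "bevformer_qdq.onnx", opset_version=13)
```

The exported ONNX can then be built into an engine with the INT8 flag set, as in the Sparsity in INT8 sketch above.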