We provide support for some popular deployment tools. This part is built upon the implementation of YOLOX deployment and its adaptation by ByteTrack.
- Convert the PyTorch model to an ONNX checkpoint; we provide an example here:
  ```shell
  # In practice you may want a smaller model for faster inference.
  python deploy/scripts/export_onnx.py --output-name ocsort.onnx -f exps/example/mot/yolox_x_mix_det.py -c pretrained/bytetrack_x_mot17.pth.tar
  ```
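The exported detector expects YOLOX-style preprocessed input (resized, padded, CHW float32). As a rough illustration of what that step looks like, here is a minimal NumPy sketch; the function name, padding value, and input size are assumptions for illustration, and the repository's own preprocessing utilities should be used in practice:

```python
import numpy as np

def preprocess(img, input_size=(640, 640), pad_value=114):
    """Nearest-neighbour resize + bottom/right padding, mimicking
    YOLOX-style letterboxing (illustrative sketch, not the repo's code)."""
    h, w = img.shape[:2]
    r = min(input_size[0] / h, input_size[1] / w)  # scale to fit, keep aspect ratio
    nh, nw = int(h * r), int(w * r)
    # Nearest-neighbour index maps instead of an image-library resize,
    # so the sketch stays dependency-free.
    ys = np.clip((np.arange(nh) / r).astype(int), 0, h - 1)
    xs = np.clip((np.arange(nw) / r).astype(int), 0, w - 1)
    resized = img[ys][:, xs]
    padded = np.full((input_size[0], input_size[1], 3), pad_value, dtype=np.uint8)
    padded[:nh, :nw] = resized
    # CHW float32 tensor layout, as ONNX detection models typically expect.
    return padded.transpose(2, 0, 1).astype(np.float32), r
```

The returned ratio `r` is what lets detections be mapped back to the original image coordinates after inference.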
- Run on the provided demo video:
  ```shell
  cd $OCSORT_HOME/deploy/ONNXRuntime
  python onnx_inference.py
  ```
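After the ONNX model produces raw detections, the inference script applies post-processing such as non-maximum suppression before boxes are handed to the tracker. As a sketch of that idea (not the repository's actual implementation), a greedy NMS over `[x1, y1, x2, y2]` boxes can be written in plain NumPy:

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.45):
    """Greedy non-maximum suppression; returns indices of kept boxes.
    Illustrative sketch only -- the repo's utilities handle this in practice."""
    order = scores.argsort()[::-1]  # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with the remaining candidates.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Drop candidates that overlap the kept box too strongly.
        order = order[1:][iou <= iou_thr]
    return keep
```

The 0.45 IoU threshold here is just a common default; the actual threshold used by the demo scripts is set by their own configuration.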
- Follow the TensorRT Installation Guide and torch2trt to install TensorRT (version 7 recommended) and torch2trt.
- Convert the model:
  ```shell
  # You need to download the checkpoint bytetrack_s_mot17.pth.tar from the ByteTrack model zoo.
  python3 deploy/scripts/trt.py -f exps/example/mot/yolox_s_mix_det.py -c pretrained/bytetrack_s_mot17.pth.tar
  ```
- Run on a demo video:
  ```shell
  python3 tools/demo_track.py video -f exps/example/mot/yolox_s_mix_det.py --trt --save_result
  ```
Note: We haven't validated the C++ support for TensorRT yet; please refer to the ByteTrack guidance for adaptation for now.
Please follow the guidelines from ByteTrack to deploy with ncnn support.