I got the .pth model from the NVIDIA TAO Docker container and converted it to ONNX, also using NVIDIA TAO. Then I converted it to a TensorRT engine file using my C++ code (which I cannot share). I had no problem with the same conversion for YOLOX.
Now, when I call enqueueV2, I get this error. Could you please help me fix/debug it?
My TensorRT version is 8.5.0.3, and:
$ uname -a
Linux DOS 6.5.0-28-generic #29~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Apr 4 14:39:20 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
Can this model run on other frameworks? For example, can you run the ONNX model with ONNX Runtime (polygraphy run <model.onnx> --onnxrt)?
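For reference, a typical Polygraphy cross-check looks like the commands below. This is a sketch assuming Polygraphy is installed (`pip install polygraphy`) and that the model file is named `model.onnx`:

```shell
# Run the ONNX model under ONNX Runtime to confirm the model itself is valid
polygraphy run model.onnx --onnxrt

# Run the same model under both ONNX Runtime and TensorRT and compare outputs;
# if this passes but your C++ app fails, the bug is in the app, not the model
polygraphy run model.onnx --onnxrt --trt
```

If the ONNX Runtime run succeeds, the problem is likely in the engine build or the inference code rather than in the exported model.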
monajalal changed the title from "XXX failure of TensorRT 8503 when running enqueueV2 on GPU 3080 (C++)" to "failure of TensorRT 8503 when running enqueueV2 on GPU 3080 (C++)" on May 17, 2024.
When I had this error, it was related to not setting the bindings correctly.
Make yourself a cup of coffee, sip it calmly, and check the input and output tensor shapes of your model. Once you have those, verify that the bindings in your inference call are correct: check the number of input and output tensors, and confirm that each buffer is assigned to the right binding index with the right size.
For context->enqueueV2(buffers, cuda_stream, nullptr); I get this error:
Description
Environment
TensorRT Version: 8.5.0.3
NVIDIA GPU: 3080
NVIDIA Driver Version:
CUDA Version:
CUDNN Version:
Operating System: Ubuntu 22.04 (kernel 6.5.0-28-generic)
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version):
Relevant Files
Model link:
Steps To Reproduce
Commands or scripts:
Have you tried the latest release?: