Unable to run my own model! #5405
-
Replies: 3 comments
-
The GPU used is an NVIDIA 3080.
-
Hi @hh123445, TensorRT models are specific to the version of TensorRT installed and to the model of GPU they were created on. I am unsure whether mmdeploy generates the engine file locally on your device or simply converts the downloaded model. In the latter case, the downloaded model may not have been generated on a GPU with the same compute capability as your 3080. Further up in the Triton logs there should be a more verbose error when loading your Faster_rcnn model. Can you please include that? It will help us root-cause the issue.
As a naive solution, you can try to build the model yourself on your 3080 with TensorRT 8.2.3 installed and see whether that solves the problem.
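While gathering those logs, a quick way to compare the two environments is to print the local GPU's compute capability and the installed TensorRT version on each machine. The sketch below is only a diagnostic aid (not part of mmdeploy itself) and assumes PyTorch with CUDA support and the tensorrt Python bindings are installed:

```python
# Minimal sketch: report the local GPU's compute capability and the installed
# TensorRT version, so they can be compared with the machine that built the engine.
# Assumes PyTorch (with CUDA) and the tensorrt Python bindings are installed.
import torch
import tensorrt as trt

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    print(f"Compute capability: {major}.{minor}")  # an RTX 3080 reports 8.6
else:
    print("No CUDA device visible")

# Engines generally need to be rebuilt if this differs between machines.
print(f"TensorRT version: {trt.__version__}")
```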
-
Yes, you are right. I converted the model on another computer whose GPU has a different compute capability from the 3080, so the model failed to start. After re-converting the model on the 3080 and deploying it, I can run the model successfully. Thank you!
Note that you also need to create a new plugins folder outside the models folder and copy the libmmdeploy_tensorrt_ops.so file from the MMDeploy/build/lib directory of the corresponding virtual environment into that plugins folder.
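For reference, here is a minimal sketch of that copy step; the paths are assumptions and should be adjusted to your own MMDeploy build directory and model-repository layout:

```python
# Minimal sketch: copy the MMDeploy TensorRT custom-ops library into a
# `plugins` folder that sits next to (not inside) the `models` folder.
# All paths below are assumptions -- adjust them to your own setup.
from pathlib import Path
import shutil

mmdeploy_build_lib = Path("~/MMDeploy/build/lib").expanduser()  # assumed MMDeploy build dir
model_repository = Path("~/model_repository").expanduser()      # assumed repo containing `models`

plugins_dir = model_repository / "plugins"  # sibling of the `models` folder
plugins_dir.mkdir(parents=True, exist_ok=True)

shutil.copy2(mmdeploy_build_lib / "libmmdeploy_tensorrt_ops.so", plugins_dir)
print(f"Copied custom ops library into {plugins_dir}")
```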