Unable to run my own model! #5405

Closed · Answered by nv-kmcgill53
hh123445 asked this question in Q&A

Hi @hh123445, TensorRT engine files are specific to the version of TensorRT they were built with and to the model of GPU they were built on; they must run on a matching setup.

The model is the official Faster R-CNN model downloaded from mmdetection and converted into an engine file through mmdeploy

I am unsure whether mmdeploy generates the engine file locally on your device or simply converts the downloaded model. In the latter case, the downloaded model may not have been generated on a GPU with the same compute capability as your 3080. Further up in the Triton logs there should be a more verbose error from loading your Faster_rcnn model. Can you please include that? It will help us root-cause the issue.
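
For reference, a minimal sketch for checking the runtime side of that match, assuming the `tensorrt` Python bindings and PyTorch are installed (neither is stated in the thread):

```python
# Sketch: report the local TensorRT version and GPU compute capability,
# both of which must match what the engine file was built against.
import tensorrt as trt
import torch

print("TensorRT version:", trt.__version__)

major, minor = torch.cuda.get_device_capability(0)
print(f"GPU compute capability: sm_{major}{minor}")  # an RTX 3080 reports sm_86
```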

As a naive solut…
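
If the engine was not built on the deployment GPU, regenerating it locally is the usual remedy. A hedged sketch of driving mmdeploy's converter script from Python, assuming an mmdeploy checkout as the working directory; the deploy config, model config, checkpoint, and image paths below are placeholders, not values from this thread:

```python
# Sketch: rebuild the TensorRT engine on the deployment GPU via mmdeploy's
# tools/deploy.py converter. All paths below are placeholders.
import subprocess

subprocess.run(
    [
        "python", "tools/deploy.py",
        "configs/mmdet/detection/detection_tensorrt_dynamic-320x320-1344x1344.py",  # deploy config (placeholder)
        "path/to/faster_rcnn_config.py",       # mmdetection model config (placeholder)
        "path/to/faster_rcnn_checkpoint.pth",  # downloaded checkpoint (placeholder)
        "path/to/sample_image.jpg",            # test image for the conversion (placeholder)
        "--work-dir", "work_dir/faster_rcnn_trt",
        "--device", "cuda:0",
    ],
    check=True,  # raise if the conversion fails
)
```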

Answer selected by dyastremsky

This discussion was converted from issue #5350 on February 23, 2023 00:41.