Replies: 3 comments 10 replies
-
Hello - thank you for the question. I am able to reproduce the error in the TorchScript path on the latest nightly.

Using the latest nightly, compile the model:

```python
import torch
import torch_tensorrt
...
# Model Definition
...
trt_optimized_module = torch.compile(
    model,
    backend="tensorrt",
    dynamic=False,
    options={
        "truncate_long_and_double": True,
        "enabled_precisions": {torch.half},
    },
)
out = trt_optimized_module(torch.randn((1, 3, 224, 224)).cuda())
```

Regarding TorchScript, it seems that this model also cannot be traced with JIT (only scripted), since …
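As a side note on the trace-vs-script distinction mentioned above, here is a minimal sketch (plain PyTorch, no TensorRT; the `Branchy` module is hypothetical and not from this thread) of why a model with data-dependent control flow can be scripted but not reliably traced:

```python
import torch
import torch.nn as nn

class Branchy(nn.Module):
    # Hypothetical module with data-dependent control flow:
    # tracing records only the branch taken by the example input,
    # while scripting preserves both branches in the graph.
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if x.sum() > 0:
            return x * 2
        return x + 10

model = Branchy()
scripted = torch.jit.script(model)  # keeps the `if` in the IR

# Tracing "bakes in" the branch chosen by this example input (all ones):
traced = torch.jit.trace(model, torch.ones(3))

neg = -torch.ones(3)
print(scripted(neg))  # follows the x + 10 branch: tensor([9., 9., 9.])
print(traced(neg))    # still runs the traced x * 2 branch: tensor([-2., -2., -2.])
```

This is the usual reason a model is script-only: the traced graph silently drops the untaken branch rather than erroring at call time.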
-
Unfortunately I have to stick to versions that are available through NGC, so I can't install nightly builds. When I run with …
-
Hello. I'm encountering a similar problem. I'm using the latest NVIDIA container (nvcr.io/nvidia/pytorch:24.04-py3) and deploying the same code as above. The compilation step works fine, but scripting the TorchScript model fails with the following error:
I'm using this modified code (the first script, before compilation, works; the second fails):
Scripting with example_inputs or with tracing did not work either. I've also installed MonkeyType in the container.
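As a general note (not from this thread): when `torch.jit.script` fails on untyped code, adding explicit type annotations is a common alternative to `example_inputs`/MonkeyType inference. A minimal sketch with a hypothetical `Summer` module, unrelated to the model being discussed:

```python
import torch
import torch.nn as nn
from typing import List

class Summer(nn.Module):
    # Without the List[torch.Tensor] annotation, TorchScript would assume
    # `xs` is a single Tensor, which changes the meaning of indexing and
    # iteration below; the annotation makes the intent explicit.
    def forward(self, xs: List[torch.Tensor]) -> torch.Tensor:
        out = torch.zeros_like(xs[0])
        for x in xs:
            out = out + x
        return out

scripted = torch.jit.script(Summer())
print(scripted([torch.ones(2), torch.ones(2)]))  # tensor([2., 2.])
```

Explicit annotations also avoid the MonkeyType dependency inside the container entirely.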
-
Here is a minimal example reproducing the issue:
I get this error:
How do I make this conversion?