Description
I have an ONNX model (opset 21) that contains a GroupNormalization layer with `num_channels = 16` and `num_groups = 4`, so the scale and bias affine parameter arrays have shape `(16,)`. If I run a forward pass with this ONNX model, its output matches the PyTorch output. However, I am not able to parse the model with `trtexec --onnx=model.onnx`.
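For reference, a minimal sketch (not my actual network) of the layer configuration in question; PyTorch's GroupNorm keeps per-channel affine parameters, which is what ends up as the shape-`(16,)` scale and bias initializers in the exported ONNX graph:

```python
import torch

# Sketch only: a GroupNorm layer with the same configuration as in my model.
# With affine=True, PyTorch keeps one weight and one bias value per channel.
gn = torch.nn.GroupNorm(num_groups=4, num_channels=16, affine=True)
print(gn.weight.shape, gn.bias.shape)  # torch.Size([16]) torch.Size([16])

# Reference forward pass (the input shape is a placeholder, not my actual model).
x = torch.randn(1, 16, 32, 32)
y = gn(x)
```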
Using onnx-graphsurgeon, if I manually replace the scale and bias arrays in the ONNX graph with arrays of shape `(4,)` (i.e. one value per group, matching `num_groups`), I am able to parse and serialize the model with `trtexec`.
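The edit I applied was along these lines (a sketch, not my exact script; it collapses each group's per-channel values down to a single value, which of course changes the numerics unless all channels within a group share the same scale/bias):

```python
import numpy as np
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("model.onnx"))  # path is a placeholder

for node in graph.nodes:
    if node.op == "GroupNormalization":
        num_groups = node.attrs["num_groups"]         # 4 in my case
        # Assumes scale and bias are stored as initializers (gs.Constant);
        # adjust if they are produced by upstream nodes instead.
        scale, bias = node.inputs[1], node.inputs[2]
        # Keep one value per group (here: the first channel of each group).
        new_scale = np.ascontiguousarray(scale.values.reshape(num_groups, -1)[:, 0])
        new_bias = np.ascontiguousarray(bias.values.reshape(num_groups, -1)[:, 0])
        node.inputs[1] = gs.Constant(scale.name + "_per_group", new_scale)
        node.inputs[2] = gs.Constant(bias.name + "_per_group", new_bias)

graph.cleanup()
onnx.save(gs.export_onnx(graph), "model_per_group.onnx")
```

This only makes the model parseable; it is not numerically equivalent to the original per-channel GroupNorm.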
However, this means there is a mismatch between the GroupNorm definition in ONNX/PyTorch and the one in TensorRT: ONNX and PyTorch allow separate affine scale and bias parameters per channel, whereas TensorRT appears to allow only one scale and one bias value per group. The TensorRT operator documentation page (link) confirms this; it states that scale and bias must have shape `(1, G, 1, 1)`, where `G` is `num_groups`.

Is there a workaround for this issue? I tried the `GroupNormalizationPlugin` provided in this repo, but I am having problems loading that plugin as well (I have created a separate issue for it).

Environment
TensorRT Version: 10.7.0.23
NVIDIA GPU: Jetson AGX Orin
NVIDIA Driver Version:
CUDA Version: 12.6
CUDNN Version:
Operating System: Jetson native build
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable): 2.5.1
Baremetal or Container (if so, version):
Relevant Files
Model link:
Steps To Reproduce
Commands or scripts:
Have you tried the latest release?: Yes
Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (`polygraphy run <model.onnx> --onnxrt`):
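The "matches PyTorch" check mentioned in the description was along these lines (a sketch only; the module, model path, and input shape are placeholders rather than my actual setup):

```python
import numpy as np
import onnxruntime as ort
import torch

# Placeholder reference module; assumes model.onnx was exported from this same module.
gn = torch.nn.GroupNorm(num_groups=4, num_channels=16, affine=True).eval()

x = torch.randn(1, 16, 32, 32)
sess = ort.InferenceSession("model.onnx")                     # path is a placeholder
(onnx_out,) = sess.run(None, {sess.get_inputs()[0].name: x.numpy()})  # single output assumed

with torch.no_grad():
    torch_out = gn(x).numpy()

# The two outputs should agree up to floating-point tolerance.
print(np.abs(onnx_out - torch_out).max())
```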