For the given IR, it fails to legalize: onnx.QLinearConv
module {
  func.func @mxnet_converted_model(%arg0: !torch.vtensor<[?,3,224,224],f32>) -> !torch.vtensor<[?,64,112,112],ui8> attributes {torch.onnx_meta.ir_version = 4 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.opset_versions = {ai.onnx.ml = 2 : si64, ai.onnx.preview.training = 1 : si64, ai.onnx.training = 1 : si64, com.microsoft = 1 : si64, com.microsoft.experimental = 1 : si64, com.microsoft.mlfeaturizers = 1 : si64, com.microsoft.nchwc = 1 : si64}, torch.onnx_meta.producer_name = "onnx.quantize", torch.onnx_meta.producer_version = "0.1.0"} {
    %0 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<114> : tensor<ui8>} : () -> !torch.vtensor<[],ui8>
    %1 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0.018658448> : tensor<f32>} : () -> !torch.vtensor<[],f32>
    %2 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1> : tensor<64x3x7x7xsi8>} : () -> !torch.vtensor<[64,3,7,7],si8>
    %3 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<64xf32>} : () -> !torch.vtensor<[64],f32>
    %4 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0> : tensor<64xsi8>} : () -> !torch.vtensor<[64],si8>
    %5 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1> : tensor<64xsi32>} : () -> !torch.vtensor<[64],si32>
    %6 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0> : tensor<ui8>} : () -> !torch.vtensor<[],ui8>
    %7 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0.0215634871> : tensor<f32>} : () -> !torch.vtensor<[],f32>
    %8 = torch.operator "onnx.QuantizeLinear"(%arg0, %1, %0) : (!torch.vtensor<[?,3,224,224],f32>, !torch.vtensor<[],f32>, !torch.vtensor<[],ui8>) -> !torch.vtensor<[?,3,224,224],ui8>
    %9 = torch.operator "onnx.QLinearConv"(%8, %1, %0, %2, %3, %4, %7, %6, %5) {torch.onnx.auto_pad = "NOTSET", torch.onnx.dilations = [1 : si64, 1 : si64], torch.onnx.group = 1 : si64, torch.onnx.kernel_shape = [7 : si64, 7 : si64], torch.onnx.pads = [3 : si64, 3 : si64, 3 : si64, 3 : si64], torch.onnx.strides = [2 : si64, 2 : si64]} : (!torch.vtensor<[?,3,224,224],ui8>, !torch.vtensor<[],f32>, !torch.vtensor<[],ui8>, !torch.vtensor<[64,3,7,7],si8>, !torch.vtensor<[64],f32>, !torch.vtensor<[64],si8>, !torch.vtensor<[],f32>, !torch.vtensor<[],ui8>, !torch.vtensor<[64],si32>) -> !torch.vtensor<[?,64,112,112],ui8>
    return %9 : !torch.vtensor<[?,64,112,112],ui8>
  }
}
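For context on what a lowering has to produce, below is a minimal NumPy sketch of QLinearConv's reference semantics as I understand them from the ONNX spec: dequantize the inputs, run a float convolution, add the int32 bias (quantized with scale x_scale * w_scale and zero point 0), then requantize to the output type. All names and shapes are illustrative, strides/padding/dilations/groups are omitted for brevity, and this is not the torch-mlir lowering itself.

```python
import numpy as np

def qlinear_conv_ref(x_q, x_scale, x_zp, w_q, w_scale, w_zp, y_scale, y_zp, bias_q):
    # Dequantize activations (per-tensor) and weights (per-output-channel) to float32.
    x = (x_q.astype(np.float32) - x_zp) * x_scale
    w = (w_q.astype(np.float32) - w_zp[:, None, None, None]) * w_scale[:, None, None, None]
    n, c, h, wd = x.shape
    oc, _, kh, kw = w.shape
    oh, ow = h - kh + 1, wd - kw + 1  # stride 1, no padding, for brevity
    y = np.zeros((n, oc, oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            patch = x[:, :, i:i + kh, j:j + kw]                # (n, c, kh, kw)
            y[:, :, i, j] = np.einsum("nchw,ochw->no", patch, w)
    # Bias is int32, quantized with scale x_scale * w_scale and zero point 0.
    y += (bias_q.astype(np.float32) * x_scale * w_scale)[None, :, None, None]
    # Requantize to uint8.
    y_q = np.round(y / y_scale) + y_zp
    return np.clip(y_q, 0, 255).astype(np.uint8)
```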
Models impacted: 19
mobilenetv2-12-int8 vgg16-12-int8 resnet50-v1-12-int8 zfnet512-12-int8 squeezenet1.0-12-int8 bvlcalexnet-12-int8 caffenet-12-int8 densenet-12-int8 googlenet-12-int8 inception-v1-12-int8 mnist-12-int8 shufflenet-v2-12-int8 MaskRCNN-12-int8 ResNet101-DUC-12-int8 ssd-12-int8 FasterRCNN-12-int8 fcn-resnet50-12-int8 version-RFB-320-int8 arcfaceresnet100-11-int8
Partially fixed by llvm/torch-mlir#3917, which fixes the ONNX->Torch path.