failed to legalize operation 'torch.operator' that was explicitly marked illegal: onnx.QLinearConv #894

pdhirajkumarprasad opened this issue Dec 6, 2024 · 1 comment

@pdhirajkumarprasad

For the given IR, legalization fails for onnx.QLinearConv:

```mlir
module {
  func.func @mxnet_converted_model(%arg0: !torch.vtensor<[?,3,224,224],f32>) -> !torch.vtensor<[?,64,112,112],ui8>  attributes {torch.onnx_meta.ir_version = 4 : si64, torch.onnx_meta.opset_version = 21 : si64, torch.onnx_meta.opset_versions = {ai.onnx.ml = 2 : si64, ai.onnx.preview.training = 1 : si64, ai.onnx.training = 1 : si64, com.microsoft = 1 : si64, com.microsoft.experimental = 1 : si64, com.microsoft.mlfeaturizers = 1 : si64, com.microsoft.nchwc = 1 : si64}, torch.onnx_meta.producer_name = "onnx.quantize", torch.onnx_meta.producer_version = "0.1.0"} {
    %0 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<114> : tensor<ui8>} : () -> !torch.vtensor<[],ui8> 
    %1 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0.018658448> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %2 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1> : tensor<64x3x7x7xsi8>} : () -> !torch.vtensor<[64,3,7,7],si8> 
    %3 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1.0> : tensor<64xf32>} : () -> !torch.vtensor<[64],f32> 
    %4 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0> : tensor<64xsi8>} : () -> !torch.vtensor<[64],si8> 
    %5 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<1> : tensor<64xsi32>} : () -> !torch.vtensor<[64],si32> 
    %6 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0> : tensor<ui8>} : () -> !torch.vtensor<[],ui8> 
    %7 = torch.operator "onnx.Constant"() {torch.onnx.value = dense<0.0215634871> : tensor<f32>} : () -> !torch.vtensor<[],f32> 
    %8 = torch.operator "onnx.QuantizeLinear"(%arg0, %1, %0) : (!torch.vtensor<[?,3,224,224],f32>, !torch.vtensor<[],f32>, !torch.vtensor<[],ui8>) -> !torch.vtensor<[?,3,224,224],ui8> 
    %9 = torch.operator "onnx.QLinearConv"(%8, %1, %0, %2, %3, %4, %7, %6, %5) {torch.onnx.auto_pad = "NOTSET", torch.onnx.dilations = [1 : si64, 1 : si64], torch.onnx.group = 1 : si64, torch.onnx.kernel_shape = [7 : si64, 7 : si64], torch.onnx.pads = [3 : si64, 3 : si64, 3 : si64, 3 : si64], torch.onnx.strides = [2 : si64, 2 : si64]} : (!torch.vtensor<[?,3,224,224],ui8>, !torch.vtensor<[],f32>, !torch.vtensor<[],ui8>, !torch.vtensor<[64,3,7,7],si8>, !torch.vtensor<[64],f32>, !torch.vtensor<[64],si8>, !torch.vtensor<[],f32>, !torch.vtensor<[],ui8>, !torch.vtensor<[64],si32>) -> !torch.vtensor<[?,64,112,112],ui8> 
    return %9 : !torch.vtensor<[?,64,112,112],ui8>
  }
}
```
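
For context on what the missing lowering has to produce: per the ONNX specification, QLinearConv is dequantize → float convolution → requantize, with the optional int32 bias carrying scale `x_scale * w_scale` and zero point 0. The sketch below is a minimal NumPy reference of that semantics, not torch-mlir code; the helper names and the naive loop convolution are illustrative assumptions, and it only covers groups=1 with symmetric padding as in the IR above.

```python
import numpy as np

def dequant(q, scale, zero_point):
    # Per-tensor dequantization: real = (q - zero_point) * scale
    return (q.astype(np.int32) - np.int32(zero_point)).astype(np.float32) * scale

def dequant_weights(w_q, w_scale, w_zp):
    # w_scale / w_zp may be scalars or per-output-channel vectors of length Cout
    s = np.asarray(w_scale, dtype=np.float32).reshape(-1, 1, 1, 1)
    z = np.asarray(w_zp, dtype=np.int32).reshape(-1, 1, 1, 1)
    return (w_q.astype(np.int32) - z).astype(np.float32) * s

def requant_u8(y, y_scale, y_zp):
    # Round to nearest, add the output zero point, saturate to the uint8 range
    return np.clip(np.rint(y / y_scale) + y_zp, 0, 255).astype(np.uint8)

def qlinear_conv_ref(x_q, x_scale, x_zp, w_q, w_scale, w_zp,
                     y_scale, y_zp, bias_i32=None, stride=1, pad=0):
    """Naive NCHW 2D QLinearConv reference (groups=1, dilation=1)."""
    x = dequant(x_q, x_scale, x_zp)            # [N, Cin, H, W]
    w = dequant_weights(w_q, w_scale, w_zp)    # [Cout, Cin, kH, kW]
    N, Cin, H, W = x.shape
    Cout, _, kH, kW = w.shape
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    Ho = (H + 2 * pad - kH) // stride + 1
    Wo = (W + 2 * pad - kW) // stride + 1
    y = np.empty((N, Cout, Ho, Wo), dtype=np.float32)
    for i in range(Ho):
        for j in range(Wo):
            patch = xp[:, :, i*stride:i*stride+kH, j*stride:j*stride+kW]
            y[:, :, i, j] = np.tensordot(patch, w, axes=([1, 2, 3], [1, 2, 3]))
    if bias_i32 is not None:
        # Per the ONNX spec, the int32 bias uses scale x_scale * w_scale, zero point 0
        b = bias_i32.astype(np.float32) * x_scale * np.asarray(w_scale, dtype=np.float32)
        y += b.reshape(1, -1, 1, 1)
    return requant_u8(y, y_scale, y_zp)
```

For the reproducer above, the corresponding call would use `stride=2`, `pad=3`, the scalar input/output scale and zero point pairs (%1/%0 and %7/%6), and the per-channel weight scales from %3.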

Models impacted: 19

- mobilenetv2-12-int8
- vgg16-12-int8
- resnet50-v1-12-int8
- zfnet512-12-int8
- squeezenet1.0-12-int8
- bvlcalexnet-12-int8
- caffenet-12-int8
- densenet-12-int8
- googlenet-12-int8
- inception-v1-12-int8
- mnist-12-int8
- shufflenet-v2-12-int8
- MaskRCNN-12-int8
- ResNet101-DUC-12-int8
- ssd-12-int8
- FasterRCNN-12-int8
- fcn-resnet50-12-int8
- version-RFB-320-int8
- arcfaceresnet100-11-int8
@vivekkhandelwal1 (Contributor)

Partially fixed by llvm/torch-mlir#3917, which fixes the ONNX->Torch lowering path.
