I Start importing pytorch...
/home/sasha/.face_chain_cache/model_files/feature_extractor/masked/at_0015.jit ********************
D import clients finished
W Pt model version is 1.6(same as you can check through <netron>), but the installed pytorch is 1.9.0+cu102. This may cause the model to fail to load.
E Catch exception when loading pytorch model: /home/sasha/.face_chain_cache/model_files/feature_extractor/masked/at_0015.jit!
E Traceback (most recent call last):
E File "rknn/api/rknn_base.py", line 399, in rknn.api.rknn_base.RKNNBase.load_pytorch
E File "rknn/base/RKNNlib/RK_nn.py", line 161, in rknn.base.RKNNlib.RK_nn.RKnn.load_pytorch
E File "rknn/base/RKNNlib/app/importer/import_pytorch.py", line 129, in rknn.base.RKNNlib.app.importer.import_pytorch.ImportPytorch.run
E File "rknn/base/RKNNlib/converter/convert_pytorch_new.py", line 5120, in rknn.base.RKNNlib.converter.convert_pytorch_new.convert_pytorch.load
E File "rknn/base/RKNNlib/converter/convert_pytorch_new.py", line 4902, in rknn.base.RKNNlib.converter.convert_pytorch_new.PyTorchOpConverter.report_missing_conversion
E NotImplementedError: The following operators are not implemented: ['quantized::batch_norm']
E Please feedback the detailed log file <conversion.log> to the RKNN Toolkit development team.
E You can also check github issues: https://github.com/rockchip-linux/rknn-toolkit/issues
Load Pytorch JIT model failed!
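A side note on the W line in the log: it says the .jit file was serialized by torch 1.6 while torch 1.9.0+cu102 is installed. That mismatch is not the cause of the NotImplementedError, but it can be silenced by loading the model with the installed torch and re-saving it, which rewrites the archive in the current serialization format. A minimal sketch, using a toy scripted module (`Net` is a hypothetical stand-in, since the real at_0015.jit isn't available here):

```python
import os
import tempfile

import torch


class Net(torch.nn.Module):  # toy stand-in for the real at_0015.jit
    def forward(self, x):
        return x * 2.0


with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "model.jit")
    dst = os.path.join(d, "model_resaved.jit")

    # Pretend `src` is the old-format file on disk.
    torch.jit.save(torch.jit.script(Net()), src)

    # Load with the currently installed torch and save again:
    # the re-saved archive uses the current serialization format,
    # so the version-mismatch warning should go away.
    m = torch.jit.load(src, map_location="cpu")
    torch.jit.save(m, dst)
    ok = os.path.exists(dst)

print(ok)
```

Again, this only addresses the format warning; the missing `quantized::batch_norm` converter remains.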
So I see the problem is the unimplemented operator ['quantized::batch_norm']. But what is the workaround?
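One workaround that is often suggested for missing `quantized::*` converters: fuse each BatchNorm into the preceding Conv *before* quantizing, so the exported graph contains `quantized::conv2d` instead of a standalone `quantized::batch_norm`. Whether this applies depends on the network actually having Conv+BN pairs; the sketch below uses a hypothetical `Block` module to show the fusion step (`torch.quantization.fuse_modules`, available in torch 1.9):

```python
import torch
import torch.nn as nn


class Block(nn.Module):
    """Hypothetical stand-in for the real network; the point is the Conv+BN+ReLU pattern."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))


model = Block().eval()  # Conv+BN fusion requires eval mode

# Fold BN (and ReLU) into the conv; after this, `bn` and `relu` are
# replaced by nn.Identity, so quantizing the fused model emits
# quantized::conv2d ops and no quantized::batch_norm.
fused = torch.quantization.fuse_modules(model, [["conv", "bn", "relu"]])

print(type(fused.bn).__name__)
```

After fusing, re-run the usual prepare/convert quantization flow on `fused` and re-export the .jit file before feeding it to rknn-toolkit.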
pi-null-mezon changed the title from "Can not convert qunatized jit model into rknn" to "Can not convert quantized jit model into rknn" on Mar 21, 2024.
I have a quantized *.jit model (quantization was performed in torch), but rknn-toolkit cannot load it :(
rknn-toolkit_v1.7.3 + torch_v1.9.0
Console output is shown above.
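To see exactly which operators a TorchScript model uses (and confirm `quantized::batch_norm` is the only blocker), you can walk the graph's node kinds. Against the real file you would use `torch.jit.load(path).graph`; since that file isn't available here, the sketch inspects a tiny scripted module (`Tiny` is a hypothetical stand-in):

```python
import torch


class Tiny(torch.nn.Module):
    """Toy stand-in; inspect your loaded at_0015.jit the same way."""

    def forward(self, x):
        return torch.relu(x + 1.0)


graph = torch.jit.script(Tiny()).graph
kinds = sorted({n.kind() for n in graph.nodes()})
print(kinds)

# For the real model:
#   m = torch.jit.load("at_0015.jit", map_location="cpu")
#   kinds = sorted({n.kind() for n in m.graph.nodes()})
# and look for entries starting with "quantized::".
```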