
Can not convert quantized jit model into rknn #442

pi-null-mezon opened this issue Mar 21, 2024 · 0 comments
I have a quantized *.jit model (quantization was performed in torch), but rknn-toolkit cannot load it. :(

rknn-toolkit_v1.7.3 + torch_v1.9.0
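(For reference, a minimal hypothetical sketch of the torch-side export — a toy model, not the actual at_0015.jit — using torch eager-mode static quantization. An unfused BatchNorm is enough to leave a quantized::batch_norm* op in the traced graph:)

```python
import torch
import torch.nn as nn

class ToyNet(nn.Module):
    """Toy model with a BatchNorm deliberately left unfused."""
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)   # unfused -> quantized::batch_norm*
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.bn(self.conv(self.quant(x))))

model = ToyNet().eval()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 32, 32))               # calibration pass
torch.quantization.convert(model, inplace=True)

traced = torch.jit.trace(model, torch.randn(1, 3, 32, 32))
# The unfused BN survives as a quantized batch_norm op, which is
# exactly the operator rknn-toolkit's pytorch importer rejects.
print('quantized::batch_norm' in str(traced.inlined_graph))
```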

...
rknn.config(quantize_input_node=True,
            mean_values=mean,
            std_values=std,
            quantized_dtype='dynamic_fixed_point-i8',
            target_platform='rv1126',
            batch_size=100)

print('--> Loading model')
ret = rknn.load_pytorch(model=jit_model_file, input_size_list=input_size)
if ret != 0:
    print('Load Pytorch JIT model failed!')
    exit(ret)
...

console output:

I Start importing pytorch...
/home/sasha/.face_chain_cache/model_files/feature_extractor/masked/at_0015.jit ********************
D import clients finished
W Pt model version is 1.6(same as you can check through <netron>), but the installed pytorch is 1.9.0+cu102. This may cause the model to fail to load.
E Catch exception when loading pytorch model: /home/sasha/.face_chain_cache/model_files/feature_extractor/masked/at_0015.jit!
E Traceback (most recent call last):
E   File "rknn/api/rknn_base.py", line 399, in rknn.api.rknn_base.RKNNBase.load_pytorch
E   File "rknn/base/RKNNlib/RK_nn.py", line 161, in rknn.base.RKNNlib.RK_nn.RKnn.load_pytorch
E   File "rknn/base/RKNNlib/app/importer/import_pytorch.py", line 129, in rknn.base.RKNNlib.app.importer.import_pytorch.ImportPytorch.run
E   File "rknn/base/RKNNlib/converter/convert_pytorch_new.py", line 5120, in rknn.base.RKNNlib.converter.convert_pytorch_new.convert_pytorch.load
E   File "rknn/base/RKNNlib/converter/convert_pytorch_new.py", line 4902, in rknn.base.RKNNlib.converter.convert_pytorch_new.PyTorchOpConverter.report_missing_conversion
E NotImplementedError: The following operators are not implemented: ['quantized::batch_norm']
E Please feedback the detailed log file <conversion.log> to the RKNN Toolkit development team.
E You can also check github issues: https://github.com/rockchip-linux/rknn-toolkit/issues
Load Pytorch JIT model failed!

So the problem is the unimplemented operator 'quantized::batch_norm'. Is there a workaround?
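One possible workaround (a sketch, not confirmed by the RKNN team): fuse Conv+BatchNorm(+ReLU) before quantizing, so the BN folds into the conv weights and no standalone quantized::batch_norm op ever appears in the traced graph. With torch eager-mode quantization on a hypothetical toy model:

```python
import torch
import torch.nn as nn

class ToyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        return self.dequant(self.relu(self.bn(self.conv(self.quant(x)))))

model = ToyNet().eval()
# Fold BN (and ReLU) into the conv BEFORE quantization.
model = torch.quantization.fuse_modules(model, [['conv', 'bn', 'relu']])
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
model(torch.randn(1, 3, 32, 32))               # calibration pass
torch.quantization.convert(model, inplace=True)

traced = torch.jit.trace(model, torch.randn(1, 3, 32, 32))
# BN is gone: it was folded into the quantized conv op.
assert 'quantized::batch_norm' not in str(traced.inlined_graph)
torch.jit.save(traced, 'fused_quantized.jit')
```

Alternatively, skip torch-side quantization entirely: export the float model to TorchScript and let rknn-toolkit quantize it itself via rknn.build(do_quantization=True, dataset='./dataset.txt'), since in that flow the importer only has to understand float ops.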
