Two small optimizations when exporting the stream model to ONNX #19
Comments
@SherryYu33 Thank you very much for the suggestions; I learned a lot!
@GuanHengcong After the refactor, the Unfold module's parameter names no longer match those in the original state_dict. The simplest workaround is to comment out that part of convert_to_stream.
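The workaround above (skip checkpoint entries whose names no longer exist in the refactored model, rather than letting the load fail) can be sketched in plain Python. `filter_state_dict` and the key names below are illustrative, not from the repo; with PyTorch, a similar effect can be had via `model.load_state_dict(filtered, strict=False)`.

```python
def filter_state_dict(checkpoint_sd, model_keys):
    """Keep only checkpoint entries whose key still exists in the new model.

    Entries belonging to renamed modules (e.g. a refactored Unfold) are
    dropped instead of raising a key-mismatch error during loading.
    """
    kept = {k: v for k, v in checkpoint_sd.items() if k in model_keys}
    dropped = [k for k in checkpoint_sd if k not in model_keys]
    return kept, dropped


# Usage with made-up key names (not the actual GTCRN state_dict):
ckpt = {"sfe.conv.weight": 1, "unfold.weight": 2}
new_model_keys = {"sfe.conv.weight", "unfold_stream.weight"}
kept, dropped = filter_state_dict(ckpt, new_model_keys)
# "unfold.weight" ends up in `dropped`; only matching keys are loaded
```

Note that the dropped parameters are then left at their freshly initialized values, so this only makes sense for modules that are parameter-free or re-derived during streaming conversion.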
Sorry for the slow reply. I removed both the reshape and the else branch, and got the error below. Could you take another look?
@GuanHengcong The SFE inside the full model has a channel count of 3; you need to replace that with
It still throws a similar error: the shapes don't match up. Could I contact you over WeChat or QQ? Sorry for being such a bother; if this drags on any longer I'm afraid I won't graduate until next year /(ㄒoㄒ)/~~
I tested the optimized SFE implementation above for its actual real-time factor on the board; there was essentially no change.
We found that GTCRN's power consumption is much higher than that of models with comparable compute. @Xiaobin-Rong @SherryYu33
It will then generate a file named gtcrn_pnnx.py in the current directory, which contains an export_onnx() function; you can adjust the output format there as you like, and of course finally run the result through onnxsim once more.
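The export-then-simplify flow described above might look like the following; the output file name is an assumption, since the actual name written by export_onnx() depends on how pnnx generated the script.

```shell
# Call the pnnx-generated exporter (gtcrn_pnnx.py must be in the current dir).
python -c "import gtcrn_pnnx; gtcrn_pnnx.export_onnx()"

# Then pass the exported graph through onnx-simplifier one more time.
# Assumed file names; adjust to whatever export_onnx() actually wrote.
python -m onnxsim gtcrn_pnnx.onnx gtcrn_pnnx_sim.onnx
```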