pre-train model & prune model (model compression and conversion) #27
Currently the approach is to use a smaller model, e.g. mbnet-0.75. Pruning only sets some weights to zero; a sparse model gives the K210 no speedup and no memory savings. A better way to shrink the model is knowledge distillation.
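The point above, that pruning merely zeroes weights without shrinking the tensor, can be illustrated with a minimal NumPy sketch (hypothetical helper, not from this repo): magnitude pruning zeroes the smallest-magnitude weights, but the pruned tensor has the same shape and storage footprint, which is why the K210, lacking sparse kernels, sees no benefit.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of weights with smallest magnitude.

    Illustrative only: the result has the SAME shape and dtype as the
    input, so the serialized model does not get any smaller.
    """
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

w = np.random.randn(64, 64).astype(np.float32)
pw = magnitude_prune(w, 0.5)
print(pw.shape == w.shape)        # True: same shape, same storage
print(float(np.mean(pw == 0.0)))  # at least 0.5 of the weights are zero
```

Without hardware or runtime support for sparse tensors, those zeros are stored and multiplied like any other weight.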
Thanks very much for the reply. I plan to port an object-detection model to the K210, but the model I currently train is too large and fails to convert. How should I switch the pre-train model, retrain, and successfully convert it to a kmodel?
No need to switch: mbnetv1-0.75 converted to a kmodel with the earlier nncase can run on the K210.
I followed the README and have the same problem: the .h5 model is about 15.8 MB and the converted .tflite model is about 15.4 MB. Did you solve this? @erickyunyi @zhen8838 Also, when I try to convert the .tflite file to a kmodel, the ncc command fails with an error message.
Please use nncase v0.1.0-rc5. The quantized mobilenet-v1 0.75 model can be used on the K210.
Solved: the cause was that I used TensorFlow 2.0 to convert the .h5 model to .tflite.
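The fix above can be sketched as a small version guard. This is an assumed workflow, not code from this repo: nncase v0.1.0-rc5 accepts the .tflite graphs produced by TensorFlow 1.x converters (e.g. 1.15), while TF 2.0 output fails, so it helps to check the TF major version before exporting.

```python
def check_tf_version(version: str) -> None:
    """Reject TF 2.x before exporting a .tflite meant for nncase v0.1.0-rc5."""
    major = int(version.split(".")[0])
    if major >= 2:
        raise RuntimeError(
            "Use TensorFlow 1.x (e.g. 1.15) to export the .tflite; "
            "TF 2.x converter output is rejected by nncase v0.1.0-rc5."
        )

def export_tflite(h5_path: str, tflite_path: str) -> None:
    """Export a Keras .h5 to .tflite with the TF 1.x converter (sketch)."""
    import tensorflow as tf  # assumes a TF 1.x environment is installed
    check_tf_version(tf.__version__)
    converter = tf.lite.TFLiteConverter.from_keras_model_file(h5_path)
    with open(tflite_path, "wb") as f:
        f.write(converter.convert())
```

`from_keras_model_file` exists only on the TF 1.x converter API, which is itself a hint that the export must happen in a 1.x environment.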
hello zheng:
After training on the linked VOC dataset, the default generated .h5 model is 15.8 MB. After pruning with the command make train MODEL=xxxx MAXEP=1 ILR=0.0003 DATASET=voc CLSNUM=20 BATCH=16 PRUNE=True CKPT=log/xxxxxx/yolo_model.h5 END_EPOCH=1, the resulting model size is unchanged, and nncase conversion fails (the model exceeds K210 memory). My goal is to run the model on the K210. How can I achieve this: do I need to change the pre-train model, or can pruning compress the model?
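The observation above, that the .h5 file stays the same size after pruning, can be reproduced in miniature (a hypothetical demonstration, not this repo's code): zeroed weights still occupy the same bytes when serialized densely; only generic compression such as gzip benefits from the zeros, and the K210 loads the dense tensor either way.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
dense = rng.standard_normal(100_000).astype(np.float32)
pruned = dense.copy()
pruned[np.abs(pruned) < 0.7] = 0.0  # roughly half the weights become zero

raw_dense, raw_pruned = dense.tobytes(), pruned.tobytes()
print(len(raw_dense) == len(raw_pruned))  # True: identical serialized size

gz_dense = len(zlib.compress(raw_dense))
gz_pruned = len(zlib.compress(raw_pruned))
print(gz_pruned < gz_dense)  # zeros compress well, but the K210 sees no gain
```

This is consistent with the maintainer's answer: to actually fit in K210 memory, shrink the architecture (e.g. mbnetv1-0.75) or distill, rather than rely on pruning.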