[Bug] What is the point of installing mmdeploy on Jetson? Running it requires mmdet, mmdet requires mmengine, but only mmengine 0.4.0 can be installed, and building from source hits an MMLogger problem #2593
Comments
Hi, please check whether this PR is helpful to you: #2587
Hi. The reason you need to build this package on the Jetson is that you can use it both for model conversion and for inference through its API.
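For illustration, a minimal sketch of both uses; the config, checkpoint, and image paths are placeholders, not from this thread:
# convert a model with the installed converter (placeholder paths)
python ./tools/deploy.py <deploy_cfg.py> <model_cfg.py> <checkpoint.pth> <test_image.jpg> --work-dir ./work_dir --device cuda:0
# run inference on the converted model with the built SDK demo
./build/bin/object_detection cuda ./work_dir <test_image.jpg>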
You mentioned the install guide, but I followed it and it installed mmengine 0.8.0. The "Install Model Converter" section is:
# build TensorRT custom operators
mkdir -p build && cd build
cmake .. -DMMDEPLOY_TARGET_BACKENDS="trt"
make -j$(nproc) && make install
# install model converter
cd ${MMDEPLOY_DIR}
pip install -v -e .
# "-v" means verbose, or more output
# "-e" means installing a project in editable mode,
# thus any local modifications made to the code will take effect without re-installation.
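A quick sanity check after installation, assuming the build layout used later in this thread (custom ops under mmdeploy/lib):
# verify the converter is importable and the TensorRT custom ops were built
python -c "import mmdeploy; print(mmdeploy.__version__)"
ls ${MMDEPLOY_DIR}/mmdeploy/lib/libmmdeploy_tensorrt_ops.so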
I commented out that line of code, and it does not affect normal operation.
Hi. If you don't need inference results for a test image, it is easy to convert the model on your training machine, get the ONNX model, and build the TensorRT engine from that ONNX model on your Jetson with trtexec. The steps are below.

On your host training machine (where mmdeploy is installed):

python ./tools/deploy.py configs/mmdet/detection/detection_tensorrt_dynamic-64x64-608x608.py /home/user/mmdeploy_ws/deploypth/configs/rtmdet/rtmdet_l_8xb32-300e_coco.py /home/user/mmdeploy_ws/deploypth/epoch_300.pth /home/user/mmdeploy_ws/deploypth/mil_sea_renchuan_kr_TV_014952_20230403131233319_visible.JPG --work-dir ../deploypth --device cuda:0 --show

and you'll get end2end.onnx. Next, place it on your Jetson.

On your Jetson:

trtexec --onnx=end2end.onnx --saveEngine=end2end.engine

Additional options may be needed for dynamic shapes, quantization, and so on. This way will be much easier, but I'm not sure whether it fits your use case.
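For example, a sketch of those extra options for the dynamic 64x64-608x608 deploy config used above; the input name and shape ranges are assumptions, so check them against your model:
# assumed input name "input" and shape range from the deploy config
trtexec --onnx=end2end.onnx --saveEngine=end2end.engine \
    --minShapes=input:1x3x64x64 --optShapes=input:1x3x608x608 --maxShapes=input:1x3x608x608 \
    --fp16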
In my opinion, it helps us avoid the complex calibration configuration when INT8 quantization is needed. Also, it's nice that we can run inference on images on the Jetson immediately. I hope this information helps you, and I would appreciate it if you could point out any mistakes.
This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.
This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.
@613B Hi, thanks for your reply and sorry for my late response. When converting I get:

[01/04/2024-01:13:15] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[01/04/2024-01:13:17] [E] [TRT] ModelImporter.cpp:776:
--- End node ---

I also ran the build step for the TensorRT custom operators:

# build TensorRT custom operators
mkdir -p build && cd build

What is the problem?
Bro, do you have more efficient ways on the Jetson Nano? I have tried both ONNX Runtime and TensorRT to deploy my model, but there is always a problem...

ONNX Runtime:
[2024-01-06 19:15:19.539] [mmdeploy] [info] [model.cpp:35] [DirectoryModel] Load model: "/home/nvidia/文档/mmdeploy_models/rtdetr1"

TensorRT:
(mmdeploy) nvidia@nvidia-desktop:~/文档/mmdeploy/build/bin$ ./object_detection cuda /home/nvidia/文档/mmdeploy_models/rt-detr /home/nvidia/图片/resources/test.jpg
Hi. Probably, the cause is the plugin library. I remember this problem being solved somehow before; I will rifle through the documents about that. If possible, please tell me the model you would like to use and share its configuration files.
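As a sketch, loading the mmdeploy custom-op plugin into trtexec looks like the following; the library path assumes the build layout shown later in this thread:
# load the custom-op plugin so TensorRT can parse mmdeploy-specific nodes
trtexec --onnx=end2end.onnx --saveEngine=end2end.engine \
    --plugins=${MMDEPLOY_DIR}/mmdeploy/lib/libmmdeploy_tensorrt_ops.so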
Hi. Where did you run it?
File structure:
--deploy.json

So should I use this pipeline?
Following your advice, I used trtexec to convert my ONNX file and got an error message. How can I fix this? Thank you for your help again.

(mmdeploy) nvidia@nvidia-desktop:~/文档/mmdeploy_models/rt-detr 2$ trtexec --onnx=end2end.onx --saveEngine=end2end.engine --plugins=/home/nvidia/文档/mmdeploy/mmdeploy/lib/libmmdeploy_tensorrt_ops.so --optShapes=input:1x3x640x640
Correct the typo (end2end.onx -> end2end.onnx) and try it again. Thank you.
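For reference, the corrected invocation with the same flags as above:
trtexec --onnx=end2end.onnx --saveEngine=end2end.engine --plugins=/home/nvidia/文档/mmdeploy/mmdeploy/lib/libmmdeploy_tensorrt_ops.so --optShapes=input:1x3x640x640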
Thanks, a very effective method. Thanks for the reply.
@RunningLeon |
Checklist
Describe the bug
I'd like to ask: what is the point of installing mmdeploy on the Jetson? Is it so models can be converted on the Jetson?
After installing mmdeploy on the Jetson, running it requires mmdet, and mmdet requires mmengine; but mmengine can only be installed up to 0.4.0, and building it from source gives an MMLogger error.
Reproduction
It's just the mmdeploy.py command.
Environment
Error traceback
No response