TAO YoloV4 .etlt model with Triton server #12
Comments
@rsicak Any update on how you got this working?
Hi @imSrbh, I believe you have to write a client for your model to get the desired outputs, and I may be able to assist you with this if necessary. Let me know if you still need help.
Hi @imSrbh, could you share your analytics config file where you define the object count for a particular class ID? You probably want to update the class-id if you see counts with one model but none with another.
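For context, this kind of per-class counting is usually wired up in a DeepStream nvdsanalytics-style config group (an assumption here, since the thread does not show the actual file; the group name and values below are hypothetical). A stale class-id in such a group is one common reason one model reports counts while another reports none:

```
[roi-filtering-stream-0]
enable=1
# class-id must match the class index your new model assigns;
# a value left over from a previous model silently filters everything out
class-id=0
# ROI polygon as x;y pairs (hypothetical coordinates)
roi-RF=100;100;500;100;500;500;100;500
inverse-roi=0
```

If the new model maps the class of interest to a different index, updating class-id (or setting it to cover all classes, where supported) should restore the counts.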
You can first run the default steps in this GitHub repo.
@morganh-nv @monjha, could you help us with the post-processing parser?
I could help you with that |
Do you have a branch that you're currently working on that you could share? |
The post-processing is available at https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/tao_triton/python/postprocessing/yolov3_postprocessor.py |
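As a rough illustration of what a post-processor like the one linked above does (this is a simplified sketch, not the repo's actual code): a TAO YOLO engine built with the BatchedNMS plugin emits four tensors per batch — detection counts, boxes, scores, and class IDs — and the client-side parser filters by confidence and scales the normalized boxes back to pixel coordinates. All names and thresholds below are illustrative assumptions.

```python
import numpy as np

def parse_batched_nms(num_dets, boxes, scores, classes,
                      conf_threshold=0.3, image_size=(640, 480)):
    """Convert BatchedNMS-style output tensors into per-image detections.

    num_dets: (batch,) number of valid detections per image
    boxes:    (batch, max_dets, 4) normalized [x1, y1, x2, y2]
    scores:   (batch, max_dets) confidence scores
    classes:  (batch, max_dets) class indices
    """
    w, h = image_size
    results = []
    for b in range(len(num_dets)):
        dets = []
        for i in range(int(num_dets[b])):
            # drop low-confidence detections
            if scores[b][i] < conf_threshold:
                continue
            x1, y1, x2, y2 = boxes[b][i]
            dets.append({
                "class_id": int(classes[b][i]),
                "score": float(scores[b][i]),
                # scale normalized coordinates back to pixel space
                "bbox": [x1 * w, y1 * h, x2 * w, y2 * h],
            })
        results.append(dets)
    return results
```

The real parser in the repo additionally writes the detections out in KITTI label format; the core tensor handling follows the same shape conventions as above.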
Hi, is there a guide on how to deploy a YOLOv4 TAO model on Triton Inference Server? I have trained a YOLOv4 model on custom data via the TAO Toolkit and am looking for a guide on serving it with Triton Inference Server. Thank you.