
Attempting to Perform Post Training Quantization On Tiny Yolo version 3 #1119

Closed
lilheng opened this issue Dec 6, 2024 · 4 comments
Labels
enhancement New feature or request

Comments

lilheng commented Dec 6, 2024

Upon reading the closed issues regarding YOLO post-training quantization, I have tried using evaluate.py to create a quantized model, but I am stuck. Are there any tips on what I should modify to be able to quantize the model?
Any help would be greatly appreciated!

lilheng added the enhancement (New feature or request) label on Dec 6, 2024
@Giuseppe5
Collaborator

Hello,

Would you mind sharing some extra details about your problem? How exactly are you stuck and what could we do to help?

lilheng (Author) commented Dec 6, 2024

Hello,

So I am doing a research and development course at my university, where my team is trying to implement real-time object detection using the Smartcam application developed by Xilinx. We want to replace the DPU with the FINN framework, compiling a quantized YOLO model trained with Brevitas on the coco128 dataset; the precision can be very low as long as the model can be deployed.
I am not sure how to define the Tiny YOLOv3 model with the Brevitas functions like QuantConv2d, QuantReLU, and MaxPool2d.
What I did was change the YAML configuration file for Tiny YOLOv3 to use the Brevitas implementations, and then proceed with training. But I do not get any good results. Is this the completely wrong way to use Brevitas?

@Giuseppe5
Collaborator

As you can imagine, we are unable to provide full end-to-end help for your specific problem.

We don't have enough visibility into what you are trying to accomplish, and even if we did, we would probably have to allocate our limited time to studying the specific details of your use case in order to come up with the best solution.

My suggestion is to look at the Brevitas and FINN examples and tests to see how we define quantized networks from scratch and export them so that they are FINN-compliant.

If you have more specific issues, please feel free to let us know and we'll do our best to help you.

@Giuseppe5
Collaborator

Hello,

I am going to close this. As mentioned above, please feel free to reopen if we can help you with more specific issues.

I hope you understand, and thanks!
