Add usage of int8-mixed-bf16 quantization with X86InductorQuantizer #2668
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/2668
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit e7e2525 with merge base 56c7b4e. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Branch force-pushed from 3be20b6 to d396e90.
LG, thanks!
Can you please update the branch so we can test and merge this?
Branch force-pushed from 48d84b5 to e7e2525.
@svekars Rebased onto the latest main branch. Please take another look.
Description
Add usage of int8-mixed-bf16 quantization in X86InductorQuantizer.
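For context, here is a minimal sketch of the kind of workflow the tutorial change documents, assuming the PT2E export and quantization APIs available around this PR (`capture_pre_autograd_graph`, `prepare_pt2e`, `convert_pt2e`). The `resnet18` model and input shape are placeholders for illustration, not taken from the tutorial itself:

```python
import torch
import torchvision.models as models  # placeholder example model
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
import torch.ao.quantization.quantizer.x86_inductor_quantizer as xiq

# Placeholder model and example inputs (assumed for this sketch).
model = models.resnet18().eval()
example_inputs = (torch.randn(1, 3, 224, 224),)

with torch.no_grad():
    # Export the eager model into a graph ahead of quantization.
    exported_model = capture_pre_autograd_graph(model, example_inputs)

    # Configure the X86 Inductor quantizer with its default int8 config.
    quantizer = xiq.X86InductorQuantizer()
    quantizer.set_global(xiq.get_default_x86_inductor_quantization_config())

    # Insert observers, run a calibration pass, then convert.
    prepared_model = prepare_pt2e(exported_model, quantizer)
    prepared_model(*example_inputs)  # calibration
    converted_model = convert_pt2e(prepared_model)

    # int8-mixed-bf16: compile and run under CPU autocast so that
    # non-quantized ops execute in bfloat16 rather than float32.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        optimized_model = torch.compile(converted_model)
        optimized_model(*example_inputs)
```

The autocast context is what distinguishes the int8-mixed-bf16 flow from the plain int8 flow: quantized ops stay in int8 while the remaining ops run in bfloat16.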
Checklist
cc @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen