[Feat] Allow symmetric_no_clipping_error for KleidiAI kernels, update Readme and validate Kleidi INT4 quantization path #2570
Conversation
…ated quantizer API:

- Switched to `quantize_()` with `Int8DynamicActivationIntxWeightConfig`
- Validated the move of `packed_linear_int8_dynamic_activation_intx_weight_layout.py` to `torchao/dtypes/uintx`
- Fixed handling of the `SYMMETRIC_NO_CLIPPING_ERR` mapping type
- Validated the INT4 path on a 2-layer `nn.Sequential` model with `torch.int4` weights
- Compared `SYMMETRIC` vs `SYMMETRIC_NO_CLIPPING_ERR` across `PerAxis` and `PerGroup` granularities
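A minimal sketch of the migrated quantization call described above. This is not code from the PR: the import paths and keyword names mirror the identifiers mentioned in the description (`quantize_`, `Int8DynamicActivationIntxWeightConfig`, `PerGroup`, `MappingType.SYMMETRIC_NO_CLIPPING_ERR`, `torch.int4`) but may differ across torchao versions, so treat it as a configuration sketch rather than a definitive recipe.

```python
# Hedged sketch: quantizing a small 2-layer model with torchao's quantize_()
# API, mirroring the validation setup described in this PR. Parameter names
# are assumptions based on the identifiers in the PR description.
import torch
import torch.nn as nn
from torchao.quantization import quantize_
from torchao.quantization.quant_api import Int8DynamicActivationIntxWeightConfig
from torchao.quantization.granularity import PerGroup
from torchao.quantization.quant_primitives import MappingType

# 2-layer model, as in the PR's validation
model = nn.Sequential(nn.Linear(128, 256), nn.Linear(256, 64))

# int4 weights, per-group scales, and the mapping type this PR fixes
config = Int8DynamicActivationIntxWeightConfig(
    weight_dtype=torch.int4,
    weight_granularity=PerGroup(32),
    weight_mapping_type=MappingType.SYMMETRIC_NO_CLIPPING_ERR,
)
quantize_(model, config)  # quantizes the Linear weights in place
```

The same config with `MappingType.SYMMETRIC` and `PerAxis(0)` would cover the other combinations the PR says were compared.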
Dr. CI: ❌ 2 new failures, 3 cancelled jobs, 9 pending as of commit c4c1e50 with merge base 2eb4f97. See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/2570.
CLA bot: Hi @gausah-arm! Thank you for your pull request and welcome to our community. Before we can merge any pull request, contributors must sign the Contributor License Agreement at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA. Once the CLA is signed, our tooling will perform checks and validations.
@digantdesai @jerryzh168 @metascroy, can you please review this PR?
@metascroy - can you please take a look?
Force-pushed from 28c603c to e9746df.
Overall changes look great! Let’s see what CI says.
@nikhil-arm can you fix the linter error (ruff) and then we can land. You could also add the new mapping type to the tests here to get CI coverage: https://github.com/pytorch/ao/blob/main/torchao/experimental/tests/test_int8_dynamic_activation_intx_weight.py#L56
Thank you so much!
Thank you! I have fixed the lint error and finished the changes around the tests. Please review.
Running CI. I'll merge if it passes.
This PR validates support for the INT4 quantization path using KleidiAI by checking the integration against the new `quantize_()` API and `Int8DynamicActivationIntxWeightConfig`. It addresses breakages caused by API refactors, such as the `SYMMETRIC_NO_CLIPPING_ERR` mapping type, and ensures compatibility with the current layout.
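For readers unfamiliar with the two mapping types compared in this PR, the sketch below illustrates their semantics. It is a simplified, self-contained illustration (not torchao's actual implementation): with the asymmetric int8 range (qmin = -128, qmax = 127), plain `SYMMETRIC` derives one scale from the larger magnitude, which can clip an extreme value, while `SYMMETRIC_NO_CLIPPING_ERR` picks the smallest scale at which both extremes remain representable.

```python
def choose_symmetric_scale(min_val, max_val, qmin, qmax, no_clipping_err=False):
    """Pick a quantization scale for a symmetric (zero_point = 0) scheme.

    Simplified illustration of the SYMMETRIC vs SYMMETRIC_NO_CLIPPING_ERR
    mapping types; not torchao's exact code.
    """
    if no_clipping_err:
        # Smallest scale at which both extremes are representable;
        # min_val/qmin and max_val/qmax are each non-negative.
        return max(min_val / qmin, max_val / qmax)
    # Plain symmetric: larger magnitude over half the integer range.
    return max(abs(min_val), abs(max_val)) / ((qmax - qmin) / 2)

qmin, qmax = -128, 127          # asymmetric int8 range
min_val, max_val = -4.0, 10.0   # tensor range with a large positive extreme

s_sym = choose_symmetric_scale(min_val, max_val, qmin, qmax)
s_nce = choose_symmetric_scale(min_val, max_val, qmin, qmax,
                               no_clipping_err=True)

# Largest representable positive value under each mapping type:
print(qmax * s_sym)  # ≈ 9.96 -> max_val of 10.0 would be clipped
print(qmax * s_nce)  # ≈ 10.0 -> no clipping error
```

The same trade-off applies per axis or per group; only the (min_val, max_val) statistics change with the granularity.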