Cortex_m backend: Simplify add + linear fusion passes #15526
Conversation
Reuses the FoldAndAnnotateQParamsPass from the Arm backend to greatly simplify the logic for fusing the ops.

Additionally updates the linear kernel to be numerically correct and computes the kernel_sum ahead of time (AOT) in the quantized_linear_fusion pass. Note that since this replaces the bias node, it typically causes no extra memory usage.

Updates the Linear tests to mirror this, including removing the various matmul tests. Since linear is handled as a separate op rather than as a particular type of matmul, these tests are no longer relevant.

Removes unnecessary stub definitions in operators.py, operators.yaml and op_quantized_linear.cpp.

Leaves a few TODOs since the patch is already large.

Signed-off-by: Adrian Lundell <[email protected]>
Change-Id: I194228ee3ae4b64a92f3f818afb2e045cc3acf91
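The kernel_sum folding described above amounts to moving the input-zero-point correction term of the int8 linear into the bias. A minimal sketch of the idea, assuming per-tensor quantization with a zero weight zero point; the function name and tensor layout are illustrative, not the actual pass API:

```python
import torch

def fold_kernel_sum_into_bias(weight_q: torch.Tensor,   # int8, shape (out, in)
                              bias_q: torch.Tensor,     # int32, shape (out,)
                              input_zero_point: int) -> torch.Tensor:
    # kernel_sum[j] = sum_k weight_q[j, k]; both operands are known AOT.
    kernel_sum = weight_q.to(torch.int32).sum(dim=1)
    # acc[j] = sum_k (x_q[k] - z_x) * w_q[j, k] + bias_q[j]
    #        = sum_k  x_q[k] * w_q[j, k] + (bias_q[j] - z_x * kernel_sum[j])
    # so the correction term can be baked into the bias ahead of time.
    return bias_q.to(torch.int32) - input_zero_point * kernel_sum
```

Since the folded tensor has the same shape as the original bias, swapping the bias node for it should add no extra memory at runtime.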
@AdrianLundell Just want to better understand the rationale behind removing the per-channel support / scratch buffer code and, in general, the stateful-ops-related code? (I do understand the FoldDQQ-related changes to the pass, though.)
The implementation didn't originally produce numerically correct results, so I wanted to fix that before adding per-channel support. Not doing that in this patch was mostly to keep the patch size down and to prioritize the most important functionality before going into details. Regarding the stateful scratch buffer: since we compute the kernel sum (which the buffer was used for) AOT and replace the bias, there is no reason to do it at runtime as I see it. The only use case where that would make sense (using the MVE implementation) is if you are very tight on memory and prefer to spend some extra time during each inference to compute the kernel sum in a scratch buffer rather than keeping it in memory. But then again, there is no reason to do everything in one single patch; better to get a good minimal working example running, IMO.
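For illustration, a quick numerical check of the equivalence argued above (a sketch only, assuming per-tensor quantization with a zero weight zero point; shapes and values are arbitrary):

```python
import torch

torch.manual_seed(0)
x_q = torch.randint(-128, 128, (1, 16), dtype=torch.int32)   # quantized input
w_q = torch.randint(-128, 128, (8, 16), dtype=torch.int32)   # quantized weight
b_q = torch.randint(-1000, 1000, (8,), dtype=torch.int32)    # int32 bias
z_x = 3                                                      # input zero point

# Reference: subtract the input zero point inside the runtime accumulation.
reference = ((x_q - z_x).unsqueeze(1) * w_q).sum(-1) + b_q

# Fused: fold -z_x * kernel_sum into the bias once, ahead of time.
folded_bias = b_q - z_x * w_q.sum(dim=1)
fused = (x_q.unsqueeze(1) * w_q).sum(-1) + folded_bias

assert torch.equal(reference, fused)
```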
cc @freddan80 @per @zingo @oscarandersson8218 @digantdesai