Thanks again for your great work and code.
I tried different ways to keep the pruned weights at zero, for example by running the following line for each batch:

output.register_hook(lambda grad: grad * mask.float())

But this is very slow. Your solution is much faster, but I could not find the specific lines. Could you please elaborate on how you prevent the pruned weights from updating during gradient backpropagation?
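For reference, here is a minimal sketch of a common fast pattern, not necessarily what this repository does. It assumes a hypothetical masks dict mapping parameter names to float tensors of 0s and 1s with the same shape as each weight. Instead of re-registering a hook on a fresh output tensor every batch (which is likely where the slowdown came from), it zeroes the masked gradients in-place between loss.backward() and optimizer.step(), and re-applies the mask to the weights after the step:

import torch

def apply_grad_masks(model, masks):
    # Zero the gradients of pruned weights in-place; call once per batch,
    # after loss.backward() and before optimizer.step().
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks and param.grad is not None:
                param.grad.mul_(masks[name])

def reapply_weight_masks(model, masks):
    # Re-zero pruned weights after optimizer.step(); this also cancels
    # updates that bypass the raw gradient, such as momentum buffers
    # or weight decay, which would otherwise move pruned weights off zero.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

In the training loop this would be used as: loss.backward(); apply_grad_masks(model, masks); optimizer.step(); reapply_weight_masks(model, masks). The functions iterate over named parameters once per batch with no hook registration, which is why this pattern tends to be much cheaper than per-batch output hooks.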
rahimentezari changed the title from "How no to update pruned weights?" to "How not to update pruned weights?" on Oct 26, 2020.