
How not to update pruned weights? #6

Open

rahimentezari opened this issue Oct 26, 2020 · 1 comment

Comments

@rahimentezari

Thanks again for your great work and code.
I have tried different ways to keep the pruned weights at zero, for example by registering the following hook for every batch:

```python
output.register_hook(lambda grad: grad * mask.float())
```

However, this is very slow. Your implementation is much faster, but I could not find the specific lines that handle this. Could you elaborate on how you prevent pruned weights from being updated, i.e., from receiving gradient updates during backpropagation?
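
A common and much faster pattern than hooking the output on every batch is to register the hook once per parameter, or simply to re-multiply the weights by the mask after each optimizer step. The sketch below shows both; it is a generic PyTorch illustration, not necessarily this repository's mechanism, and the `masks` dict keyed by parameter name is a hypothetical helper:

```python
import torch


def freeze_pruned_gradients(model, masks):
    """Register a gradient hook once per parameter so pruned entries
    always receive zero gradient. `masks` maps parameter names to 0/1
    tensors of the parameter's shape (hypothetical convention)."""
    for name, param in model.named_parameters():
        if name in masks:
            mask = masks[name].to(device=param.device, dtype=param.dtype)
            # Bind the mask as a default argument so each hook keeps
            # its own mask instead of the loop's last value.
            param.register_hook(lambda grad, m=mask: grad * m)


def reapply_masks(model, masks):
    """Alternative: zero out pruned weights after every optimizer step.
    Unlike the gradient hook, this also undoes movement introduced by
    momentum or weight decay on pruned entries."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name].to(device=param.device, dtype=param.dtype))
```

Note that the gradient hook alone is not always sufficient: optimizers with momentum or weight decay can move pruned weights away from zero even when their gradients are zero, which is why many pruning codebases also re-apply the mask after `optimizer.step()` or inside `forward`.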

@jfrankle (Contributor) commented Nov 3, 2020
