This repository has been archived by the owner on May 1, 2024. It is now read-only.
If the open_lth framework is used for Lottery Ticket Hypothesis experiments on a model, will it improve the model's inference speed or memory usage?
As far as I know, even if the models are pruned, they would use the same memory as before and hence take the same time and memory as the unpruned models.
The sparsity of the model learned through lottery-ticket pruning can simplify and speed up computation at inference time, since many of the weights are set to zero.
Since PyTorch does not have general support for sparse operations, this most probably will not improve the inference speed, right? I assume the pruned channels/filters/weights are just set to 0 and are not actually removed? They still take up the same memory as an unpruned model, right?
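To make the point above concrete, here is a minimal stdlib-only sketch (not open_lth's actual code; the weight values and 90% pruning rate are made up for illustration) of mask-based magnitude pruning. The zeroed weights stay in place in the dense parameter array, so the pruned "model" stores exactly as many values as before:

```python
# Hypothetical dense weight matrix stored as a flat list of floats.
weights = [0.1 * i for i in range(1, 101)]  # 100 weights

# Magnitude pruning at 90% sparsity: keep only the 10 largest-magnitude
# weights and zero out the rest, as mask-based pruning does.
threshold = sorted(abs(w) for w in weights)[-10]
pruned = [w if abs(w) >= threshold else 0.0 for w in weights]

# The zeros are kept in place rather than removed, so the pruned tensor
# has the same number of stored elements (and the same memory footprint)
# as the original, even though only 10 weights remain nonzero.
assert len(pruned) == len(weights)
assert sum(1 for w in pruned if w != 0.0) == 10
```

A dense matrix multiply over `pruned` would still touch all 100 entries, which is why zero-masked weights alone do not reduce inference time or memory without sparse kernels or structural removal of the pruned units.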