This repository has been archived by the owner on May 1, 2024. It is now read-only.

Does open_lth give inference speed/memory usage improvement? #18

Open
ajktym94 opened this issue Aug 23, 2022 · 2 comments

Comments

ajktym94 commented Aug 23, 2022

If the open_lth framework is used for Lottery Ticket Hypothesis experiments on a model, will it improve the model's inference speed or memory usage?

As far as I know, even if a model is pruned, its weight tensors stay the same size, so it would take the same time and memory at inference as before.
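
For what it's worth, a minimal sketch of what mask-based pruning looks like in plain PyTorch (this uses torch.nn.utils.prune and an arbitrary layer size for illustration, not open_lth's own pruning code) shows exactly that: the pruned weights are zeroed in place and still occupy the same dense storage.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(1024, 1024)
bytes_before = layer.weight.nelement() * layer.weight.element_size()

# Zero out 80% of the weights by magnitude; this installs a 0/1 mask.
prune.l1_unstructured(layer, name="weight", amount=0.8)
prune.remove(layer, "weight")  # bake the mask into the weight tensor

bytes_after = layer.weight.nelement() * layer.weight.element_size()
sparsity = (layer.weight == 0).float().mean().item()

print(f"sparsity: {sparsity:.0%}")  # ~80% of entries are zero
print(bytes_before == bytes_after)  # True: same dense storage as before
```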


2016312357 commented Nov 9, 2022

The structured sparsity learned with the lottery ticket procedure simplifies and speeds up computation at inference, since a large fraction of the weights are set to zero.
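
If the sparsity really is structured (whole output rows/channels zeroed), the zero rows can be physically sliced away, which does shrink the computation. A rough sketch (the shapes and the every-other-row pattern are made up for illustration):

```python
import torch

W = torch.randn(4096, 4096)
W[::2, :] = 0.0                  # structured sparsity: alternate rows zeroed
x = torch.randn(4096)

keep = W.abs().sum(dim=1) != 0   # surviving (nonzero) rows
W_small = W[keep]                # dense again, but half the rows: half the FLOPs

full = W @ x                     # recomputes the zero rows on every call
small = W_small @ x              # compact result; map back to full via `keep`
assert torch.allclose(full[keep], small)
```

Unstructured per-weight zeros, by contrast, stay scattered inside a dense tensor.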

ajktym94 (Author) commented Dec 6, 2022

Since PyTorch's standard dense kernels do not exploit unstructured sparsity, this most probably will not improve inference speed, right? I assume the pruned channels/filters/weights are just set to 0 rather than actually removed? They would then still take up the same memory as the unpruned model, right?
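
A quick sanity check of this assumption (the sizes and the 90% sparsity level are arbitrary; this is not open_lth code): a dense tensor costs the same number of bytes no matter how many zeros it holds, and only an explicit sparse format such as CSR shrinks storage, and only at high sparsity.

```python
import torch

W = torch.randn(2048, 2048)
W[torch.rand_like(W) < 0.9] = 0.0    # ~90% unstructured zeros

dense_bytes = W.nelement() * W.element_size()

# PyTorch does have sparse tensor formats (torch.sparse), but ordinary
# nn.Module inference does not use them, and kernel support is limited.
W_csr = W.to_sparse_csr()
csr_bytes = sum(t.nelement() * t.element_size()
                for t in (W_csr.values(), W_csr.col_indices(), W_csr.crow_indices()))

print(dense_bytes, csr_bytes)        # CSR wins here only because sparsity is high
# A dense matmul with W still multiplies every zero like any other value,
# so masking alone does not improve wall-clock time.
```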
