🚀 Feature Request
Motivation

This work focused on GSM8K and other downstream datasets. But can we obtain better foundational sparse models by fine-tuning on pretraining corpora such as SlimPajama, or on instruction-tuning datasets such as Guanaco, to regain the performance lost during SparseGPT pruning? Are there any such extensions you are working on, or early results you could share?

[Optional] Implementation
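I'm not sure how you'd want to approach this, but for the sake of discussion, here is a minimal sketch of one possible recipe: recover the zero pattern that SparseGPT leaves in the weights, run an ordinary causal-LM fine-tuning loop on a recovery corpus, and re-apply the masks after every optimizer step so the pruned weights stay exactly zero. The checkpoint path is a placeholder, the SlimPajama dataset choice and step budget are illustrative, and this assumes a Hugging Face-style pruned model.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: any SparseGPT-pruned causal LM checkpoint.
model = AutoModelForCausalLM.from_pretrained("path/to/sparsegpt-pruned-model")
tokenizer = AutoTokenizer.from_pretrained("path/to/sparsegpt-pruned-model")

# 1) Recover the sparsity masks from the already-zeroed weights (1.0 = keep).
masks = {
    name: (param != 0).float()
    for name, param in model.named_parameters()
    if param.dim() == 2  # restrict to linear weight matrices
}

# 2) Ordinary fine-tuning loop; SlimPajama here, but Guanaco etc. would work the same.
stream = load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for step, example in enumerate(stream):
    batch = tokenizer(
        example["text"], return_tensors="pt", truncation=True, max_length=2048
    )
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    # 3) Re-apply the masks so pruned weights never drift away from zero.
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])

    if step >= 1000:  # illustrative budget, not a tuned number
        break
```

Zeroing the gradients of pruned weights before each step would be an equivalent alternative; re-masking after the step is just simpler to bolt onto an existing loop.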
Additional context