Add support for PEFT models

Description

Currently, only models that are instances of the 🤗 Transformers PreTrainedModel class are supported. It would be useful to add support for models fine-tuned with Parameter-Efficient Fine-Tuning (🤗 PEFT) methods.
Motivation
Adding support for 🤗 PEFT models would allow the same analyses to be performed on models optimised for efficient fine-tuning and inference on consumer hardware.
Additional context
Mostly TBD, as PEFT introduces a small set of trainable parameters that differ from those of the original PreTrainedModel.
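For illustration, here is a minimal sketch of that parameter mismatch, assuming a LoRA adapter created with 🤗 PEFT's get_peft_model (the "gpt2" checkpoint is just an example):

```python
# Minimal sketch: wrapping a base model with a LoRA adapter adds a small
# set of trainable parameters that do not exist on the original model.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")  # example checkpoint
peft_model = get_peft_model(base, LoraConfig(task_type="CAUSAL_LM"))

# Only the adapter weights are trainable; the printed ratio is well below 1%.
peft_model.print_trainable_parameters()

# Adapter parameter names (e.g. "...lora_A...", "...lora_B...") have no
# counterpart in the base PreTrainedModel's state dict.
adapter_params = [n for n, p in peft_model.named_parameters() if p.requires_grad]
print(adapter_params[:2])
```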
Commit to Help
I'm willing to help with this feature.
Thanks for the report @DanielSc4! We'll evaluate how complex it would be to support out-of-the-box PeftModel classes in Inseq.
In the meantime, a viable workaround is to call model.merge_and_unload() to convert the model to its equivalent transformers type (XXXForSeq2SeqLM or XXXForCausalLM) before passing it to inseq.load_model.
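A minimal sketch of this workaround, assuming a LoRA adapter (the adapter path "path/to/lora-adapter" and the "gpt2" checkpoint are placeholders) and that inseq.load_model accepts a preloaded model together with a tokenizer:

```python
# Sketch of the merge_and_unload() workaround; the adapter path is a placeholder.
import inseq
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")
peft_model = PeftModel.from_pretrained(base, "path/to/lora-adapter")

# Fold the adapter weights into the base weights and drop the PEFT wrapper,
# leaving a plain transformers causal LM model.
merged_model = peft_model.merge_and_unload()

# The merged model can now be loaded into Inseq like any transformers model.
model = inseq.load_model(merged_model, "saliency", tokenizer="gpt2")
out = model.attribute("Hello, my name is")
out.show()
```

Note that merging folds the adapter into the base weights permanently, so this sidesteps rather than solves first-class PeftModel support.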