Replies: 2 comments 1 reply
- Hi @markusdr, great idea. We can add an option to the `save` method to save only the LoRA parameters instead of the whole model. Feel free to create an issue for this. If you are familiar with OpenDelta and can add it to xTuring quickly, that would be very helpful. Thank you.
- @markusdr The library already saves the base model parameters and the LoRA weights separately. The use case you mention is on our roadmap; if you are interested, we can collaborate on it.
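A minimal sketch of what "saving the LoRA weights separately" can look like in plain PyTorch. This is not xTuring's actual implementation; the toy `LoraLinear` module and the convention that adapter parameter names contain `"lora"` are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LoraLinear(nn.Module):
    """Toy linear layer with a frozen base weight and trainable LoRA factors."""
    def __init__(self, d_in, d_out, r=4):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(d_out, d_in))  # frozen base weight
        self.weight.requires_grad = False
        self.lora_A = nn.Parameter(torch.zeros(r, d_in))      # trainable rank-r factors
        self.lora_B = nn.Parameter(torch.zeros(d_out, r))

    def forward(self, x):
        # Effective weight is W + B @ A (standard LoRA formulation).
        return x @ (self.weight + self.lora_B @ self.lora_A).T

model = LoraLinear(16, 16)

# Split the state dict by name: base weights vs. LoRA-only "delta" weights.
full = model.state_dict()
lora_only = {k: v for k, v in full.items() if "lora" in k}
base_only = {k: v for k, v in full.items() if "lora" not in k}

print(sorted(lora_only))  # only the adapter parameters
```

Each split can then be passed to `torch.save` on its own, so the adapter checkpoint stays tiny relative to the base model.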
- Is it possible to save and load only the parameters that were actually fine-tuned with LoRA?
That way, one could fine-tune different specialized models and save a small "delta" model for each, rather than the full model, which is typically several GB in size. At inference time, one would load the base model once and then attach the appropriate delta model.
That scenario is supported and described here for the OpenDelta library: https://github.com/thunlp/OpenDelta
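The size argument behind this delta-model workflow can be sketched with plain PyTorch tensors. The shapes and file names below are illustrative assumptions, not xTuring's or OpenDelta's actual format:

```python
import os
import tempfile
import torch

# Simulate a "base model" state dict and a much smaller rank-8 LoRA delta.
base = {"weight": torch.randn(4096, 4096)}   # full float32 weight matrix (~67 MB)
delta = {
    "lora_A": torch.zeros(8, 4096),          # rank-8 LoRA factors (~0.26 MB total)
    "lora_B": torch.zeros(4096, 8),
}

with tempfile.TemporaryDirectory() as d:
    torch.save(base, os.path.join(d, "base.pt"))
    torch.save(delta, os.path.join(d, "delta.pt"))
    base_mb = os.path.getsize(os.path.join(d, "base.pt")) / 1e6
    delta_mb = os.path.getsize(os.path.join(d, "delta.pt")) / 1e6
    print(f"base: {base_mb:.1f} MB, delta: {delta_mb:.2f} MB")

# At inference time: load the base once, then merge a chosen delta into it,
# using the standard LoRA merge W' = W + B @ A.
merged = base["weight"] + delta["lora_B"] @ delta["lora_A"]
```

With many specialized fine-tunes, only the sub-megabyte delta files multiply; the multi-GB base checkpoint is stored and loaded once.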