Unsloth's implementations for finetuning are always the most accurate, so technically yes, we do influence the training trajectory. For example, we fixed dozens of bugs in Gemma, Llama, Phi, and Mistral, and helped fix a gradient accumulation bug, among other things.
We also make the finetuning process faster!
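For context on the gradient accumulation bug mentioned above, here is a minimal sketch of the kind of issue involved (the function names are illustrative, not Unsloth's actual API): naively averaging each mini-batch's mean loss mis-weights mini-batches that contain different numbers of unpadded tokens, whereas the corrected pattern normalizes once by the total token count across all accumulation steps.

```python
import torch
import torch.nn.functional as F

def accumulate_loss_naive(logits_list, labels_list):
    # Buggy pattern: average the per-mini-batch *mean* losses.
    # Mini-batches with fewer unpadded tokens get over-weighted,
    # so the result differs from training on one large batch.
    total = 0.0
    for logits, labels in zip(logits_list, labels_list):
        total += F.cross_entropy(logits, labels, ignore_index=-100)
    return total / len(logits_list)

def accumulate_loss_fixed(logits_list, labels_list):
    # Corrected pattern: sum token-level losses, then divide once
    # by the total number of unpadded tokens across all mini-batches,
    # matching the loss of a single large batch.
    loss_sum, token_count = 0.0, 0
    for logits, labels in zip(logits_list, labels_list):
        loss_sum += F.cross_entropy(
            logits, labels, ignore_index=-100, reduction="sum"
        )
        token_count += (labels != -100).sum()
    return loss_sum / token_count
```

The two functions only agree when every mini-batch has the same number of unpadded tokens, which is rarely true for variable-length text.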
Thank you so much for your reply! Also, are there any papers about Unsloth? I'd like to understand its principles more deeply so that I can apply them to my project. :D
Will using Unsloth affect the training results, or does it only serve to accelerate the process?