Hi again, @cnbeining @shreyansh26
In the latest release we added a Generic model, which you can use for llama-13b models in our library!
The falcon-7b model, now supported by xturing, may also be a good option for you.
We are also working on adding k-bit quantisation to the Generic model, so it should be released soon.
Then you will be able to use llama-13b, or any other model, with 4-bit quantization.
Hey folks,
I'm trying to fine-tune a 13B/30B model in 4-bit. Any chance you folks could release the script used to convert the 7B model to 4-bit?
Thanks,