I use FastChat as the framework for both training and dialog-based inference, and FastChat supports Meta's Llama. I was excited to try the 3B Open-Llama model, and the FastChat finetuning scripts all work perfectly with open_llama_3b_v2. Oddly, the FastChat inference framework does not work with my finetuned model or with the original model. Has anyone figured out how to get FastChat's fastchat.serve.cli to support openlm-research models?
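For what it's worth, here is a minimal sanity check I'd run outside FastChat to confirm the base checkpoint itself loads and generates with plain transformers, before digging into fastchat.serve.cli. It assumes the slow LlamaTokenizer, since openlm-research has recommended avoiding the auto-converted fast tokenizer for these checkpoints; the prompt and generation settings are just placeholders.

```python
# Sanity check: load openlm-research/open_llama_3b_v2 with plain transformers
# and generate a few tokens, to rule out a model/tokenizer problem before
# debugging FastChat's CLI. Uses the slow LlamaTokenizer as recommended by
# openlm-research for these checkpoints.
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_path = "openlm-research/open_llama_3b_v2"  # or a local finetuned checkpoint

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate; drop for plain CPU loading
)

prompt = "Q: What is the largest animal?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If this works but `python3 -m fastchat.serve.cli --model-path openlm-research/open_llama_3b_v2` still fails, the problem is presumably in how FastChat selects a model adapter or conversation template for this model path rather than in the checkpoint itself.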