[Feature]: support Mistral-Large-Instruct-2407 function calling #6778
Comments
Same request! I really need vLLM to support function call API interfaces. In fact, many of the currently released LLMs already support function call capabilities. This is very important for downstream application development. Please!
If it uses the same function call format as Mistral 7B v0.3, it should be supported by #5649.
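For context, a minimal sketch of what a tool-calling request against vLLM's OpenAI-compatible server could look like once the parser from #5649 lands. The server launch flags in the comment and the `get_weather` tool are illustrative assumptions, not confirmed by this thread:

```python
# Sketch: tool-calling request to vLLM's OpenAI-compatible endpoint.
# Assumes the server was started with the tool-calling options from #5649,
# e.g. something like:
#   vllm serve mistralai/Mistral-Large-Instruct-2407 \
#       --enable-auto-tool-choice --tool-call-parser mistral
# The `get_weather` tool below is a made-up example.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="mistralai/Mistral-Large-Instruct-2407",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
# If the model chose to call the tool, the parsed call appears here:
print(resp.choices[0].message.tool_calls)
```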
@K-Mistele First of all, thank you and good job on what you're doing to implement function calling on vLLM. I'm using a quantized version of Mistral Large 2407; I don't know if the template is the same, but you can check it on line 6176 here: https://huggingface.co/ModelCloud/Mistral-Large-Instruct-2407-gptq-4bit/blob/main/tokenizer_config.json
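One quick way to compare templates without scrolling through the raw JSON is to load the tokenizer and print its chat template; `transformers` exposes it as the `chat_template` attribute. A small sketch, using the quantized repo id from the comment above:

```python
# Sketch: print the chat template embedded in tokenizer_config.json so it
# can be diffed against the Mistral 7B v0.3 template mentioned earlier.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained(
    "ModelCloud/Mistral-Large-Instruct-2407-gptq-4bit"
)
print(tok.chat_template)
```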
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
MistralAI highlights the compatibility between vLLM and Mistral-Small (https://huggingface.co/mistralai/Mistral-Small-Instruct-2409), but sadly there are several tickets about the fact that vLLM does not implement function calling (aka tools) for Mistral-Small.
🚀 The feature, motivation and pitch

Mistral-Large-Instruct-2407's performance is great, even for Tool Use & Function Calling: https://mistral.ai/news/mistral-large-2407/

Alternatives

No response

Additional context

No response