Support mistral interleaved attn #9414
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:
Force-pushed from 2b925d9 to b6e20a7
Overall looks good, just some nits
Co-authored-by: Cyrus Leung <[email protected]>
May I know what name is used in vLLM for Ministral? Also, when I try to load it,
@yananchen1989 make sure to load that model with |
should I set |
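For anyone following along, here is a minimal sketch of loading such a model. The exact flag suggested in the truncated comment above is not visible on this page, so the arguments below are an assumption (they use vLLM's Mistral-native tokenizer/config/load formats), not necessarily the reviewer's actual advice:

```python
# Hypothetical example: loading Ministral with vLLM's Mistral-native formats.
# The specific option referenced in the comment above was cut off, so treat
# these arguments as an assumption rather than the confirmed recommendation.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Ministral-8B-Instruct-2410",  # assumed model name
    tokenizer_mode="mistral",   # use Mistral's own tokenizer files
    config_format="mistral",    # read Mistral's params.json instead of config.json
    load_format="mistral",      # load the consolidated safetensors weights
)

outputs = llm.generate(["Hello, how are you?"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```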
Signed-off-by: charlifu <[email protected]>
Signed-off-by: Vinay Damodaran <[email protected]>
Yes, that would be nice! We could actually consider making it the default for the mistral organization.
Signed-off-by: Alvant <[email protected]>
Signed-off-by: Amit Garg <[email protected]>
I see that the Hugging Face config has "sliding_window": 32768, so why do we need to support sliding window as a list?
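For context, a purely illustrative sketch of the difference: a scalar value applies the same window to every layer, while the interleaved pattern this PR targets expresses a per-layer schedule as a list. The field names and values below are made up for illustration, not taken from an actual config:

```python
# Illustrative only: scalar vs. per-layer (interleaved) sliding-window schedules.
scalar_config = {"sliding_window": 32768}                 # same window for every layer
interleaved_config = {"sliding_window": [32768, None]}    # pattern repeated across layers;
                                                          # None = full attention for that layer

def window_for_layer(cfg, layer_idx):
    sw = cfg["sliding_window"]
    if isinstance(sw, list):
        # cycle through the pattern: some layers sliding, some full attention
        return sw[layer_idx % len(sw)]
    return sw  # scalar: every layer uses the same window

for i in range(4):
    print(i, window_for_layer(interleaved_config, i))
```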
Signed-off-by: qishuai <[email protected]>
Signed-off-by: Sumit Dubey <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Allow Mistral models with interleaved attention patterns to be runnable in vLLM. For now we'll have to cap the max model length to the minimum sliding-window size in the pattern until we figure out how to cleanly support interleaved attention patterns (similar to Gemma).
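A minimal sketch of the stop-gap capping behavior described above; the function and argument names are illustrative and do not reflect the actual vLLM implementation:

```python
# Sketch of the described stop-gap: when layers use different sliding windows,
# cap the usable model length at the smallest window so that no layer is asked
# to attend further back than it supports. Names here are illustrative.
from typing import List, Optional, Union


def cap_max_model_len(
    requested_max_len: int,
    sliding_window: Optional[Union[int, List[Optional[int]]]],
) -> int:
    if sliding_window is None:
        return requested_max_len
    if isinstance(sliding_window, int):
        windows = [sliding_window]
    else:
        # None entries mean "full attention" for that layer and impose no cap
        windows = [w for w in sliding_window if w is not None]
    if not windows:
        return requested_max_len
    return min(requested_max_len, min(windows))


# e.g. an interleaved 32k-sliding / full-attention pattern with 128k requested:
print(cap_max_model_len(131072, [32768, None]))  # -> 32768
```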