
Support mistral interleaved attn #9414

Conversation

@patrickvonplaten (Contributor) commented Oct 16, 2024

Allow Mistral models with interleaved attention patterns to be runnable in vLLM. For now we'll have to cap the max model length to the minimum of the allowed attention patterns until we figure out how to cleanly support interleaved attention (similar to Gemma).
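Roughly, the capping idea reads like this (a minimal sketch only, not the actual vllm/config.py change; the function name and the list handling are assumptions):

```python
from typing import Optional, Union

def cap_to_sliding_window(
    derived_max_model_len: int,
    sliding_window: Optional[Union[int, list]],
) -> int:
    """Sketch: cap the usable context length at the smallest attention window.

    For an interleaved pattern, `sliding_window` is assumed to be a list where
    `None` entries mean full attention and integers mean windowed attention.
    """
    if sliding_window is None:
        return derived_max_model_len
    if isinstance(sliding_window, list):
        windows = [w for w in sliding_window if w is not None]
        if not windows:
            return derived_max_model_len
        return min(derived_max_model_len, min(windows))
    return min(derived_max_model_len, sliding_window)
```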


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@patrickvonplaten force-pushed the support_mistral_interleaved_attn branch from 2b925d9 to b6e20a7 on October 16, 2024 09:18
vllm/config.py: 3 review threads (outdated, resolved)
@DarkLight1337 (Member) left a comment

Overall looks good, just some nits

@DarkLight1337 added the ready label on Oct 16, 2024
@DarkLight1337 enabled auto-merge (squash) on October 16, 2024 10:58
@DarkLight1337 merged commit 415f76a into vllm-project:main on Oct 16, 2024
52 of 53 checks passed
@yananchen1989 commented:

May I ask what model name is used in vLLM for Ministral? I can only find mistralai/Ministral-8B-Instruct-2410 in the Hugging Face mistralai collection: https://huggingface.co/mistralai.

Also, when I try to load it, I get:

OSError: Can't load tokenizer for 'mistralai/Ministral-8B-Instruct-2410'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'mistralai/Ministral-8B-Instruct-2410' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer.

@mgoin (Member) commented Oct 18, 2024

@yananchen1989 make sure to load that model with tokenizer_mode="mistral" since it is a mistral format checkpoint
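For reference, a minimal offline-inference sketch of that usage (the prompt and sampling settings are purely illustrative):

```python
from vllm import LLM, SamplingParams

# Load the mistral-format checkpoint with the mistral tokenizer.
llm = LLM(
    model="mistralai/Ministral-8B-Instruct-2410",
    tokenizer_mode="mistral",
)

outputs = llm.generate(
    ["Give me a one-line summary of sliding-window attention."],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```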

@yananchen1989 commented:

> @yananchen1989 make sure to load that model with tokenizer_mode="mistral" since it is a mistral format checkpoint

Should I set tokenizer_mode="mistral" every time I use the Mistral series? Thanks.

@yananchen1989 commented:

I'm using tokenizer_mode = "mistral" if args.llm_name.startswith('mistralai') else 'auto'; not sure if this way of writing it is fine.

charlifu pushed a commit to charlifu/vllm that referenced this pull request Oct 23, 2024
vrdn-23 pushed a commit to vrdn-23/vllm that referenced this pull request Oct 23, 2024
@patrickvonplaten (Contributor, Author) commented:

> @yananchen1989 make sure to load that model with tokenizer_mode="mistral" since it is a mistral format checkpoint

> Should I set tokenizer_mode="mistral" every time I use the Mistral series? Thanks.

Yes, that would be nice! We could actually consider making it the default for the mistralai organization.
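A rough illustration of that idea (purely a sketch with a hypothetical helper name, not how vLLM actually resolves the tokenizer mode):

```python
from typing import Optional

def default_tokenizer_mode(model: str, explicit_mode: Optional[str] = None) -> str:
    """Sketch: default to the "mistral" tokenizer mode for mistralai checkpoints."""
    if explicit_mode is not None:
        return explicit_mode
    # Heuristic: repos under the mistralai org ship mistral-format tokenizers.
    if model.lower().startswith("mistralai/"):
        return "mistral"
    return "auto"
```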

Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
garg-amit pushed a commit to garg-amit/vllm that referenced this pull request Oct 28, 2024
@heheda12345 (Collaborator) commented:

I see that the Hugging Face config of mistralai/Ministral-8B-Instruct-2410 looks like this:

"sliding_window": 32768,

So why do we need to support the sliding window as a list?

FerdinandZhong pushed a commit to FerdinandZhong/vllm that referenced this pull request Oct 29, 2024
sumitd2 pushed a commit to sumitd2/vllm that referenced this pull request Nov 14, 2024
KuntaiDu pushed a commit to KuntaiDu/vllm that referenced this pull request Nov 20, 2024
mfournioux pushed a commit to mfournioux/vllm that referenced this pull request Nov 20, 2024
Labels: ready (ONLY add when PR is ready to merge/full CI is needed)
5 participants