
Which version of vllm should be installed #5

Open
xinghuang2050 opened this issue Jul 1, 2024 · 4 comments

Comments

@xinghuang2050

Hi, when I follow the default steps to set up the environment:
pip install vllm
it automatically installs vllm 0.5.0.post1, which requires transformers>=4.40.0.

When installing SPPO (which requires transformers==4.36.2), I got the following errors:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm 0.5.0.post1 requires tokenizers>=0.19.1, but you have tokenizers 0.15.2 which is incompatible.
vllm 0.5.0.post1 requires transformers>=4.40.0, but you have transformers 4.36.2 which is incompatible.

Should I downgrade vllm or ignore this error? How can I fix it?
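For context, the conflict pip reports is just a minimum-version check. Here is a minimal illustrative sketch (version strings are taken from the error above; this is not pip's actual resolver, only a naive numeric comparison):

```python
# Naive version check illustrating the conflict pip reports.
# NOT pip's real resolver; it only compares dotted numeric versions.

def parse(version):
    """Turn a string like '4.36.2' into the tuple (4, 36, 2) for ordered comparison."""
    return tuple(int(part) for part in version.split("."))

installed = {"transformers": "4.36.2", "tokenizers": "0.15.2"}
# Minimum versions vllm 0.5.0.post1 declares, per the error message:
required = {"transformers": "4.40.0", "tokenizers": "0.19.1"}

for name, minimum in required.items():
    if parse(installed[name]) < parse(minimum):
        print(f"{name} {installed[name]} conflicts with >={minimum}")
```

Both packages fail the check, which is exactly the pair of conflicts pip prints.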

@angelahzyuan
Collaborator

You should be fine ignoring this (at least for vllm 0.5.0). Let me know if you run into errors when running the code.

@swyoon

swyoon commented Jul 6, 2024

I am afraid the transformers version does matter.
When I run run_sppo_gemma-2.sh, I get the following error:
ValueError: Tokenizer class GemmaTokenizer does not exist or is not currently imported.
Running run_sppo_mistral.sh gives a different error:

 fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 40 column 3

Both errors appear to stem from a tokenizer version mismatch.
My current environment has

  • transformers==4.36.2
  • tokenizers==0.15.2

Can you look into this issue? @angelahzyuan

@angelahzyuan
Collaborator

angelahzyuan commented Jul 6, 2024

@swyoon Hi, the setup instructions are for Llama3 and Mistral only. Gemma-2 is a newly released model, and these issues come from compatibility between transformers and vllm. We suggest trying the other models first. If you want to use Gemma-2, you will likely need the most recent versions of the following dependencies:

  • the most recent transformers from git: pip install git+https://github.com/huggingface/transformers.git
  • the most recent vllm, installed from source
  • up-to-date accelerate and trl: pip install -U accelerate trl

@swyoon

swyoon commented Jul 7, 2024

@angelahzyuan thank you so much for the very prompt answer.
Updating to transformers==4.42.3, tokenizers==0.19.1, and trl==0.9.4 solved the issue on my side (running Llama3), though I have not thoroughly tested other cases yet.
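To check your own environment against the versions reported in this thread, here is a small standard-library sketch (the package set and "working" versions come from the comments above; anything not installed is simply reported as missing):

```python
from importlib.metadata import PackageNotFoundError, version

# Versions reported in this thread as working for Llama3:
working = {"transformers": "4.42.3", "tokenizers": "0.19.1", "trl": "0.9.4"}

for name, reported in working.items():
    try:
        print(f"{name}: installed {version(name)} (thread reports {reported} works)")
    except PackageNotFoundError:
        print(f"{name}: not installed")
```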
