[Doc][CI/Build] Update docs and tests to use vllm serve
#6431
Conversation
👋 Hi! Thank you for contributing to the vLLM project. A full CI run is still required to merge this PR, so once the PR is ready to go, please make sure to run it. If you need all test signals in between PR commits, you can trigger full CI as well. To run full CI, you can do one of these:
I want to hold this off until the release, so people visiting the nightly docs can directly use the CLI.
In the meantime, please feel free to start improving it!
Does --model still work, or will it cause issues with vllm serve? I'm curious if it is sufficient to replace python -m vllm.entrypoints.openai.api_server with vllm serve, or if it specifically replaces python -m vllm.entrypoints.openai.api_server --model.
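For illustration, here is a minimal sketch of the substitution being asked about; the model name and port are placeholders, and the exact behavior of the positional model argument should be confirmed with vllm serve --help:

```bash
# Old invocation: the model is passed via the --model flag
python -m vllm.entrypoints.openai.api_server --model facebook/opt-125m --port 8000

# New CLI: the model is assumed to become a positional argument,
# while the remaining server flags are expected to carry over unchanged
vllm serve facebook/opt-125m --port 8000
```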
Thanks for the clarity; this is a nice PR to update the usage. LGTM!
@simon-mo since
Follow-up to #5090. As a sanity check, I have also updated the entrypoints tests to use the new CLI.
After this, we can update the Docker images and performance benchmarks to use the new CLI.
cc @EthanqX
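As a rough sketch of that follow-up work (not part of this PR), a Docker-based launch could move over in the same way; the image tag and model below are placeholders, not changes from this PR:

```bash
# Current style: arguments after the image are forwarded to the api_server entrypoint,
# so the model is passed via the --model flag
docker run --gpus all -p 8000:8000 vllm/vllm-openai:latest --model facebook/opt-125m

# If the image entrypoint is switched to the new CLI, the model would be positional instead
docker run --gpus all -p 8000:8000 vllm/vllm-openai:latest facebook/opt-125m
```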