[Frontend] Add readiness and liveness endpoints to OpenAI API server #7078
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge). To run full CI, you can do one of these:
Thanks for the contribution! Can you add some tests to verify the response returned by the new endpoints?
Sure, I'll work on it!
To ensure that #6883 can be finished ASAP, let's defer merging this PR until the other one is done.
@mfournioux we use the existing …
OK, now I see the discussion in the linked issue :)
@mfournioux Thanks for the great PR! I'm kind of new to Kubernetes so I'm a little confused here, but it seems like the readiness probe is going to return a 200 OK response irrespective of whether the model is loaded or not, right? I was under the assumption that the K8s probes check for status code and not necessarily responses? Should we be adding a test to see what it returns when the model is in fact not loaded?
Naming-wise I think something more generic would be better, like what was proposed in the original issue (keep …).
Sure. One of the issues discussed with the health endpoint is that it only becomes available and returns HTTP 200 after the API server has started up, which can take a certain amount of time depending on the model size. The idea behind this PR is to introduce two endpoints that allow Kubernetes to determine, on the readiness side, when vLLM is ready to respond, and, on the liveness side, whether vLLM is alive or needs to be restarted.
Regarding your question "it seems like the readiness probe is going to return a 200 OK response irrespective of whether the model is loaded or not, right?": in Kubernetes, when you configure your deployment, you can use startup probes to determine when a container application has started. Liveness and readiness probes do not start until the startup probe succeeds, so those probes do not interfere with your application startup. This is particularly useful when you have slow-starting containers (for model loading, for instance) and want to run liveness checks on them; it avoids them getting killed before they are up and running.
```python
if model_weights > 0:
    return ReadinessResponse(ready="ok")
else:
    return ReadinessResponse(ready="ko")
```
I think the confusion stems from here. It seems that the readiness response incorrectly returns a 200 response (with value "ko", not sure whether it means anything to Kubernetes) even when the model hasn't finished loading yet.
OK I see, thanks for the clarification. The readiness probe should not return 200 if not ready; I'll correct it.
I think if `None` is returned from the function, then 200 OK is still returned. You should return an error response (or whatever Kubernetes expects) explicitly.
Indeed, Kubernetes treats any code less than 200 or greater than or equal to 400 as a failure.
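For illustration, here is a minimal sketch of a readiness handler that signals failure through the status code, assuming a FastAPI app and a hypothetical `engine_is_ready()` check (neither is the PR's actual code):

```python
from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()


def engine_is_ready() -> bool:
    # Hypothetical placeholder: a real check would verify that the model
    # weights are loaded and the KV cache is set up.
    return False


@app.get("/ready")
async def readiness() -> JSONResponse:
    if engine_is_ready():
        # A 2xx code makes the Kubernetes probe succeed.
        return JSONResponse(status_code=200, content={"ready": "ok"})
    # A code >= 400 makes the probe fail, so the pod is not marked Ready.
    return JSONResponse(status_code=503, content={"ready": "ko"})
```

Raising `HTTPException(status_code=503)` would work just as well; the key point is that "not ready" must show up in the status code, not only in the response body.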
Sure, we should indeed keep "/health" for liveness and only add a readiness endpoint; I'll do the update and the renaming at the same time. I will check the "ready" semantics already included in the "/health" endpoint in order to find a solution.
Yes, completely. I will add a test to check what the '/ready' endpoint returns if the model is not yet loaded.
```diff
@@ -719,4 +719,4 @@ class DetokenizeRequest(OpenAIBaseModel):
 
 
 class DetokenizeResponse(OpenAIBaseModel):
-    prompt: str
+    prompt: str
```
Please avoid deleting the last line here. (Since otherwise, the file remains unchanged)
```python
    assert response.status_code == HTTPStatus.OK


@pytest.mark.asyncio
async def test_get_readiness_ok(client: openai.AsyncOpenAI):
```
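As a reference, here is a hedged sketch of how the truncated body of this test might look; the base-URL handling and the use of `requests` are assumptions for illustration, not necessarily what the PR does:

```python
from http import HTTPStatus

import openai
import pytest
import requests


@pytest.mark.asyncio
async def test_get_readiness_ok(client: openai.AsyncOpenAI):
    # Assumption: the OpenAI client targets ".../v1", so strip that suffix
    # to reach the server root where the new /ready route would live.
    base_url = str(client.base_url).rstrip("/").removesuffix("/v1")
    response = requests.get(base_url + "/ready")
    assert response.status_code == HTTPStatus.OK
```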
I guess you're going to update this test to check when the server is not ready?
Yes exactly, I am working on it.
Regarding readiness, I have worked on creating a proper unit test for when the server is not ready. My plan was to test that, when the model weights are not loaded or the KV cache is not set up, the readiness endpoint returns an error.
But when I checked how the vLLM server is launched, I realized that the endpoints are not callable until the server is properly deployed and the model is loaded with the KV cache set up. So I don't see how I can test the case where the model weights are not loaded or the KV cache is not set up, because if these conditions are not met, the readiness endpoint will not be callable.
So, do you have any other idea how to do this test?
Is it compulsory to add this test for the PR to be merged?
Hmm, this would basically defeat the purpose of this PR in terms of fulfilling #6073. If the server cannot accept any requests until everything has been fully loaded, then there is essentially no difference between `/ready` and `/health`. Instead, we should enable the `/health` endpoint to respond before the vLLM engine has finished starting up.
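To make that suggestion concrete, here is an illustrative sketch (not vLLM's actual implementation) where the web server comes up first and a hypothetical background task loads the engine, so `/health` answers immediately while `/ready` only succeeds once loading completes:

```python
import asyncio

from fastapi import FastAPI, Response

app = FastAPI()
engine_ready = asyncio.Event()


async def load_engine() -> None:
    # Stand-in for model weight loading and KV cache setup.
    await asyncio.sleep(30)
    engine_ready.set()


@app.on_event("startup")
async def start_loading() -> None:
    # Start loading in the background so the server can answer probes right away.
    asyncio.create_task(load_engine())


@app.get("/health")
async def health() -> Response:
    # Liveness: the web server is up even while the engine is still loading.
    return Response(status_code=200)


@app.get("/ready")
async def ready() -> Response:
    # Readiness: succeed only once the engine has finished loading.
    return Response(status_code=200 if engine_ready.is_set() else 503)
```

With such a split, a liveness probe on `/health` would not kill the pod during a long model download, while a readiness probe on `/ready` keeps traffic away until startup has finished.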
I understand your point. Furthermore, after checking the new 0.5.4 release, I have noticed that several updates have been added to the RPC server to check whether it is ready. So I no longer think the readiness endpoint implemented in this PR is useful.
These are the next possible actions I propose:
- Close this PR, for two reasons:
  - The readiness endpoint implemented in this PR checks whether the model weights are loaded and the KV cache is set up. An audit of the code shows that the server cannot be up until the weights and KV cache are properly loaded. In addition, the 0.5.4 release includes updates which determine whether the RPC server is ready. So there is no point in checking these here, as the health endpoint will already cover it.
  - As cited in [Feature]: Add readiness endpoint /ready and return /health earlier (vLLM on Kubernetes) #6073, regarding deployment on k8s there is a need to implement k8s probes (startup, readiness, liveness) to have an autonomous deployment which waits for the model to be loaded and then marks the pod as ready when the health endpoint returns 200. I think this is not directly related to the vLLM server; it is more about implementing a proper Helm chart which configures it.
- I can open a new PR which proposes an example Helm chart for vLLM deployment on k8s, including k8s probes.
@mfournioux thanks for all your work on this feature.
I do have a deployment using a startup probe and liveness checks afterwards.
The main issue I have with the startup probe is that it is a workaround for applications with a potentially long startup that are unable to communicate their readiness. One never knows how much time a pod needs (downloading the model, weights, ...), and in the meantime liveness cannot be checked.
Switching to a liveness check that is available very early during startup would be nice, but that then requires a readiness indicator to not send traffic until vLLM is ready.
I have not looked at the recent changes yet. But there really should be a way to bring up the webserver (endpoint) early and also to indicate when it's ready.
Additionally I would love for some metrics to also be returned during the initialization phase, allowing for that to be observed.
As for the Helm chart idea, I am thrilled about an official chart, so people don't have to individually write their deployments and figure out how to best configure vLLM and its checks, or also storage/caching. Good liveness and readiness checks are also something that could come with it. I still stand behind the proposal that vLLM should be a better K8s citizen and provide these endpoints as well as possible.
@frittentheke I have opened PR #9199 to share a Helm chart as an example of how to deploy vLLM on k8s, including probe configuration.
@mfournioux I know this is rather unrelated to your work here ... but could you maybe "properly" link this PR to the referenced issue it will fix? See https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue It's just nicer to see which issues have PRs already ;-)
Sure, I completely agree. I have just properly linked the PR to the referenced issue.
I am closing this PR because, as mentioned earlier, the implementation of k8s probes is not directly related to the vLLM server; it is more about implementing a proper Helm chart which configures it. I have opened a new PR #9199 which proposes an example Helm chart for vLLM deployment on k8s, including k8s probes.
This PR adds readiness and liveness endpoints to the OpenAI API server.
The readiness endpoint will enable Kubernetes to determine when vLLM is ready for requests, in particular whether the model weights are loaded.
As for the liveness endpoint, it will allow Kubernetes to determine whether vLLM is alive or not.
FIX #6073
PR Checklist
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- `[Bugfix]` for bug fixes.
- `[CI/Build]` for build or continuous integration improvements.
- `[Doc]` for documentation fixes and improvements.
- `[Model]` for adding a new model or improving an existing model. Model name should appear in the title.
- `[Frontend]` for changes on the vLLM frontend (e.g., OpenAI API server, `LLM` class, etc.)
- `[Kernel]` for changes affecting CUDA kernels or other compute kernels.
- `[Core]` for changes in the core vLLM logic (e.g., `LLMEngine`, `AsyncLLMEngine`, `Scheduler`, etc.)
- `[Hardware][Vendor]` for hardware-specific changes. Vendor name should appear in the prefix (e.g., `[Hardware][AMD]`).
- `[Misc]` for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:
- Use `format.sh` to format your code.
- Update `docs/source/` if the PR modifies the user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes
Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with `rfc-required` and might not go through the PR.

What to Expect for the Reviews
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:
- An `action-required` label will be added to the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.

Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!