
Support Sagemaker async endpoint deployment. #2567

Closed
yinsong1986 opened this issue Nov 17, 2024 · 3 comments

Labels
bug Something isn't working

Comments

@yinsong1986

Description

Someone reported an error when deploying models on LMI SageMaker containers using a SageMaker async endpoint. Please refer to vllm-project/vllm#2912.

Expected Behavior

Error Message

2024-11-16T23:22:06.943:[sagemaker logs] [xxxxxxxxxxxxxxxxxxxxxx] The response from container primary did not specify the required Content-Length header

How to Reproduce?

Steps to reproduce

(vllm-project/vllm#2912)

What have you tried to solve it?

yinsong1986 added the bug (Something isn't working) label on Nov 17, 2024
@prashantsolanki975

Hi Team!
I'm also getting the same error when I pass X-Asynchronus: false in CustomAttributes on an async endpoint. Any idea why?

@prashantsolanki975

Hi @yinsong1986,
were you able to get this issue fixed? Is this issue due to SageMaker being incompatible with the current endpoints?

@siddvenk
Contributor

Hi all, sorry for the delayed response here.

We have enabled async inference in the 0.31.0 container version. The root cause is how SageMaker handles HTTP responses for async inference; we are working with them to fix the underlying issue. A workaround was introduced in 0.31.0 to make this functional.

Note that streaming is not supported with async inference (you cannot specify "stream": true in the parameters). Please see this notebook for an example: https://github.com/deepjavalibrary/djl-demo/blob/master/aws/sagemaker/large-model-inference/sample-llm/lmi-async-inference-demo.ipynb
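For reference, here is a minimal sketch (not taken from the linked notebook) of deploying an LMI container as a SageMaker async endpoint and invoking it with the SageMaker Python SDK. The image URI, model ID, environment variables, instance type, and S3 paths below are placeholders/assumptions; consult the notebook above and the LMI documentation for the exact configuration for your account and region.

```python
import json

import boto3
import sagemaker
from sagemaker import Model
from sagemaker.async_inference import AsyncInferenceConfig

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes a SageMaker execution role is available

# Placeholder LMI container image URI -- the account/region/tag are assumptions,
# check the LMI container release notes for the real 0.31.0 image.
image_uri = "763104351884.dkr.ecr.us-east-1.amazonaws.com/djl-inference:0.31.0-lmi"

model = Model(
    image_uri=image_uri,
    role=role,
    env={
        "HF_MODEL_ID": "your-org/your-model",  # placeholder model configuration
    },
)

# Async endpoints write responses to S3 instead of returning them inline.
async_config = AsyncInferenceConfig(
    output_path=f"s3://{session.default_bucket()}/async-output/",
)

predictor = model.deploy(
    instance_type="ml.g5.2xlarge",  # placeholder instance type
    initial_instance_count=1,
    async_inference_config=async_config,
)

# Invoke asynchronously: the request payload must first be uploaded to S3.
# Do NOT set "stream": true -- streaming is not supported with async inference.
payload = {"inputs": "What is Deep Java Library?", "parameters": {"max_new_tokens": 128}}
input_s3_uri = session.upload_string_as_file_body(
    json.dumps(payload),
    bucket=session.default_bucket(),
    key="async-input/request.json",
)

runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint_async(
    EndpointName=predictor.endpoint_name,
    InputLocation=input_s3_uri,
    ContentType="application/json",
)
print("Result will be written to:", response["OutputLocation"])
```

The result is not returned in the HTTP response; SageMaker writes it to the configured S3 output path (and optionally publishes an SNS notification), which is what avoids the Content-Length issue seen with synchronous-style responses.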
