
Does the current OpenAIServer only support LLMs and not VLMs? #2581

Closed

llery opened this issue Dec 17, 2024 · 3 comments

Comments


llery commented Dec 17, 2024

After starting a VLM (vision model) with the OpenAIServer bundled in TensorRT-LLM v0.15.0, I found that messages can only carry content of type=text; attaching image information via image or image_url is not supported, so effectively only LLM inference works. Is this limitation temporary? Is there a plan to address it later? (The source file runtime/multimodal_model_runner.py does support inference for common VLM vision models; for now, is multimodal_model_runner the only option?)
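For context, here is a minimal sketch of the two OpenAI-style Chat Completions payloads being discussed. The model name and image URL are illustrative placeholders, not taken from TensorRT-LLM. The text-only shape is what OpenAIServer accepts; the second shape, which the OpenAI API defines for vision input, is the one being rejected.

```python
# Text-only content part: the shape OpenAIServer accepts (LLM path).
text_only = {
    "model": "my-model",  # placeholder model name
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Describe a cat."}]}
    ],
}

# Multimodal content with an image_url part: the shape the OpenAI Chat
# Completions API defines for vision input, which this server rejects.
multimodal = {
    "model": "my-model",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/cat.png"},  # placeholder
                },
            ],
        }
    ],
}
```

Sending the second payload to a `/v1/chat/completions` endpoint backed by this server is where the failure occurs, since only `type=text` parts are handled.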

@nv-guomingz
Collaborator

@LinPoly would you please take a look at this question?

@LinPoly
Collaborator

LinPoly commented Dec 18, 2024

As far as I know, VLMs are not yet supported in our LLM API, and OpenAIServer is built atop the LLM API, so OpenAIServer does not have VLM support either. We will support them once the LLM API is ready.

@nv-guomingz
Collaborator

Hi @llery, please feel free to reopen this ticket if needed.
