As far as I know, VLMs are not yet supported in our LLM API, and OpenAIServer is built on top of the LLM API, so we don't have VLMs in OpenAIServer yet. We will support them once the LLM API is ready.
After launching a VLM (vision model) with the OpenAIServer bundled in TensorRT-LLM v0.15.0, I found that messages only accept content of type=text; attaching image data via image or image_url is not supported (so effectively it can only serve plain LLM inference). Is this limitation temporary? Is there a plan to address it later? (The runtime/multimodal_model_runner.py in the source code does support inference for common VLM vision models — is multimodal_model_runner the only option for now? See the sketch below for the request shape in question.)
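For reference, a minimal sketch of the two request shapes being discussed, using the standard OpenAI-compatible chat format. The base URL, API key, and model name ("llava") are assumptions for illustration only; per the report, only the text-only request is expected to succeed against the v0.15.0 OpenAIServer.

```python
# Sketch only: endpoint, api_key, and model name are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")

# 1) Text-only content: accepted by the current OpenAIServer.
text_only = client.chat.completions.create(
    model="llava",  # hypothetical model name
    messages=[{
        "role": "user",
        "content": [{"type": "text", "text": "Describe the weather today."}],
    }],
)
print(text_only.choices[0].message.content)

# 2) Image content in the standard "image_url" form: this is what the issue
#    reports as rejected, since the underlying LLM API has no VLM support yet.
with_image = client.chat.completions.create(
    model="llava",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this picture?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }],
)
```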