From 67901b6a199834fb0c41243a1717610064279fdc Mon Sep 17 00:00:00 2001
From: Roger Wang <136131678+ywang96@users.noreply.github.com>
Date: Mon, 4 Nov 2024 11:47:11 -0800
Subject: [PATCH] [Doc] Update VLM doc about loading from local files (#9999)

Signed-off-by: Roger Wang
---
 docs/source/models/vlm.rst | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/docs/source/models/vlm.rst b/docs/source/models/vlm.rst
index 3377502a6db28..112e9db6a41de 100644
--- a/docs/source/models/vlm.rst
+++ b/docs/source/models/vlm.rst
@@ -242,6 +242,10 @@ To consume the server, you can use the OpenAI client like in the example below:
 
 A full code example can be found in `examples/openai_chat_completion_client_for_multimodal.py `_.
 
+.. tip::
+    Loading from local file paths is also supported in vLLM: you can specify the allowed local media path via ``--allowed-local-media-path`` when launching the API server/engine,
+    and pass the file path as ``url`` in the API request.
+
 .. tip::
     There is no need to place image placeholders in the text content of the API request - they are already represented by the image content.
     In fact, you can place image placeholders in the middle of the text by interleaving text and image content.
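
A minimal sketch of what the new tip enables is shown below. The ``--allowed-local-media-path`` flag is the one named in the documentation above; the model name, the media directory, and the ``file://`` URL form are illustrative assumptions rather than part of the patch, so check the vLLM docs for the exact URL format expected by the server.

.. code-block:: python

    # Sketch only: assumes the server was launched with something like
    #   vllm serve llava-hf/llava-1.5-7b-hf --allowed-local-media-path /data/images
    # The model name, directory, and file:// URL form are assumptions for illustration.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",
        api_key="EMPTY",  # the vLLM OpenAI-compatible server does not require a real key by default
    )

    chat_response = client.chat.completions.create(
        model="llava-hf/llava-1.5-7b-hf",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    # The referenced file must live under the directory passed to
                    # --allowed-local-media-path, otherwise the request is rejected.
                    "image_url": {"url": "file:///data/images/example.jpg"},
                },
            ],
        }],
    )
    print(chat_response.choices[0].message.content)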