[Bugfix] Add image placeholder for OpenAI Compatible Server of MiniCPM-V (vllm-project#6787)

Co-authored-by: hezhihui <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Signed-off-by: Alvant <[email protected]>
3 people authored and Alvant committed Oct 26, 2024
1 parent 1652610 commit 7c63149
Showing 2 changed files with 5 additions and 1 deletion.
2 changes: 2 additions & 0 deletions examples/minicpmv_example.py
@@ -4,6 +4,8 @@
 from vllm.assets.image import ImageAsset
 
 # 2.0
+# The official repo doesn't work yet, so we need to use a fork for now
+# For more details, please see: https://github.com/vllm-project/vllm/pull/4087#issuecomment-2250397630
 # MODEL_NAME = "HwwwH/MiniCPM-V-2"
 # 2.5
 MODEL_NAME = "openbmb/MiniCPM-Llama3-V-2_5"
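For context, a minimal offline-inference sketch in the spirit of examples/minicpmv_example.py; the image asset, sampling settings, and max_model_len below are illustrative and not taken from this commit:

# Minimal sketch of offline MiniCPM-V inference with vLLM.
# The "(<image>./</image>)" placeholder is expanded by the model's
# chat template into the actual image tokens.
from transformers import AutoTokenizer

from vllm import LLM, SamplingParams
from vllm.assets.image import ImageAsset

MODEL_NAME = "openbmb/MiniCPM-Llama3-V-2_5"

image = ImageAsset("stop_sign").pil_image.convert("RGB")
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
llm = LLM(model=MODEL_NAME, trust_remote_code=True, max_model_len=4096)

messages = [{
    "role": "user",
    "content": "(<image>./</image>)\nWhat is shown in this image?",
}]
prompt = tokenizer.apply_chat_template(messages,
                                       tokenize=False,
                                       add_generation_prompt=True)

outputs = llm.generate(
    {
        "prompt": prompt,
        # Pass the PIL image alongside the prompt as multimodal data.
        "multi_modal_data": {"image": image},
    },
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)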
4 changes: 3 additions & 1 deletion vllm/entrypoints/chat_utils.py
@@ -100,7 +100,9 @@ def _image_token_str(model_config: ModelConfig,
if model_type == "phi3_v":
# Workaround since this token is not defined in the tokenizer
return "<|image_1|>"
if model_type in ("blip-2", "chatglm", "fuyu", "minicpmv", "paligemma"):
if model_type == "minicpmv":
return "(<image>./</image>)"
if model_type in ("blip-2", "chatglm", "fuyu", "paligemma"):
# These models do not use image tokens in the prompt
return None
if model_type.startswith("llava"):
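The chat_utils.py change is what lets the OpenAI Compatible Server insert the MiniCPM-V image placeholder into chat prompts instead of treating the model as one with no image token. A hedged client-side sketch, assuming a server is already running for this model; the base URL, API key, and image URL are placeholders:

# Sketch of a multimodal chat request against the vLLM OpenAI-compatible
# server; with this fix, the server inserts "(<image>./</image>)" into
# the prompt text for minicpmv models.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openbmb/MiniCPM-Llama3-V-2_5",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {
                "type": "image_url",
                "image_url": {"url": "https://example.com/stop_sign.jpg"},
            },
        ],
    }],
)
print(response.choices[0].message.content)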
