[Bugfix] Use "vision_model" prefix for MllamaVisionModel (#9628)
Signed-off-by: mgoin <[email protected]>
mgoin authored Oct 24, 2024
1 parent bb01f29 commit b7df53c
Showing 1 changed file with 2 additions and 1 deletion.
3 changes: 2 additions & 1 deletion vllm/model_executor/models/mllama.py

@@ -1053,7 +1053,8 @@ def __init__(self,
         self.image_size = config.vision_config.image_size

         self.vision_model = MllamaVisionModel(config.vision_config,
-                                              quant_config)
+                                              quant_config,
+                                              prefix="vision_model")
         self.language_model = MllamaForCausalLM(
             config.text_config,
             cache_config=cache_config,
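For context, a minimal sketch of why such a `prefix` argument matters. This is an assumption-laden illustration, not vLLM's actual implementation: quantization configs commonly select layers by their fully qualified dotted name, so a submodule constructed without its parent's prefix reports bare layer names that the config's `vision_model.*` rules would fail to match.

```python
# Hypothetical sketch (not vLLM code): prefix-based layer naming,
# as used to match quantization rules to submodules by dotted path.

def qualify(prefix: str, name: str) -> str:
    """Join a module prefix and a layer name into a dotted path."""
    return f"{prefix}.{name}" if prefix else name

class VisionModel:
    """Toy stand-in for a vision tower that names its own layers."""
    def __init__(self, prefix: str = ""):
        # Each layer records its fully qualified name under the prefix.
        self.layer_names = [qualify(prefix, f"layers.{i}") for i in range(2)]

# Without the prefix, the layers would report "layers.0", "layers.1";
# with it, they report "vision_model.layers.0", "vision_model.layers.1",
# which a name-keyed quantization config can actually match.
vision = VisionModel(prefix="vision_model")
```

The fix in this commit follows the same idea: passing `prefix="vision_model"` when constructing `MllamaVisionModel` makes its parameters register under the `vision_model.` namespace.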
