[Bug]: MiniCPM-V-2 does not appear to have a file named preprocessor_config.json #6934

Closed
yangJirui opened this issue Jul 30, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@yangJirui

Your current environment

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] sentence-transformers==2.7.0
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] transformers==4.43.2
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==2.3.1
[pip3] vllm_nccl_cu12==2.18.1.0.4.0
[conda] blas                      1.0                         mkl  
[conda] mkl                       2023.1.0         h6d00ec8_46342  
[conda] mkl-service               2.4.0           py311h5eee18b_1  
[conda] mkl_fft                   1.3.6           py311ha02d727_1  
[conda] mkl_random                1.2.2           py311ha02d727_1  
[conda] numpy                     1.24.3          py311h08b1b3b_1  
[conda] numpy-base                1.24.3          py311hf175353_1  
[conda] numpydoc                  1.5.0           py311h06a4308_0  
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] sentence-transformers     3.0.1                    pypi_0    pypi
[conda] torch                     2.3.1                    pypi_0    pypi
[conda] transformers              4.42.4                   pypi_0    pypi
[conda] triton                    2.3.1                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    NIC2    NIC3    NIC4    CPU Affinity    NUMA Affinity      GPU NUMA ID
GPU0     X      NV8     NV8     NV8     NV8     NV8     NV8     NV8     NODE    PXB     NODE    SYS     SYS     0-31,64-95      0 N/A
GPU1    NV8      X      NV8     NV8     NV8     NV8     NV8     NV8     NODE    PXB     NODE    SYS     SYS     0-31,64-95      0 N/A
GPU2    NV8     NV8      X      NV8     NV8     NV8     NV8     NV8     NODE    NODE    PXB     SYS     SYS     0-31,64-95      0 N/A
GPU3    NV8     NV8     NV8      X      NV8     NV8     NV8     NV8     NODE    NODE    PXB     SYS     SYS     0-31,64-95      0 N/A
GPU4    NV8     NV8     NV8     NV8      X      NV8     NV8     NV8     SYS     SYS     SYS     PXB     NODE    32-63,96-127    1 N/A
GPU5    NV8     NV8     NV8     NV8     NV8      X      NV8     NV8     SYS     SYS     SYS     PXB     NODE    32-63,96-127    1 N/A
GPU6    NV8     NV8     NV8     NV8     NV8     NV8      X      NV8     SYS     SYS     SYS     NODE    PXB     32-63,96-127    1 N/A
GPU7    NV8     NV8     NV8     NV8     NV8     NV8     NV8      X      SYS     SYS     SYS     NODE    PXB     32-63,96-127    1 N/A
NIC0    NODE    NODE    NODE    NODE    SYS     SYS     SYS     SYS      X      NODE    NODE    SYS     SYS
NIC1    PXB     PXB     NODE    NODE    SYS     SYS     SYS     SYS     NODE     X      NODE    SYS     SYS
NIC2    NODE    NODE    PXB     PXB     SYS     SYS     SYS     SYS     NODE    NODE     X      SYS     SYS
NIC3    SYS     SYS     SYS     SYS     PXB     PXB     NODE    NODE    SYS     SYS     SYS      X      NODE
NIC4    SYS     SYS     SYS     SYS     NODE    NODE    PXB     PXB     SYS     SYS     SYS     NODE     X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4

🐛 Describe the bug

vLLM can successfully load "MiniCPM-Llama3-V-2_5", but it throws an error when loading "MiniCPM-V-2". The code is as follows:

MINICPMV_PATH = "local_path/MiniCPM-V-2"

llm = LLM(model=MINICPMV_PATH,
          tensor_parallel_size=1,
          gpu_memory_utilization=0.95,
          trust_remote_code=True,
          max_model_len=4096)

The following are the detailed logs:

File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/infer_codes/miniCpmV_infer_vllm.py", line 37, in _build_vllm_and_tokenzier
[rank0]:     llm = LLM(model=MINICPMV_PATH, 
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/entrypoints/llm.py", line 155, in __init__
[rank0]:     self.llm_engine = LLMEngine.from_engine_args(
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/engine/llm_engine.py", line 447, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/engine/llm_engine.py", line 265, in __init__
[rank0]:     self._initialize_kv_caches()
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/engine/llm_engine.py", line 364, in _initialize_kv_caches
[rank0]:     self.model_executor.determine_num_available_blocks())
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/executor/gpu_executor.py", line 94, in determine_num_available_blocks
[rank0]:     return self.driver_worker.determine_num_available_blocks()
[rank0]:   File "/mllm/yangjirui03/envs/miniCpmV_vllm2/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/worker/worker.py", line 179, in determine_num_available_blocks
[rank0]:     self.model_runner.profile_run()
[rank0]:   File "/mllm/yangjirui03/envs/miniCpmV_vllm2/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank0]:     return func(*args, **kwargs)
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/worker/model_runner.py", line 927, in profile_run
[rank0]:     model_input = self.prepare_model_input(
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/worker/model_runner.py", line 1265, in prepare_model_input
[rank0]:     model_input = self._prepare_model_input_tensors(
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/worker/model_runner.py", line 839, in _prepare_model_input_tensors
[rank0]:     builder.add_seq_group(seq_group_metadata)
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/worker/model_runner.py", line 493, in add_seq_group
[rank0]:     per_seq_group_fn(inter_data, seq_group_metadata)
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/worker/model_runner.py", line 468, in _compute_multi_modal_input
[rank0]:     mm_kwargs = self.multi_modal_input_mapper(mm_data)
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/multimodal/registry.py", line 93, in map_input
[rank0]:     .map_input(model_config, data_value)
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/multimodal/base.py", line 228, in map_input
[rank0]:     return mapper(InputContext(model_config), data)
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/multimodal/image.py", line 117, in _default_input_mapper
[rank0]:     image_processor = self._get_hf_image_processor(model_config)
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/multimodal/image.py", line 109, in _get_hf_image_processor
[rank0]:     return cached_get_image_processor(
[rank0]:   File "/mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/offical_vllm_for_miniCPM/vllm/vllm/transformers_utils/image_processor.py", line 17, in get_image_processor
[rank0]:     processor = AutoImageProcessor.from_pretrained(
[rank0]:   File "/mllm/yangjirui03/envs/miniCpmV_vllm2/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py", line 410, in from_pretrained
[rank0]:     config_dict, _ = ImageProcessingMixin.get_image_processor_dict(pretrained_model_name_or_path, **kwargs)
[rank0]:   File "/mllm/yangjirui03/envs/miniCpmV_vllm2/lib/python3.10/site-packages/transformers/image_processing_base.py", line 335, in get_image_processor_dict
[rank0]:     resolved_image_processor_file = cached_file(
[rank0]:   File "/mllm/yangjirui03/envs/miniCpmV_vllm2/lib/python3.10/site-packages/transformers/utils/hub.py", line 373, in cached_file
[rank0]:     raise EnvironmentError(
[rank0]: OSError: /mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/MiniCPM-V-2 does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co//mllm/yangjirui03/workspace/ThirdPartyMLLM/MiniCPM_V/MiniCPM-V-2/tree/None' for available files.
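
The root cause is visible in the last frames: vLLM's default multimodal input mapper calls AutoImageProcessor.from_pretrained, which requires a preprocessor_config.json that this MiniCPM-V-2 checkpoint does not ship. As a minimal sketch, the same failure can be reproduced outside vLLM (using the same hypothetical local path as in the reproduction code above):

from transformers import AutoImageProcessor

# Hypothetical local checkout of the MiniCPM-V-2 checkpoint, matching the
# path used in the reproduction code above.
MINICPMV_PATH = "local_path/MiniCPM-V-2"

# Raises OSError: the checkpoint directory contains no
# preprocessor_config.json for AutoImageProcessor to resolve.
image_processor = AutoImageProcessor.from_pretrained(MINICPMV_PATH,
                                                      trust_remote_code=True)
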
@yangJirui yangJirui added the bug Something isn't working label Jul 30, 2024
@yangJirui
Author

Duplicate: OpenBMB#10

@DarkLight1337
Member

Please refer to the note:

https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_vision_language.py#L85-L88

Sorry for the confusion! We'll update the "Supported Models" page to reflect this.
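
For context, the linked note indicates that the official MiniCPM-V-2 checkpoint does not work with vLLM out of the box and that a patched repository should be loaded instead. A minimal sketch of the workaround, assuming the fork named in the note (HwwwH/MiniCPM-V-2) is still current:

from vllm import LLM

# Patched MiniCPM-V-2 repository suggested by the linked note; the exact
# model ID is taken from that note and may have changed since.
MINICPMV_PATH = "HwwwH/MiniCPM-V-2"

llm = LLM(model=MINICPMV_PATH,
          tensor_parallel_size=1,
          gpu_memory_utilization=0.95,
          trust_remote_code=True,
          max_model_len=4096)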

@yangJirui
Author

> Please refer to the note:
>
> https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_vision_language.py#L85-L88
>
> Sorry for the confusion! We'll update the "Supported Models" page to reflect this.

Thanks, it works for me.
