
[Bug]: Issue Running LLaVA with vLLM Due to Tensor Size Mismatch #4421

Closed
OualidBougzime opened this issue Apr 28, 2024 · 10 comments
Labels
bug Something isn't working

Comments

@OualidBougzime
Your current environment

The output of `python collect_env.py`

🐛 Describe the bug

I'm attempting to integrate LLaVA with vLLM for image processing, but I'm encountering a tensor size mismatch error when executing my script.

Setup:
I installed vLLM along with other required packages using the following command:
!pip install vllm==0.4.1 kaleido python-multipart torch==2.2.1 torchvision==0.17.1 torchaudio==2.2.1

Code:
Here's the script I used to run LLaVA:

import torch
from vllm import LLM
from vllm.sequence import MultiModalData
import torchvision.transforms as transforms
from PIL import Image
import io

def run_llava_pixel_values_debug():

    llm = LLM(
        model="llava-hf/llava-1.5-7b-hf",
        enforce_eager=True,
        tensor_parallel_size=1,
        image_input_type="pixel_values",
        image_token_id=32000,
        image_input_shape="1,3,224,224",
        image_feature_size=576,
    )

    prompt = "<image>" * 576 + (
        "\nUSER: What is the content of this image?\nASSISTANT:")

    # Convert the image to a 1x3x224x224 pixel-value tensor to match image_input_shape
    with open("3d-background-with-hexagonal-shapes-texture_23-2150473185.jpg", "rb") as f:
        image_file = f.read()

    image = Image.open(io.BytesIO(image_file))
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    encoded = transform(image).unsqueeze(0)  # Add batch dimension

    outputs = llm.generate(prompt,
                           multi_modal_data=MultiModalData(
                               type=MultiModalData.Type.IMAGE, data=encoded))

    for o in outputs:
        generated_text = o.outputs[0].text
        print(generated_text)

run_llava_pixel_values_debug()

Error:
Upon running this script, I receive the following error:
RuntimeError: The size of tensor a (257) must match the size of tensor b (577) at non-singleton dimension 1.

Could anyone assist in identifying the source of this issue and suggest how I might correct the tensor size mismatch? Any help or suggestions would be greatly appreciated.

OualidBougzime added the bug label on Apr 28, 2024
@DarkLight1337
Member

The image should have size 1,3,336,336.

@OualidBougzime
Author

> The image should have size 1,3,336,336.

I have the same error even if I change the size.

@DarkLight1337
Member

DarkLight1337 commented Apr 28, 2024

> The image should have size 1,3,336,336.

> I have the same error even if I change the size.

You should ensure that the image actually passed into the model also has this size (i.e. change the image itself, not only the config). In the future, we will add image preprocessing to vLLM so that this step is no longer necessary.
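
For reference, the 257 vs 577 in the error comes from the vision tower: a 224x224 input yields 16x16 + 1 = 257 patch embeddings, while the model expects 24x24 + 1 = 577 from a 336x336 input. A minimal preprocessing sketch (assuming a local JPEG and torchvision; the file name is just a placeholder) could look like this:

import io

import torchvision.transforms as transforms
from PIL import Image

# Read the raw image bytes (the file name here is only an example).
with open("example.jpg", "rb") as f:
    image_bytes = f.read()

# Resize to 336x336 so the tensor matches image_input_shape="1,3,336,336".
image = Image.open(io.BytesIO(image_bytes)).convert("RGB")
transform = transforms.Compose([
    transforms.Resize((336, 336)),
    transforms.ToTensor(),
])
pixel_values = transform(image).unsqueeze(0)  # shape: (1, 3, 336, 336)

The resulting tensor can then be passed as MultiModalData(type=MultiModalData.Type.IMAGE, data=pixel_values).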

@OualidBougzime
Author

> The image should have size 1,3,336,336.

> I have the same error even if I change the size.

> You should ensure that the image actually passed into the model also has this size (i.e. change the image itself, not only the config).

Yes, the image now has that size, but I still get the same error.

@DarkLight1337
Member

> Yes, the image now has that size, but I still get the same error.

To better pinpoint the issue, can you show the stack trace of the error?

@OualidBougzime
Author

> Yes, the image now has that size, but I still get the same error.

> To better pinpoint the issue, can you show the stack trace of the error?

It's working now. I'm using this code, which can process any image format:

import torch
from vllm import LLM
from vllm.sequence import MultiModalData
import torchvision.transforms as transforms
from PIL import Image
import io

def run_llava_pixel_values_debug():

    llm = LLM(
        model="llava-hf/llava-1.5-7b-hf",
        enforce_eager=True,
        tensor_parallel_size=1,
        image_input_type="pixel_values",
        image_token_id=32000,
        image_input_shape="1,3,336,336",
        image_feature_size=576,
    )

    prompt = "<image>" * 576 + (
        "\nUSER: What is the content of this image?\nASSISTANT:")

    # Load the image file as raw bytes
    with open("3d-background-with-hexagonal-shapes-texture.jpg", "rb") as f:
        image_file = f.read()

    # Convert bytes data to PIL Image
    image = Image.open(io.BytesIO(image_file))

    # Define a transformation to tensor
    transform = transforms.Compose([
        transforms.Resize((336, 336)),
        transforms.ToTensor(),
    ])

    tensor_image = transform(image).unsqueeze(0)  # Add batch dimension

    outputs = llm.generate(prompt,
                           multi_modal_data=MultiModalData(
                               type=MultiModalData.Type.IMAGE, data=tensor_image))

    for o in outputs:
        generated_text = o.outputs[0].text
        print(generated_text)

run_llava_pixel_values_debug()

But I have a question: is LLaVA 1.6 supported by vLLM yet?

@DarkLight1337
Member

It's not supported yet. We are working on it though!

@OualidBougzime
Author

> It's not supported yet. We are working on it though!

Thank you for the information! I have one other question: how can I specify the number of tokens to generate and the temperature with vLLM in this code?

@DarkLight1337
Member

You can pass SamplingParams to LLM.generate.
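
A minimal sketch, reusing llm, prompt, and tensor_image from the snippet above (the parameter values here are arbitrary examples):

from vllm import SamplingParams

# max_tokens caps the number of generated tokens; temperature controls randomness.
sampling_params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(prompt,
                       sampling_params=sampling_params,
                       multi_modal_data=MultiModalData(
                           type=MultiModalData.Type.IMAGE, data=tensor_image))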

@DarkLight1337
Member

DarkLight1337 commented Apr 29, 2024

Btw, if you absolutely must use LLaVA-1.6, I have a fork in #4199 which adds experimental support for it.
