
[VLM] Support caching in merged multi-modal processor #11341

Open · wants to merge 27 commits into base: main

Conversation

@DarkLight1337 (Member) commented on Dec 19, 2024

The V1 multi-modal cache is currently incompatible with the merged multi-modal processor. To mitigate the performance hit, this PR adds a cache inside the merged multi-modal processor itself.

Note: Originally, none of the models using the merged multi-modal processor supported fine-grained caching, because their HF processors all require text inputs. This is now supported by invoking the inner modality-specific processor instead.
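For illustration, a minimal sketch of the idea: processed outputs are stored in a bounded LRU cache keyed by a content hash of each item's inputs, so repeated multi-modal items skip expensive HF preprocessing. The names here (ProcessingCache, get, put) are hypothetical, not the exact API added in this PR.

    from collections import OrderedDict
    from typing import Any, Optional

    class ProcessingCache:
        """Bounded LRU cache keyed by a content-hash string (illustrative)."""

        def __init__(self, capacity: int) -> None:
            self.capacity = capacity
            self._cache: "OrderedDict[str, Any]" = OrderedDict()

        def get(self, key: str) -> Optional[Any]:
            # Return the cached processor output, marking it recently used
            if key not in self._cache:
                return None
            self._cache.move_to_end(key)
            return self._cache[key]

        def put(self, key: str, value: Any) -> None:
            # Insert/refresh an entry, evicting the least recently used one
            self._cache[key] = value
            self._cache.move_to_end(key)
            if len(self._cache) > self.capacity:
                self._cache.popitem(last=False)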


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can do one of the following:

  • Add the ready label to the PR
  • Enable auto-merge

🚀

@DarkLight1337 changed the title from "[VLM} Refactor merged multi-modal processor to support caching" to "[VLM] Refactor merged multi-modal processor to support caching" on Dec 19, 2024
@DarkLight1337 changed the title from "[VLM] Refactor merged multi-modal processor to support caching" to "[VLM] Support caching in merged multi-modal processor" on Dec 19, 2024
@DarkLight1337 marked this pull request as ready for review on December 19, 2024 at 18:03
The mergify bot added the documentation label (Improvements or additions to documentation) on Dec 19, 2024
Comment on lines 616 to 661
# Imports assumed by this snippet:
from typing import Iterable

import numpy as np
import torch
from blake3 import blake3
from PIL.Image import Image

def _iter_bytes_to_hash(self, key: str, obj: object) -> Iterable[bytes]:
    # Recursive cases: flatten containers, extending the key path for dicts
    if isinstance(obj, (list, tuple)):
        for elem in obj:
            yield from self._iter_bytes_to_hash(key, elem)
        return
    if isinstance(obj, dict):
        for k, v in obj.items():
            yield from self._iter_bytes_to_hash(f"{key}.{k}", v)
        return

    # Simple cases: emit the key bytes followed by the raw value bytes
    if isinstance(obj, str):
        yield key.encode("utf-8")
        yield obj.encode("utf-8")
        return
    if isinstance(obj, bytes):
        yield key.encode("utf-8")
        yield obj
        return
    if isinstance(obj, Image):
        yield key.encode("utf-8")
        yield obj.tobytes()
        return

    # Values convertible to NumPy arrays
    if isinstance(obj, torch.Tensor):
        obj = obj.numpy()
    if isinstance(obj, (int, float)):
        obj = np.array(obj)
    if isinstance(obj, np.ndarray):
        yield key.encode("utf-8")
        yield obj.tobytes()
        return

    msg = f"Unable to hash object of type {type(obj)}"
    raise NotImplementedError(msg)

def _hash_kwargs(self, **kwargs: object) -> str:
    # Feed every (key, value) byte chunk into a single BLAKE3 hasher
    hasher = blake3()

    for k, v in kwargs.items():
        for item_bytes in self._iter_bytes_to_hash(k, v):
            hasher.update(item_bytes)

    return hasher.hexdigest()
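For context, a hypothetical call site inside the processor, tying this digest to the cache sketch above, might look like the following (the names cache, hf_processor, and item_kwargs are assumptions, not code from this PR):

    # Hypothetical call site: skip HF preprocessing for repeated items.
    cache_key = self._hash_kwargs(model_id=model_id, **item_kwargs)

    cached = cache.get(cache_key)
    if cached is None:
        cached = hf_processor(**item_kwargs)  # expensive preprocessing
        cache.put(cache_key, cached)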
DarkLight1337 (Member, Author):
I'm a bit worried about unintentional hash collisions. Is there a better way to do this?
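The worry has substance: under the same key, the str "ab" and the bytes b"ab" feed identical byte streams, and dict recursion joins nested keys with ".", so {"a": {"b": "v"}} hashes the same as a flat "a.b" key. A common mitigation (a sketch, not part of this PR) is to type-tag and length-prefix every chunk so the concatenated stream uniquely encodes its inputs:

    import struct

    from blake3 import blake3

    def _frame(tag: bytes, data: bytes) -> bytes:
        # Type tag + fixed-width length prefix + payload: chunk boundaries
        # and value types can no longer be confused across inputs.
        return tag + struct.pack("<Q", len(data)) + data

    hasher = blake3()
    hasher.update(_frame(b"K", "a.b".encode("utf-8")))  # key chunk
    hasher.update(_frame(b"S", "v".encode("utf-8")))    # str-value chunk
    digest = hasher.hexdigest()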

@ywang96 self-assigned this on Dec 20, 2024