
[Feature] OpenAI-Compatible Tools API + Streaming for Hermes & Mistral models #5649

Merged · 237 commits · Sep 4, 2024

Conversation

K-Mistele
Contributor

@K-Mistele K-Mistele commented Jun 18, 2024

DRAFT: OpenAI Tool Use Checklist

This (Draft) PR will add support for OpenAI-style tool calling in a way that is minimally opinionated about tool use formats & prompt formatting.

The following features are expected to be supported:

  • Custom tool use system prompt template (if desired) -- avoids being opinionated about whether/how the model uses a system prompt to enable tool use
  • Custom tool call return value prompt template (if desired) -- avoids being opinionated about the format in which tool return values are passed back to the model
  • Support for tool_choice="auto" - named tool choice is already supported via guided decoding
  • Streaming tool call responses from the chat completions API
  • Verified support & examples for at least the models listed in the checklist below

I'd welcome anyone who wants to contribute to this, and would be happy to add you to the Constellate AI vllm fork that this PR is based on - please just leave a comment!

Checklist/roadmap:

  • validation of tools and tool_choice
  • sending tools to the model with tool_choice="auto"
    • CLI Argument: enable auto tool choice
    • CLI argument: tool use system prompt template path - specify the template for the prompt that tells the model how to use the provided tools
    • render template with specified tools
    • prepend the rendered template to the existing system prompt, OR use it as the only system prompt if the client didn't specify one (see the sketch after this checklist)
    • verify that the model will return a tool call as the chat completion response
  • returning tool calls to the client
    • Detect if the model is returning a tool call via the first token
      • CLI argument - specify the token / token ID for tool use responses
    • implement a custom extractor class that can be implemented for different models to convert their tool call formats to OpenAI's format
    • non-streaming chat completion response: return the JSON as appropriate response.tool_calls
    • streaming chat completion: if the tool use token is generated, start streaming the tool call tokens to the client.
  • providing tool call results to the model
    • enable specifying a custom chat template
    • support using huggingface transformers to select the tool_use chat template
  • verify support:
    • Nous Research’s Hermes 2 Pro Llama 3 8B
    • Mistral 7B instruct v0.3
  • Add documentation
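
For the "render template with specified tools" and "prepend the rendered template" items above, a minimal sketch of the intended flow (not the PR's actual implementation; the template path and the "tools" template variable are assumptions for illustration) could look like this:

# Sketch: render a tool-use prompt template with the request's tools, then prepend it
# to the client's system prompt, or use it alone if no system prompt was supplied.
import json
from jinja2 import Template

def build_system_prompt(template_path: str, tools: list[dict], client_system_prompt: str | None) -> str:
    with open(template_path) as f:
        tool_prompt = Template(f.read()).render(tools=json.dumps(tools, indent=2))
    # Prepend the rendered template to the existing system prompt, or use it as the
    # only system prompt if the client didn't specify one.
    return f"{tool_prompt}\n\n{client_system_prompt}" if client_system_prompt else tool_prompt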

FIX #3237, #4656

BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE


PR Checklist (Click to Expand)

Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Model] for adding a new model or improving an existing model. Model name should appear in the title.
  • [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
  • [Kernel] for changes affecting CUDA kernels or other compute kernels.
  • [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
  • [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • We adhere to Google Python style guide and Google C++ style guide.
  • Pass all linter checks. Please use format.sh to format your code.
  • The code needs to be well-documented to ensure future contributors can easily understand it.
  • Include sufficient tests to ensure the project stays correct and robust. This includes both unit tests and integration tests.
  • Please add documentation to docs/source/ if the PR modifies user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and may not review the PR.

What to Expect for the Reviews

The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

  • After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability.
  • After the PR is assigned, the reviewer will provide a status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team.
  • After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.
  • Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion.

Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!

@K-Mistele
Contributor Author

K-Mistele commented Jun 19, 2024

Progress! As of the current commits, I can now get the Hermes 2 Pro model to generate a tool call using the --enable-auto-tool-choice and --tool-use-prompt-template flags:

Server:

python -m vllm.entrypoints.openai.api_server --model NousResearch/Hermes-2-Pro-Llama-3-8B --tool-use-prompt-template examples/tool_template_hermes_2_pro.jinja --enable-api-tools --enable-auto-tool-choice

Client:

python examples/openai_chat_completion_client_with_tools.py

Result

Chat completion results:
ChatCompletion(id='cmpl-1354f3f373574d7aa0e1bf0b78915188', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='<tool_call>{"arguments": {"city": "Dallas", "state": "TX", "unit": "fahrenheit"}, "name": "get_current_weather"}</tool_call>', role='assistant', function_call=None, tool_calls=[]), stop_reason=None)], created=1718763539, model='NousResearch/Hermes-2-Pro-Llama-3-8B', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=33, prompt_tokens=367, total_tokens=400))

Now, working on getting it to work for non-streaming responses - then, streaming!
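
For reference, a minimal client along the lines of examples/openai_chat_completion_client_with_tools.py might look roughly like this (a sketch; the exact contents of that example script are an assumption):

# Sketch: an OpenAI-compatible client exercising tool calling against the vLLM server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "state": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city", "state", "unit"],
        },
    },
}]

completion = client.chat.completions.create(
    model="NousResearch/Hermes-2-Pro-Llama-3-8B",
    messages=[{"role": "user", "content": "What's the weather like in Dallas, TX, in fahrenheit?"}],
    tools=tools,
    tool_choice="auto",
)
print(completion.choices[0].message)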

@K-Mistele
Contributor Author

A question I asked in the discord, with some open questions about how to handle configuration:

Setting up function calling for an open model requires a lot of configuration if you want to be unopinionated about the model. Here is a brief list of all the parameters that would be needed:

  • enable "auto" tool choice - allow models to choose the function to call (supported for some models) or ONLY tool_choice="" for named tool choice via guided decoding
  • tool use prompt template - how do you render the list of tools provided in the request into the prompt / system prompt for the conversation?
  • tool use prompt role - what is the role of that message? defaults to system.
  • tool use response start token - different models (e.g. Hermes 2 Pro models vs. Mistral 7B Instruct v0.3) use different tokens in their tokenizers to indicate the start of a tool call response vs. a chat response. It's important that this is configured correctly so that we know whether to send the model's response as a chat response or a tool response, and how to stream the response if stream=True. Because there does not seem to be a uniform convention for defining this across model tokenizers, the user will need to tell the API server which token indicates that a tool response is starting, in order to have fully OpenAI API-compatible tool responses.
  • tool result response/return value message template - the template for returning tool results to the model so it can generate based on the results of the tool - e.g. <tool_response>{"name": "function_name_here", "content": TOOL_RETURN_VALUE_HERE}</tool_response> for Hermes 2 Pro models
  • tool result response/return value message role - the ROLE to use for that message, e.g. tool for Hermes 2 Pro models

The question is: is it better to have all of these as separate CLI flags, or would a JSON configuration file be preferable, so that people can create (and track in version control!) configs that work for popular models?
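
To illustrate the JSON-config option, a per-model config and loader might look roughly like this (every field name here is hypothetical, not an agreed-upon schema):

# Sketch: a hypothetical per-model tool-use config that could be kept in version control.
import json

EXAMPLE_CONFIG = {
    "enable_auto_tool_choice": True,
    "tool_use_prompt_template": "examples/tool_template_hermes_2_pro.jinja",
    "tool_use_prompt_role": "system",
    "tool_call_start_token": "<tool_call>",
    "tool_response_role": "tool",
}

def load_tool_config(path: str) -> dict:
    # A real implementation would validate the fields; this just loads the JSON file.
    with open(path) as f:
        return json.load(f)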

@K-Mistele
Contributor Author

Please see ongoing conversation with the Hugging Face team, Nous Research & transformers maintainer here - this will make it MUCH easier to implement OpenAI API-compatible tool calling into vLLM regardless of model prompt/tokenizer configs.

HF PR for Hermes 2 Pro: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B/discussions/13#66724ea9bd5875ad665f1416

HF PR for Mistral 7B instruct v0.3: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3/discussions/35

Once these are merged, there will be a STANDARD way in transformers to handle templating in tool responses just like for templating chat conversations into prompts with a chat template, and hopefully to pull out tool calls from generated text.

@interstellarninja

'<tool_call>{"arguments": {"city": "Dallas", "state": "TX", "unit": "fahrenheit"}, "name": "get_current_weather"}</tool_call>'

hey great initiative and nice to see Hermes Pro model's tool calls working.

there's a slight issue with this tool call -- our format requires new lines after <tool_call> XML tags:

<tool_call>
{"arguments": {"city": "Dallas", "state": "TX", "unit": "fahrenheit"}, "name": "get_current_weather"}
</tool_call>

Also tool choice should also work since it's basically passing the chosen tool only as part of

@K-Mistele
Contributor Author

Thanks! Tool choice is already working via guided decoding, but I will update the PR to fix the template

@K-Mistele
Contributor Author

Ok the most recent commit seems to fix it:
python examples/openai_chat_completion_client_with_tools.py

ChatCompletion(id='cmpl-48f019602ab64f4ab49c3563318c6d1f', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='<tool_call>\n{"arguments": {"city": "Dallas", "state": "TX", "unit": "fahrenheit"}, "name": "get_current_weather"}\n</tool_call>', role='assistant', function_call=None, tool_calls=[]), stop_reason=None)], created=1718896666, model='NousResearch/Hermes-2-Pro-Llama-3-8B', object='chat.completion', system_fingerprint=None, usage=CompletionUsage(completion_tokens=34, prompt_tokens=368, total_tokens=402))

@frankabc12341

@K-Mistele
Hi. My vllm version is 0.5.0 post1. But, when I run the command

python -m vllm.entrypoints.openai.api_server --model /home/asus/autodl-tmp/qwen/Qwen2-7B-Instruct --tool-use-prompt-template /home/asus/autodl-tmp/examples/chatml.jinja --enable-api-tools --enable-auto-tool-choice
it shows api_server.py: error: unrecognized arguments: --tool-use-prompt-template --enable-api-tools --enable-auto-tool-choice

In addition, I can't find openai_chat_completion_client_with_tools.py in the examples folder.

Can you give me some advice?

@K-Mistele
Contributor Author

@K-Mistele Hi. My vllm version is 0.5.0 post1. But, when I run the command

python -m vllm.entrypoints.openai.api_server --model /home/asus/autodl-tmp/qwen/Qwen2-7B-Instruct --tool-use-prompt-template /home/asus/autodl-tmp/examples/chatml.jinja --enable-api-tools --enable-auto-tool-choice it shows api_server.py: error: unrecognized arguments: --tool-use-prompt-template --enable-api-tools --enable-auto-tool-choice

In addition, I can't find openai_chat_completion_client_with_tools.py in the examples folder.

Can you give me some advice?

Please see my reply here. This is a draft pull request, which means it has not been merged into vLLM's codebase and is not ready to be merged yet; it is still a work in progress. As such, none of its additions or features are available in vLLM yet. If you are interested in testing or contributing to this pull request, please see the tool-use branch of the originating fork. However, please be aware that the capabilities discussed in this PR are incomplete and have not been robustly tested.

@aw632
Contributor

aw632 commented Jun 24, 2024

if the tool use token is called, pull out the tool call JSON from the rest of the response (should be array)

Given that guided decoding is not enabled for "auto" tool use, what is the error handling planned in case the LLM does not output valid JSON?

@K-Mistele
Contributor Author

if the tool use token is called, pull out the tool call JSON from the rest of the response (should be array)

Given that guided decoding is not enabled for "auto" tool use, what is the error handling planned in case the LLM does not output valid JSON?

Great question! Unfortunately, since each model that supports tool calling uses its own format for function calls (as opposed to tool choice with guided decoding, where we're forcing the LLM to call a specific tool in a specific format at decode time), the response format is up to the model and its trainer.

At this point, we are still exploring ways to handle extraction of tool calls from disparate formats to OpenAI-compatible calls in a way that isn't opinionated (we want to support multiple formats including Mistral, Hermes 2 Pro, Firefunction, etc). Until we have a good answer on this, we probably won't have a good answer for how to handle errors. There are a couple possible options once we have attempted to extract the model's tool call format into OpenAI's:

  • return the call to the client without validation, and allow the client to validate the tool call with libraries like Instructor, pydantic or Zod
  • return an HTTP error if the model generates an invalid tool call
  • automatically attempt to re-run the generation - perhaps only if the user specifies a specific CLI flag, since this could lead to performance issues or infinite retry loops depending on the model and other configurations like temperature, top_p and top_k

In the interim, we may try to solve this problem by "ignoring" it - detect if the model is generating a tool call, but avoid the destructuring/extraction issue by returning the tool call in the model's format to the client. Not exactly sure what this would look like, but it's a possibility.
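
As a concrete example of the first option (client-side validation), something like this is already possible today with pydantic (a sketch; the tool schema here is hypothetical):

# Sketch: client-side validation of a returned tool call's arguments using pydantic.
import json
from pydantic import BaseModel, ValidationError

class GetCurrentWeatherArgs(BaseModel):
    city: str
    state: str
    unit: str

def parse_weather_call(raw_arguments: str) -> GetCurrentWeatherArgs | None:
    try:
        return GetCurrentWeatherArgs.model_validate(json.loads(raw_arguments))
    except (json.JSONDecodeError, ValidationError):
        # The model produced invalid JSON or the wrong schema; the client decides
        # whether to retry the request or surface an error.
        return None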

@K-Mistele
Contributor Author

Copy/pasting from discord:

I'd really like to make progress on tool use, and right now, with Hugging Face adding support into transformers for passing tools into the chat template, that solves one of the main issues.

I think the blocker now is figuring out how to handle decoding "auto" choice tool calls from each model's specific format (Mistral vs. Hermes Pro vs. Firefunction) and returning that to the client, ESPECIALLY when streaming is requested.

Until there's a "canonical" way to decode model-specific tool calls into the OpenAI format, e.g. through transformers or "reverse templates" or something, it might be best to approach this like chat templates, which is to try & support it as well as we can where possible.

Here's what I mean by this:

  • Mistral's format is very close to OpenAI's format and is probably the closest in terms of implementation. If a tool is being called, the first token of the response indicates this, and then a JSON array is generated. I can use a partial JSON parser to stream tool arguments etc. as they are generated to the client, basically without any translation
  • Hermes 2 Pro's format is different and uses multiple XML tags <tool_call></tool_call> for each call with the JSON inside. I should also be able to handle some streaming here with XML extraction and regex.
  • Firefunction v2 uses a totally different format as well.

So each one needs a very different implementation. I propose creating a ToolCallParser abstract class that can be implemented for different models like mistral and hermes. If a user is using a tool-calling model, they can use a CLI flag to toggle which parser they want, if they want one at all. If not, a tool call would be treated like a regular chat completion, and they can handle it client-side.

This way, we can ship support for tool calls incrementally in the absence of a commonly-accepted "best practice" on how to do this. People can also add support for other models that are important to them in a minimally-invasive way. Then, once a better way to implement this more broadly is available, we can deprecate this approach

I'd really appreciate feedback on this before moving forward and would especially love to hear if @mgoin and @simon-mo would consider this an appropriate approach that would be likely to be approved.
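
To make the proposal concrete, here is a rough sketch of what such a ToolCallParser interface might look like (class and method names are illustrative only, not the final implementation):

# Sketch: a per-model abstract parser that converts model-specific tool call output
# into OpenAI-style tool calls, with a separate hook for streaming deltas.
from abc import ABC, abstractmethod
from typing import Optional

class ToolCallParser(ABC):

    @abstractmethod
    def extract_tool_calls(self, model_output: str) -> list[dict]:
        """Extract complete tool calls from a finished (non-streaming) response."""

    @abstractmethod
    def extract_tool_calls_streaming(self, delta_text: str) -> Optional[dict]:
        """Given the newest chunk of generated text, return a partial tool-call delta
        in OpenAI streaming format, or None if nothing new can be emitted yet."""

# Hypothetical concrete parsers, selected via a CLI flag (implementations omitted here):
class Hermes2ProToolCallParser(ToolCallParser):
    """Would parse <tool_call>...</tool_call> blocks containing JSON."""

class MistralToolCallParser(ToolCallParser):
    """Would parse the model's tool-call token followed by a JSON array of calls."""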

@simon-mo
Collaborator

@br3no would love to get your feedback and review for this PR!

@wybartel

wybartel commented Sep 4, 2024

Thank you all for your hard work! Especially you @K-Mistele! Great getting this into main!!!

@TimPietrusky

I was following along the whole time, thank you so much @K-Mistele ❤️

@meetzuber

How can I use it with llama 3.1 8b model?

@pbasov

pbasov commented Sep 6, 2024

@meetzuber you have to write a prompt template for l3.1 if I understand the implementation correctly.

@gislerro

gislerro commented Sep 6, 2024

@meetzuber you have to write a prompt template for l3.1 if I understand the implementation correctly.

From my understanding you have to write a ToolParser as well.

I'm experimenting with a more general approach to tool parsing that generalizes over different models

Since OpenAI API-compatible tool calls only require two string fields, name and arguments, a regex for custom tool calling only needs to define two capture groups with those names. Such a regex can be validated with a "meta" regex like:

(?=.*?\(\?P<name>.*?\))(?=.*?\(\?P<arguments>.*?\))

The tool calling regex must then be provided with a CLI arg --tool-call-regex.

Then as an example for the Llama 3.1 model with JSON based tool calling your --tool-call-regex looks like:

{"name": "(?P<name>.*?)", "parameters": (?P<arguments>.*?)}

which would then extract the tool call: https://regex101.com/r/RhZ4zx/1

For a different custom tool calling format just provide another --tool-call-regex:

<function=(?P<name>.*?)>(?P<arguments>.*?)<\/function>

which also extracts tool call(s): https://regex101.com/r/9pM6IL/2
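
For what it's worth, a rough sketch of how that regex-driven extraction could work (assuming the hypothetical --tool-call-regex flag described above):

# Sketch: validate a user-supplied tool-call regex (must define "name" and "arguments"
# capture groups) and use it to pull OpenAI-style tool calls out of model output.
import re

META_PATTERN = re.compile(r"(?=.*?\(\?P<name>.*?\))(?=.*?\(\?P<arguments>.*?\))", re.DOTALL)

def extract_tool_calls(tool_call_regex: str, model_output: str) -> list[dict]:
    if not META_PATTERN.search(tool_call_regex):
        raise ValueError("--tool-call-regex must define 'name' and 'arguments' capture groups")
    pattern = re.compile(tool_call_regex, re.DOTALL)
    return [
        {"name": m.group("name"), "arguments": m.group("arguments")}
        for m in pattern.finditer(model_output)
    ]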

@K-Mistele
Contributor Author

How can I use it with llama 3.1 8b model?

I will be adding support in a separate PR

@K-Mistele
Contributor Author

@meetzuber you have to write a prompt template for l3.1 if I understand the implementation correctly.

From my understanding you have to write a ToolParser as well.

I'm experimenting with a more general approach to tool parsing that generalizes over different models

Since OpenAI API-compatible tool calls only require two string fields, name and arguments, a regex for custom tool calling only needs to define two capture groups with those names. Such a regex can be validated with a "meta" regex like:

(?=.*?\(\?P<name>.*?\))(?=.*?\(\?P<arguments>.*?\))

The tool calling regex must then be provided with a CLI arg --tool-call-regex.

Then as an example for the Llama 3.1 model with JSON based tool calling your --tool-call-regex looks like:

{"name": "(?P<name>.*?)", "parameters": (?P<arguments>.*?)}

which would then extract the tool call: https://regex101.com/r/RhZ4zx/1

For a different custom tool calling format just provide another --tool-call-regex:

<function=(?P<name>.*?)>(?P<arguments>.*?)<\/function>

which also extracts tool call(s): https://regex101.com/r/9pM6IL/2

Unfortunately this approach does not work with streaming. Supporting tools in streaming mode was a core requirement of this PR, since many user-facing applications use streaming mode by default, and there is no way to “turn off streaming” once the model starts generating a tool call.
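
For context on what OpenAI-compatible streaming means on the client side, tool calls arrive as incremental deltas, roughly like this (a client-side sketch, not code from this PR; the tool definition is abbreviated):

# Sketch: consuming streamed OpenAI-style tool-call deltas from the chat completions API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{"type": "function", "function": {
    "name": "get_current_weather",
    "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
}}]

stream = client.chat.completions.create(
    model="NousResearch/Hermes-2-Pro-Llama-3-8B",
    messages=[{"role": "user", "content": "What's the weather in Dallas?"}],
    tools=tools,
    tool_choice="auto",
    stream=True,
)

name, arguments = None, ""
for chunk in stream:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.tool_calls:
        call = delta.tool_calls[0]
        if call.function and call.function.name:
            name = call.function.name             # the function name usually arrives first
        if call.function and call.function.arguments:
            arguments += call.function.arguments  # argument JSON arrives incrementally
print(name, arguments)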

@pbasov

pbasov commented Sep 6, 2024

@K-Mistele Documentation would be much appreciated, so the community can be quick to add support for future models with tool support.

@meetzuber

Only the options below are mentioned for tool/function calling, and there are only two options for the tool call parser: mistral and hermes.
There is no --tool-call-regex option there.

--enable-auto-tool-choice – mandatory Auto tool choice. tells vLLM that you want to enable the model to generate its own tool calls when it deems appropriate.

--tool-call-parser – select the tool parser to use - currently either hermes or mistral. Additional tool parsers will continue to be added in the future.

--chat-template – optional for auto tool choice. the path to the chat template which handles tool-role messages and assistant-role messages that contain previously generated tool calls. Hermes and Mistral models have tool-compatible chat templates in their tokenizer_config.json files, but you can specify a custom template. This argument can be set to tool_use if your model has a tool use-specific chat template configured in the tokenizer_config.json. In this case, it will be used per the transformers specification. More on this here from HuggingFace; and you can find an example of this in a tokenizer_config.json here

https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#automatic-function-calling

@gislerro

gislerro commented Sep 6, 2024

There is no --tool-call-regex option there.

That's not implemented on the main branch

@meetzuber

Only the options below are mentioned for tool/function calling, and there are only two options for the tool call parser: mistral and hermes.
There is no --tool-call-regex option there.

--enable-auto-tool-choice – mandatory Auto tool choice. tells vLLM that you want to enable the model to generate its own tool calls when it deems appropriate.

--tool-call-parser – select the tool parser to use - currently either hermes or mistral. Additional tool parsers will continue to be added in the future.

--chat-template – optional for auto tool choice. the path to the chat template which handles tool-role messages and assistant-role messages that contain previously generated tool calls. Hermes and Mistral models have tool-compatible chat templates in their tokenizer_config.json files, but you can specify a custom template. This argument can be set to tool_use if your model has a tool use-specific chat template configured in the tokenizer_config.json. In this case, it will be used per the transformers specification. More on this here from HuggingFace; and you can find an example of this in a tokenizer_config.json here

https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html#automatic-function-calling

That's not implemented on the main branch

These are implemented in the v0.6.0 release.
https://github.com/vllm-project/vllm/releases/tag/v0.6.0

@vipulgote1999

@K-Mistele can you mention which pull request is for Llama 3.1 tool support? I think most people have been waiting for it.

@JackYangzg

@K-Mistele can you mention which pull request is for Qwen2 tool support? I think most people have been waiting for it.

@K-Mistele
Contributor Author

@K-Mistele can you mention which pull request is for Llama 3.1 tool support? I think most people have been waiting for it.

There is not a pull request for it yet because it's still a work-in-progress and not ready for review. I can create a draft though.

@K-Mistele
Contributor Author

@K-Mistele can you mention which pull request is for Qwen2 tool support? I think most people have been waiting for it.

There is not a PR for this, and at this point I hadn't planned on one. If this is something you're interested in, please feel free to create an issue for a feature request and tag me in it. If it gets enough interest, I'll add it.

opus24 added a commit to Hyper-Accel/vllm that referenced this pull request Sep 10, 2024
Jeffwan pushed a commit to aibrix/vllm that referenced this pull request Sep 19, 2024
siddharth9820 pushed a commit to axonn-ai/vllm that referenced this pull request Sep 30, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
KuntaiDu pushed a commit to KuntaiDu/vllm that referenced this pull request Nov 20, 2024