[Bugfix][SpecDecode] kv corruption with bonus tokens in spec decode #9730
Conversation
@@ -237,6 +239,13 @@ def _append_new_tokens(
    token_id = seq_output.output_token
    token_logprob = seq_output.logprobs[token_id]
Changing token_logprob here is challenging because the log probability of the bonus token is required. I'm unsure whether this change is actually necessary at this point, because I think filtered_model_output will be returned at the end.
Will we be using this logprob at all, since we will be filtering out these sequences in the final output? In that case, we could add a note saying that this is a fake/unused logprob.
Yeah, I think the logprobs of the sequence without the bonus token will likely not be used. I will check this precisely and add a note about it.
I added experiment data with llama3 70B/8B.
Thanks for the fix and the results! Left a couple of comments. PTAL
@@ -97,7 +97,8 @@ def sampler_output(
    model_output = model_output[0]

    self._append_new_tokens(
        model_output, expanded_request.seq_group_metadata_list)
Currently, as you mentioned in the PR description, this does not take care of the TP1DraftModelRunner case. I was wondering if we should do the first run with the expanded request and the subsequent ones with the original request; that way both cases would be taken care of. However, in TP1DraftModelRunner we would then no longer have the multi-step run for all K steps (it would be 1 step with the expanded request and K-1 multi-step).
cc: @LiuXiaoxuanPKU
Thanks for the fix! I feel it's a tricky but important bug, and it's hard to detect. Really appreciate the fix! To add support for TP1DraftModelRunner, some potential solutions: (1) might hurt performance, but is cleaner and more general.
If we were to go down the second path (2), won't we need to pass the indices of the original rows to the advance_step function and use them in the advance_step kernel to update the output after every step? If so, this might be more work than option (1), though it would definitely be more performant. I feel option (2) is better because spec decode is very performance sensitive, but I think it would make the implementation more complicated; if we are OK with that, I would say option (2). What do you guys think? @llsj14 please let us know if you need help with this.
Thank you for the feedback and solutions. Solution (2) sounds good if it improves performance. I'll review it first.
Yeah (2) sounds good to me as well.
Why do we need to modify the advance_step function and use it in the advance_step kernel? Can we just modify the input to each draft runner step? I just pushed the minimal change; it's very similar to the current modification. Let me know your thoughts.
Sorry for the confusion. Yeah, I think this will work, and we don't need any changes in the advance_step kernel.
So, the idea is to deliver the indices of bonus tokens to the …
@LiuXiaoxuanPKU …
Yeah, please, if you get time. Really appreciate it!
Thanks for the PR. Left some comments. Overall LGTM. Wondering if we should run the benchmarks with and without this PR, considering the cpu->gpu sync concern raised by @LiuXiaoxuanPKU in the draft_model_runner changes?
bonus_seq_idx = self.indices_of_seq_with_bonus_tokens[
    count]
if i != bonus_seq_idx:
    # The following might cause a cpu->gpu sync
In this case, aren't both tensors on the GPU? Will this still cause a sync?
Yeah, I benchmarked this with 160m + 7B on an H100; the performance difference is negligible. Before and after this PR, the proposal times are almost the same.
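For intuition on the sync question, a minimal PyTorch sketch (illustrative values only, not the vLLM code): row substitution with GPU-resident index tensors stays on-device, while reading a Python scalar out of a GPU tensor is what forces a device->host sync.

import torch

# Illustrative only: overwrite the expanded (no-bonus) rows with the
# tokens of their bonus-token counterparts, entirely on the GPU.
if torch.cuda.is_available():
    sampled = torch.tensor([11, 22, 99, 88], device="cuda")
    copy_rows = torch.tensor([2, 3], device="cuda")   # rows without bonus tokens
    bonus_rows = torch.tensor([0, 1], device="cuda")  # their counterparts
    sampled[copy_rows] = sampled[bonus_rows]  # no host transfer required
    _ = sampled[0].item()  # this, by contrast, does force a device->host sync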
nit: Maybe update the comment to say that the benchmark shows comparable performance?
@@ -303,6 +303,7 @@ def test_multi_step_with_batch_expansion_correct_output():
        seed,
        model_runner_cls=TP1DraftModelRunner,
    )
    multi_step_worker.set_include_gpu_probs_tensor()
why is this needed?
As we changed the logic of the draft model runner, it requires that the sampler output is not None here.
prompts,
num_gpu_blocks,
block_size,
continuations=multi_step_continuations,
nit: Would this be the same as continuations=single_step_continuations? If so, could we do that here and below for readability?
multi_step_continuations has one more token for each request. For example, if multi_step_continuations is [[1, 2], [3, 4]], the second token of each entry is the bonus token. single_step_continuations has only one token per request; for the given example, it's [[1], [3]].
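A tiny runnable illustration of that relationship (hypothetical token values, mirroring the example above):

# The bonus token is the second entry per request; single-step
# continuations keep only the first token.
multi_step_continuations = [[1, 2], [3, 4]]
single_step_continuations = [[tokens[0]] for tokens in multi_step_continuations]
assert single_step_continuations == [[1], [3]]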
block_size,
num_gpu_blocks,
seed,
model_runner_cls=TP1DraftModelRunner,
If we don't set model_runner_cls, will this test the non-TP1DraftModelRunner case? If so, we could parameterize the test to run with model_runner_cls both set and unset, and test both changes.
Yeah, actually, I found that on the CI machine, since the backend is XFORMERS instead of FLASH_ATTN, it goes to the else branch here by default, because self.model_runner.supports_gpu_multi_step(expanded_request) is False (only FLASH_ATTN supports GPU multi-step). I guess we can specify the backend explicitly in the tests so that it hits both branches.
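A sketch of how the test could pin the backend to hit both branches (assuming the VLLM_ATTENTION_BACKEND environment variable is used for backend selection; the test body is a placeholder, not the actual test code):

import os
import pytest

@pytest.mark.parametrize("attn_backend", ["FLASH_ATTN", "XFORMERS"])
def test_multi_step_with_batch_expansion(attn_backend, monkeypatch):
    # FLASH_ATTN exercises the GPU multi-step branch; XFORMERS falls back
    # to the batch-expansion (else) branch.
    monkeypatch.setenv("VLLM_ATTENTION_BACKEND", attn_backend)
    assert os.environ["VLLM_ATTENTION_BACKEND"] == attn_backend
    # ...run the multi-step worker here and compare outputs...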
@llsj14 Could you help check the correctness/performance by running the experiments you showed originally in this PR (different Ks, acceptance rate, and decoding time)?
Thanks for the fix @llsj14 and @LiuXiaoxuanPKU . LGTM.
Sure, I’ll include the results in this PR!
I added the test results for the TP1DraftModelRunner case at the top. Similar to the TP>1 case, there was a drop in acceptance rate, and it was fixed after the commits were added (without performance degradation). I'm sorry I couldn't add the unit test for the TP>1 case or include comments about logprobs.
Summary
#5765 enabled the bonus token by arranging model outputs with bonus tokens in mind. However, it only functions correctly when K=1 (num_speculative_tokens).
When K > 1, each generation step produces a token ID without considering the bonus token. This can lead to a token that is not equal to the bonus token, which may corrupt the KV cache.
The inconsistency in the token IDs resulted in a sharp drop in the average acceptance rate. This drop is unexpected, because the average acceptance rate should have been similar to that when K=1.
I modified it to replace a token generated by a sequence without bonus tokens with a token generated by a sequence that has bonus tokens, as shown in the image below. ('B' stands for the bonus token in the image.)
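As a rough illustration of that substitution (plain Python with a hypothetical copy_to_bonus_row mapping, not vLLM's actual data structures):

def substitute_non_bonus_tokens(sampled_token_ids, copy_to_bonus_row):
    # For each expanded copy (a sequence re-run without its bonus token),
    # reuse the token sampled for its bonus-token counterpart, so both
    # rows append identical tokens and the copy's draft KV cache is not
    # corrupted by a divergent sample.
    fixed = list(sampled_token_ids)
    for copy_row, bonus_row in copy_to_bonus_row.items():
        fixed[copy_row] = fixed[bonus_row]
    return fixed

# Rows 0-1: sequences with bonus tokens; rows 2-3: their copies.
print(substitute_non_bonus_tokens([11, 22, 99, 88], {2: 0, 3: 1}))
# -> [11, 22, 11, 22]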
Experiment
Configurations compared: as-is, force disable_bonus_tokens, and to-be (with this PR applied). (Result tables/images not reproduced here.)
Experiment 2 (TP1 case)
Configurations compared: as-is, force disable_bonus_tokens, and to-be (with this PR applied). (Result tables/images not reproduced here.)
Help needed
Reference
#4212: the issue proposing to enable the bonus token.
#5765: the PR that implemented enabling the bonus token.
#8701: the PR that removed the logic to disable the bonus token as part of a refactor.