[Core] Support offloading KV cache to CPU #10874
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
@@ -362,7 +362,7 @@ def test_swap_blocks(
     block_mapping = list(zip(src_blocks, dst_blocks))
     block_mapping_tensor = torch.tensor(block_mapping,
                                         dtype=torch.int64,
-                                        device="cpu").view(-1, 2)
+                                        device=device).view(-1, 2)
Is this because this tensor needs to be accessed by the new CUDA memcpy kernel?
Yes. The new paged_copy kernel needs to access the block mapping from the GPU.
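For context, a minimal sketch of the pattern under discussion, with hypothetical src_blocks/dst_blocks values standing in for the test fixtures: the mapping tensor is built on the same device as the copy kernel so the kernel can read it directly.

```python
import torch

# Hypothetical stand-ins for the test fixtures.
src_blocks = [0, 2, 5]
dst_blocks = [1, 3, 4]
device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Build the (num_pairs, 2) mapping on the target device so the paged-copy
# kernel can dereference it without an extra host round trip.
block_mapping = list(zip(src_blocks, dst_blocks))
block_mapping_tensor = torch.tensor(block_mapping,
                                    dtype=torch.int64,
                                    device=device).view(-1, 2)
```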
@@ -508,3 +523,19 @@ def get_num_cached_tokens(self, seq: Sequence) -> int:
         cached in the block manager for the sequence.
         """
         return self._computed_blocks_tracker.get_num_cached_tokens(seq)
+
+    def get_and_reset_swaps(self,
It seems like this function does not get the real physical block ID from get_physical_block_id? Especially for the CPU PrefixCachingBlockAllocator, whose start ID is not zero.
If I understand correctly, this function should not return the physical block IDs, because get_physical_block_id will be called later in block_manager.swap_in() / block_manager.swap_out().
The call chain is: scheduler._swap_in() --> block_manager.swap_in() --> block_allocator.get_physical_block_id() (similar for swapping out).
(Let me know if my understanding is incorrect and I will fix it asap, thanks!)
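To make the "start ID is not zero" point concrete, here is a toy sketch (not the real allocator) of why allocator-local IDs have to pass through get_physical_block_id before they can be used as device-wide block IDs:

```python
class HypotheticalCpuAllocator:
    """Toy allocator whose block IDs start after the GPU allocator's range."""

    def __init__(self, num_gpu_blocks: int):
        self._offset = num_gpu_blocks

    def get_physical_block_id(self, allocator_local_id: int) -> int:
        # Allocator-local ID 0 maps to physical ID `num_gpu_blocks`, not 0.
        return self._offset + allocator_local_id


cpu_allocator = HypotheticalCpuAllocator(num_gpu_blocks=1024)
assert cpu_allocator.get_physical_block_id(0) == 1024
```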
# NOTE(Kuntai): extend the swapping list for CPU offloading
new_swap_out, new_swap_in = \
    self.block_manager.get_and_reset_swaps(time.time())
However, get_and_reset_swaps is called directly here without get_physical_block_id. I think these block IDs are later sent to the cache engine directly, so they are not the real physical block IDs.
Got it! I double-checked the logic and you are right. Just pushed another commit to fix the issue and update the docstring. Thanks for the catch!
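For concreteness, a hedged sketch of the direction this exchange points to: translating the allocator-local IDs returned by get_and_reset_swaps into physical block IDs before they reach the cache engine. Apart from get_and_reset_swaps and get_physical_block_id, every name here (including the allocator attributes) is a hypothetical stand-in, not the code in this PR.

```python
import time


def hypothetical_collect_offload_swaps(block_manager):
    """Return swap mappings in physical block IDs, ready for the cache engine."""
    raw_swap_out, raw_swap_in = block_manager.get_and_reset_swaps(time.time())

    def to_physical(mapping, src_alloc, dst_alloc):
        # Allocator-local IDs -> device-wide physical IDs.
        return [(src_alloc.get_physical_block_id(src),
                 dst_alloc.get_physical_block_id(dst))
                for src, dst in mapping]

    # GPU -> CPU offloads and CPU -> GPU loads use opposite allocator pairs.
    swap_out = to_physical(raw_swap_out,
                           block_manager.gpu_allocator,
                           block_manager.cpu_allocator)
    swap_in = to_physical(raw_swap_in,
                          block_manager.cpu_allocator,
                          block_manager.gpu_allocator)
    return swap_out, swap_in
```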
This pull request has merge conflicts that must be resolved before it can be merged.
uncached: allocated blocks that didn't hit any cache
cached: allocated blocks that are cached, either in GPU or in CPU
free: the blocks are not allocated by block allocator
This implementation aims to transform uncacherd blocks to cached blocks
uncacherd blocks -> uncached blocks?
Thanks for the catch! Fixed!
parser.add_argument(
    '--block-allocator',
    type=str,
    default='CpuGpuBlockAllocator',
    choices=['CpuGpuBlockAllocator', 'CpuOffloadingBlockAllocator'],
    help='The block allocator that vLLM uses. Currently'
    ' can be CpuGpuBlockAllocator (the default) and '
    'CpuOffloadingBlockAllocator (experimental) that '
    'supports offloading the KV cache to CPU . '
    'When using CpuOffloadingBlockAllocator, the '
    'preemption mode must be recompute.')
Can we avoid exposing the block allocator? Instead we should provide something like --kv-cache-offloading.
After a second thought, for the sake of modularity and extensibility, I think it's fine to expose a block allocator argument, but we should keep the default name concise and keep the possibility of accepting third-party allocators.
I recommend adding --allocator, which can take the several default values above and also a class qualname like mod.cls.
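A minimal sketch of what accepting a mod.cls qualname could look like; the flag name --allocator follows the suggestion above, and the resolver helper is hypothetical, not part of this PR:

```python
import argparse
import importlib

BUILTIN_ALLOCATORS = {'CpuGpuBlockAllocator', 'CpuOffloadingBlockAllocator'}


def resolve_allocator(name: str):
    """Accept a built-in allocator name or a third-party 'mod.cls' qualname."""
    if name in BUILTIN_ALLOCATORS:
        return name  # resolved to a class by the engine later
    module_name, _, class_name = name.rpartition('.')
    if not module_name:
        raise ValueError(f'Unknown allocator: {name!r}')
    return getattr(importlib.import_module(module_name), class_name)


parser = argparse.ArgumentParser()
parser.add_argument('--allocator', type=str, default='CpuGpuBlockAllocator')
args = parser.parse_args(['--allocator', 'CpuOffloadingBlockAllocator'])
allocator = resolve_allocator(args.allocator)
```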
How does this interface work with v1?
@@ -322,10 +322,10 @@ def prepare_worker_input(
         # `blocks_to_swap_in` and `blocks_to_swap_out` are cpu tensors.
         # they contain parameters to launch cudamemcpyasync.
         blocks_to_swap_in = torch.tensor(execute_model_req.blocks_to_swap_in,
-                                         device="cpu",
+                                         device="cuda",
This won't be compatible with other devices, I think?
As suggested by Kaichao, we will use a static pinned CPU memory buffer to host this array.
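A sketch of the static pinned-buffer idea mentioned above, assuming a fixed upper bound on the number of swap pairs per step; all names here are hypothetical and this is not the code added by the PR:

```python
import torch

MAX_SWAP_PAIRS = 512  # assumed upper bound on (src, dst) pairs per step


class HypotheticalSwapBuffer:
    """Static pinned host buffer that stages swap mappings for the GPU.

    Reusing one pinned allocation avoids building a fresh device tensor
    every step and keeps the host-to-device copy asynchronous.
    """

    def __init__(self, device: torch.device):
        pin = torch.cuda.is_available() and device.type == "cuda"
        self.pinned = torch.empty((MAX_SWAP_PAIRS, 2),
                                  dtype=torch.int64, pin_memory=pin)
        self.device_buf = torch.empty_like(self.pinned, device=device)

    def stage(self, pairs: list) -> torch.Tensor:
        n = len(pairs)
        self.pinned[:n] = torch.tensor(pairs, dtype=torch.int64)
        # Async copy from pinned host memory into the static device buffer.
        self.device_buf[:n].copy_(self.pinned[:n], non_blocking=True)
        return self.device_buf[:n]


device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
swap_buffer = HypotheticalSwapBuffer(device)
blocks_to_swap_in = swap_buffer.stage([(3, 11), (4, 12)])
```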
Can we separate the benchmark scripts into another PR to reduce the size of this one?
Yeah sure!
This pull request has merge conflicts that must be resolved before it can be merged.
    return static_cast<T*>(tensor.data_ptr());
  } else if (device.is_cpu() && tensor.is_pinned()) {
    T* ptr;
    cudaHostGetDevicePointer((void**)&ptr, static_cast<T*>(tensor.data_ptr()),
Save this pointer at the creation of the CPU memory allocation -- making it CUDA graph compatible.
Potentially in a new PR.
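The snippet above is CUDA C++; as a loose Python-level analogue of the "save it at creation" suggestion, one could record the pinned allocation and its raw pointer once, at buffer-creation time, so later launches reuse a stable address (all names here are hypothetical):

```python
import torch

# Hypothetical registry filled once when the pinned CPU KV buffers are
# created; a stable address is what lets captured CUDA graphs keep
# referring to the same buffer across replays.
_PINNED_BUFFERS: dict[str, torch.Tensor] = {}
_PINNED_PTRS: dict[str, int] = {}


def register_pinned_buffer(name: str, tensor: torch.Tensor) -> None:
    assert tensor.device.type == "cpu"
    _PINNED_BUFFERS[name] = tensor          # keep the tensor alive
    _PINNED_PTRS[name] = tensor.data_ptr()  # record the pointer once


cpu_kv = torch.empty(1 << 20, dtype=torch.uint8,
                     pin_memory=torch.cuda.is_available())
register_pinned_buffer("cpu_kv_cache", cpu_kv)
```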
An implementation for CPU KV cache offloading (#7697)
TL;DR: CPU offloading is better than prefix caching in our benchmark; we also found that the evictor can be optimized to save 10-30% of the runtime.
This PR fixes the DCO issue in Kuntai's original CPU offloading PR #9682. It also contains new CUDA kernels to improve the KV cache offloading performance.
End-to-end benchmarking results:
A long document QA workload (see benchmarks/benchmark_long_document_qa.py) running on an A100-40G-SXM GPU. The GPU can cache 8 documents and the CPU can cache 30 documents. (Figure and its underlying data omitted.)
New kernel implementation microbenchmark
The numbers were collected on A100-40GB-SXM GPUs.
The new kernel achieves 4x better throughput than the old swap_block implementation. It also does not reduce performance when the number of pages is small.
Potential improvement:
Currently, swap_block is invoked once per layer. If we could aggregate the copies for all layers into one kernel, the throughput of copying a single page would also exceed 10 GB/s; a sketch of the aggregation idea follows.
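A hedged sketch of what the host-side aggregation might look like; the kernel that would consume the combined (layer, src, dst) mapping is not part of this PR, and all sizes here are made up:

```python
import torch

num_layers, pairs_per_layer = 32, 8
device = "cuda" if torch.cuda.is_available() else "cpu"

# Per-layer (src, dst) block mappings, as swap_block receives them today.
per_layer = [torch.randint(0, 1024, (pairs_per_layer, 2),
                           dtype=torch.int64, device=device)
             for _ in range(num_layers)]

# Aggregate into a single (layer, src, dst) mapping so one kernel launch
# could service every layer's copies instead of num_layers launches.
layer_ids = torch.arange(num_layers, dtype=torch.int64,
                         device=device).repeat_interleave(pairs_per_layer)
combined = torch.cat([layer_ids.unsqueeze(1),
                      torch.cat(per_layer, dim=0)], dim=1)  # shape (N, 3)
```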
Implementation
This PR has far fewer features than #8694, but it is truly minimal and makes very few core changes. So I suggest we use this PR to enable CPU KV cache offloading first, and then focus on disk.
The key idea of this implementation is to track the allocated blocks that didn't hit the cache and constantly copy them to CPU after each scheduler step. (Flow diagram omitted.)
This idea is borrowed from ConServe (paper link: https://arxiv.org/abs/2410.01228), based on the assumption that CPU-GPU bandwidth is much higher than the GPU's KV cache generation throughput. Thanks to Yifan for this idea.
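As a rough illustration of that per-step flow (not the actual vLLM code), here is a hypothetical sketch; apart from get_and_reset_swaps, every name is a placeholder:

```python
import time


def hypothetical_engine_step(scheduler, cache_engine, model_runner):
    """One scheduler step with CPU offloading, as described above."""
    # 1. Normal scheduling: decide which sequences run this step.
    scheduler_output = scheduler.schedule()

    # 2. Collect the GPU blocks that were allocated but missed the cache
    #    (to offload), plus any cached blocks to bring back.
    swap_out, swap_in = scheduler.block_manager.get_and_reset_swaps(time.time())

    # 3. Kick off asynchronous GPU->CPU and CPU->GPU copies; the assumption
    #    is that CPU-GPU bandwidth outpaces KV cache generation throughput.
    cache_engine.swap_out(swap_out)
    cache_engine.swap_in(swap_in)

    # 4. Run the model for this step while the copies are in flight.
    return model_runner.execute_model(scheduler_output)
```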