
[Bug]: Structured outputs inference often takes a very long time, eventually causing a timeout and crashing the vLLM engine #10081

hpx502766238 opened this issue Nov 6, 2024 · 10 comments

Comments

@hpx502766238

Your current environment

The output of `python collect_env.py`
Collecting environment information...
WARNING 11-06 22:45:40 cuda.py:22] You are using a deprecated `pynvml` package. Please install `nvidia-ml-py` instead, and make sure to uninstall `pynvml`. When both of them are installed, `pynvml` will take precedence and cause errors. See https://pypi.org/project/pynvml for more information.
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35

Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: Tesla V100-PCIE-32GB
GPU 1: Tesla V100-PCIE-32GB

Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        46 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               16
On-line CPU(s) list:                  0-15
Vendor ID:                            GenuineIntel
Model name:                           Intel Xeon Processor (Skylake, IBRS)
CPU family:                           6
Model:                                85
Thread(s) per core:                   2
Core(s) per socket:                   1
Socket(s):                            8
Stepping:                             4
BogoMIPS:                             4589.20
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke md_clear
Hypervisor vendor:                    KVM
Virtualization type:                  full
L1d cache:                            512 KiB (16 instances)
L1i cache:                            512 KiB (16 instances)
L2 cache:                             32 MiB (8 instances)
L3 cache:                             128 MiB (8 instances)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-15
Vulnerability Gather data sampling:   Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit:          KVM: Mitigation: VMX unsupported
Vulnerability L1tf:                   Mitigation; PTE Inversion
Vulnerability Mds:                    Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown:               Mitigation; PTI
Vulnerability Mmio stale data:        Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Mitigation; IBRS
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Mitigation; Clear CPU buffers; SMT Host state unknown

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pynvml==11.5.3
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.45.2
[pip3] triton==3.0.0
[conda] No relevant packages
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.1.dev238+ge2c6e0a82
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0	GPU1	CPU Affinity	NUMA Affinity	GPU NUMA ID
GPU0	 X 	PHB	0-15	0		N/A
GPU1	PHB	 X 	0-15	0		N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Model Input Dumps

No response

🐛 Describe the bug

Structured output inference can take a very long time, even with just a single request, ultimately leading to timeouts or crashes. During inference, GPU KV cache usage gradually increases to 100%, while the average generation throughput drops from 30 tokens/s to 20 tokens/s, eventually causing a timeout and necessitating the use of the CPU KV cache. Even after one hour, there is no response to the structured output request sent earlier. Subsequently, I sent additional requests, including both normal and structured ones; the normal requests were responded to, albeit slowly, while the structured requests received no response. Over several more hours, an increasing number of new requests became pending and sequences were swapped, which eventually led to the vLLM engine crashing.

However, everything operates smoothly when only normal chat completion requests are sent, achieving an average generation throughput of 100 tokens/s or higher on dual Tesla V100 GPUs. The test model used was Qwen2-32B-GPTQ-Int8.
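For reference, the kind of request involved looks roughly like the following. This is a minimal sketch against the OpenAI-compatible server using the `guided_json` extra parameter; the base URL, API key, and schema are placeholders, not taken from the original report.

```python
# Minimal sketch of a structured-output (guided JSON) request to a vLLM
# OpenAI-compatible server. Endpoint, API key, and schema are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# A deliberately simple schema; the slowdown reported above shows up with
# larger, more complex schemas.
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["city", "population"],
}

response = client.chat.completions.create(
    model="Qwen2-32B-GPTQ-Int8",  # placeholder: whatever model the server serves
    messages=[{"role": "user", "content": "Give me a city and its population as JSON."}],
    extra_body={"guided_json": schema},  # vLLM-specific extra parameter
)
print(response.choices[0].message.content)
```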

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@hpx502766238 added the bug label Nov 6, 2024
@hpx502766238
Author

hpx502766238 commented Nov 6, 2024

Structured output seems to function correctly only with simple structures (fewer than 3 fields). With 4 or more fields, there is a high likelihood of a crash like the one described above. I tested one task: it worked fine when I split it into multiple smaller tasks; otherwise, the task would crash.
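To make the "4 or more fields" case concrete, a guided-JSON schema of the shape that reportedly triggers the slowdown might look like this; the field names are hypothetical, only the field count matters:

```python
# Hypothetical example of a guided-JSON schema with 4+ fields, the size at
# which the original poster reports structured output starting to hang.
schema_with_many_fields = {
    "type": "object",
    "properties": {
        "title":    {"type": "string"},
        "summary":  {"type": "string"},
        "category": {"type": "string"},
        "score":    {"type": "number"},
        "keywords": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "summary", "category", "score", "keywords"],
}
```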

@DarkLight1337
Member

May be related to #9032

@wolfgangsmdt

@hpx502766238 @DarkLight1337
Hello,

Does anyone have an idea about this bug? Thanks!

@hpx502766238
How do you calculate the tokens/second? Thanks!

@hpx502766238
Author

@hpx502766238 @DarkLight1337 Hello,

Anyone have an idea about this BUG Thanks!

@hpx502766238 How do you calculate the tokens/second? Thanks!

The tokens/second (Avg generation throughput) is shown in the terminal by vLLM's periodic metrics log line, e.g.: Avg prompt throughput: 38.8 tokens/s, Avg generation throughput: 30.4 tokens/s, Running: 11 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 34.7%, CPU KV cache usage: 0.0%.
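If you want a rough client-side estimate instead of reading the server log, you can divide the completion token count reported in the response by the wall-clock time of the request. A minimal sketch (endpoint and model name are placeholders); note this includes prompt-processing time, so it will read a bit lower than the server's generation-throughput metric.

```python
# Rough client-side tokens/second estimate: completion tokens divided by
# wall-clock request time. Endpoint and model name are placeholders.
import time
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

start = time.perf_counter()
response = client.chat.completions.create(
    model="Qwen2-32B-GPTQ-Int8",  # placeholder
    messages=[{"role": "user", "content": "Write a short paragraph about GPUs."}],
)
elapsed = time.perf_counter() - start

generated = response.usage.completion_tokens
print(f"{generated} tokens in {elapsed:.2f}s -> {generated / elapsed:.1f} tokens/s")
```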

@gpgn

gpgn commented Dec 9, 2024

We are seeing something similar with unsloth/Llama-3.1-Nemotron-70B-Instruct-bnb-4bit on v0.6.4.post1, only with complex guided json structures (other queries work fine):

--model
unsloth/Llama-3.1-Nemotron-70B-Instruct-bnb-4bit
--max-model-len
8192
--load-format
bitsandbytes
--quantization
bitsandbytes
--enable-prefix-caching
--dtype
float16
--disable-frontend-multiprocessing
--guided-decoding-backend
outlines
[... (request comes in)]

INFO 12-09 05:27:39 metrics.py:465] Prefix cache hit rate: GPU: 0.00%, CPU: 0.00%
INFO 12-09 05:27:49 metrics.py:449] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO 12-09 05:27:49 metrics.py:465] Prefix cache hit rate: GPU: 0.00%, CPU: 0.00%
INFO 12-09 05:27:59 metrics.py:449] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO 12-09 05:27:59 metrics.py:465] Prefix cache hit rate: GPU: 0.00%, CPU: 0.00%
INFO 12-09 05:28:09 metrics.py:449] Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 0 reqs, Swapped: 0 reqs, Pending: 0 reqs, GPU KV cache usage: 0.0%, CPU KV cache usage: 0.0%.
INFO 12-09 05:28:09 metrics.py:465] Prefix cache hit rate: GPU: 0.00%, CPU: 0.00%
ERROR 12-09 05:28:10 async_llm_engine.py:886] Engine iteration timed out. This should never happen!
ERROR 12-09 05:28:10 async_llm_engine.py:65] Engine background task failed
ERROR 12-09 05:28:10 async_llm_engine.py:65] Traceback (most recent call last):
ERROR 12-09 05:28:10 async_llm_engine.py:65]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 866, in run_engine_loop
ERROR 12-09 05:28:10 async_llm_engine.py:65]     done, _ = await asyncio.wait(
ERROR 12-09 05:28:10 async_llm_engine.py:65]               ^^^^^^^^^^^^^^^^^^^
ERROR 12-09 05:28:10 async_llm_engine.py:65]   File "/usr/lib/python3.12/asyncio/tasks.py", line 464, in wait
ERROR 12-09 05:28:10 async_llm_engine.py:65]     return await _wait(fs, timeout, return_when, loop)
ERROR 12-09 05:28:10 async_llm_engine.py:65]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-09 05:28:10 async_llm_engine.py:65]   File "/usr/lib/python3.12/asyncio/tasks.py", line 550, in _wait
ERROR 12-09 05:28:10 async_llm_engine.py:65]     await waiter
ERROR 12-09 05:28:10 async_llm_engine.py:65] asyncio.exceptions.CancelledError
ERROR 12-09 05:28:10 async_llm_engine.py:65]
ERROR 12-09 05:28:10 async_llm_engine.py:65] The above exception was the direct cause of the following exception:
ERROR 12-09 05:28:10 async_llm_engine.py:65]
ERROR 12-09 05:28:10 async_llm_engine.py:65] Traceback (most recent call last):
ERROR 12-09 05:28:10 async_llm_engine.py:65]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 55, in _log_task_completion
ERROR 12-09 05:28:10 async_llm_engine.py:65]     return_value = task.result()
ERROR 12-09 05:28:10 async_llm_engine.py:65]                    ^^^^^^^^^^^^^
ERROR 12-09 05:28:10 async_llm_engine.py:65]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 865, in run_engine_loop
ERROR 12-09 05:28:10 async_llm_engine.py:65]     async with asyncio_timeout(ENGINE_ITERATION_TIMEOUT_S):
ERROR 12-09 05:28:10 async_llm_engine.py:65]                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-09 05:28:10 async_llm_engine.py:65]   File "/usr/lib/python3.12/asyncio/timeouts.py", line 115, in __aexit__
ERROR 12-09 05:28:10 async_llm_engine.py:65]     raise TimeoutError from exc_val
ERROR 12-09 05:28:10 async_llm_engine.py:65] TimeoutError
Exception in callback functools.partial(<function _log_task_completion at 0x78faf7c59300>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x78facaeae180>>)
handle: <Handle functools.partial(<function _log_task_completion at 0x78faf7c59300>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x78facaeae180>>)>
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 866, in run_engine_loop
    done, _ = await asyncio.wait(
              ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/tasks.py", line 464, in wait
    return await _wait(fs, timeout, return_when, loop)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/tasks.py", line 550, in _wait
    await waiter
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 55, in _log_task_completion
    return_value = task.result()
                   ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 865, in run_engine_loop
    async with asyncio_timeout(ENGINE_ITERATION_TIMEOUT_S):
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/timeouts.py", line 115, in __aexit__
    raise TimeoutError from exc_val
TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "uvloop/cbhandles.pyx", line 63, in uvloop.loop.Handle._run
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 67, in _log_task_completion
    raise AsyncEngineDeadError(
vllm.engine.async_llm_engine.AsyncEngineDeadError: Task finished unexpectedly. This should never happen! Please open an issue on Github. See stack trace above for the actual cause.
INFO:     172.20.1.3:35712 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/async_llm_engine.py", line 866, in run_engine_loop
    done, _ = await asyncio.wait(
              ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/tasks.py", line 464, in wait
    return await _wait(fs, timeout, return_when, loop)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/tasks.py", line 550, in _wait
    await waiter
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:
[...]

Do you think it's related to #9032 @joerunde?

@joerunde
Contributor

Do you think it's related to #9032 @joerunde?

@gpgn Maybe? But I never got a chance to dig deeper there, so 🤷 . The xgrammar backend for guided decoding was recently added, can you try that out and see if it works better for your use case?
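For anyone trying this: the backend can be switched server-wide by replacing `--guided-decoding-backend outlines` with `xgrammar` in the launch flags shown above, or, in recent vLLM versions, per request. The snippet below is a sketch that assumes the per-request `guided_decoding_backend` extra parameter is supported by your release; verify against your version before relying on it.

```python
# Sketch: requesting the xgrammar guided-decoding backend for a single request.
# Assumes the per-request `guided_decoding_backend` override exists in your
# vLLM version; otherwise set --guided-decoding-backend xgrammar at launch.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

schema = {
    "type": "object",
    "properties": {"answer": {"type": "string"}},
    "required": ["answer"],
}

response = client.chat.completions.create(
    model="unsloth/Llama-3.1-Nemotron-70B-Instruct-bnb-4bit",  # as in the report above
    messages=[{"role": "user", "content": "Answer in JSON."}],
    extra_body={
        "guided_json": schema,
        "guided_decoding_backend": "xgrammar",  # assumed per-request override
    },
)
print(response.choices[0].message.content)
```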

@abuchnick-aiola

Any updates on this? This is blocking us from using vLLM!

@DarkLight1337
Member

You can try using guided decoding backends other than outlines. cc @mgoin

@gpgn

gpgn commented Dec 23, 2024

Do you think it's related to #9032 @joerunde?

@gpgn Maybe? But I never got a chance to dig deeper there, so 🤷 . The xgrammar backend for guided decoding was recently added, can you try that out and see if it works better for your use case?

Worked for us at first glance, and it was faster too. We will evaluate more thoroughly early next year. Thanks a lot!

@mgoin
Member

mgoin commented Dec 23, 2024

Please try using the latest vllm==0.6.5! We have greatly improved structured output performance. I covered this in the office hours last week:
https://docs.google.com/presentation/d/1Z78ljqPIg7_KZ7ZAqKO4VDjKG-ytbkbZ/edit#slide=id.g319810a565e_0_444

(Screenshot attached.)

Hopefully this issue can be closed now, but please let me know if you still have issues with this feature!
