
[Bug]: Attempting to profile VLLM with TPU errors #9783

Open
manninglucas opened this issue Oct 29, 2024 · 3 comments
Labels
bug Something isn't working

Comments

@manninglucas

Your current environment

The output of `python collect_env.py`
Collecting environment information...
WARNING:root:libtpu.so and TPU device found. Setting PJRT_DEVICE=TPU.
INFO 10-29 01:26:46 importing.py:15] Triton not installed or not compatible; certain GPU-related functions will not be available.
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.31

Python version: 3.10.14 (main, Aug 13 2024, 02:16:06) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-5.19.0-1022-gcp-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   48 bits physical, 48 bits virtual
CPU(s):                          24
On-line CPU(s) list:             0-23
Thread(s) per core:              2
Core(s) per socket:              12
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       AuthenticAMD
CPU family:                      25
Model:                           1
Model name:                      AMD EPYC 7B13
Stepping:                        0
CPU MHz:                         2449.998
BogoMIPS:                        4899.99
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       384 KiB
L1i cache:                       384 KiB
L2 cache:                        6 MiB
L3 cache:                        32 MiB
NUMA node0 CPU(s):               0-23
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr arat npt nrip_save umip vaes vpclmulqdq rdpid fsrm

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pyzmq==26.2.0
[pip3] torch==2.5.0
[pip3] torch-xla==2.5.0+git17a4ef5
[pip3] torchvision==0.19.0a0+d23a6e1
[pip3] transformers==4.46.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.3.post2.dev132+g76ed5340
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

Model Input Dumps

No response

🐛 Describe the bug

I am running a Docker container to test out profiling, and `vllm serve` errors when asked for a profile. If profiling isn't supported on TPU yet, the endpoint should return a 404 instead of crashing the engine (but I would love to see TPU profiling get some traction!).

Error stack trace:

ERROR 10-29 01:21:42 engine.py:165] AttributeError("'TPUExecutor' object has no attribute '_run_workers'")
ERROR 10-29 01:21:42 engine.py:165] Traceback (most recent call last):
ERROR 10-29 01:21:42 engine.py:165]   File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 163, in start
ERROR 10-29 01:21:42 engine.py:165]     self.run_engine_loop()
ERROR 10-29 01:21:42 engine.py:165]   File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 223, in run_engine_loop
ERROR 10-29 01:21:42 engine.py:165]     self.handle_new_input()
ERROR 10-29 01:21:42 engine.py:165]   File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 274, in handle_new_input
ERROR 10-29 01:21:42 engine.py:165]     raise e
ERROR 10-29 01:21:42 engine.py:165]   File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 264, in handle_new_input
ERROR 10-29 01:21:42 engine.py:165]     self.start_profile()
ERROR 10-29 01:21:42 engine.py:165]   File "/workspace/vllm/vllm/engine/multiprocessing/engine.py", line 379, in start_profile
ERROR 10-29 01:21:42 engine.py:165]     self.engine.model_executor._run_workers("start_profile")
ERROR 10-29 01:21:42 engine.py:165] AttributeError: 'TPUExecutor' object has no attribute '_run_workers'
ERROR 10-29 01:21:42 client.py:262] AttributeError("'TPUExecutor' object has no attribute '_run_workers'")
ERROR 10-29 01:21:42 client.py:262] Traceback (most recent call last):
ERROR 10-29 01:21:42 client.py:262]   File "/workspace/vllm/vllm/engine/multiprocessing/client.py", line 150, in run_heartbeat_loop
ERROR 10-29 01:21:42 client.py:262]     await self._check_success(
ERROR 10-29 01:21:42 client.py:262]   File "/workspace/vllm/vllm/engine/multiprocessing/client.py", line 326, in _check_success
ERROR 10-29 01:21:42 client.py:262]     raise response
ERROR 10-29 01:21:42 client.py:262] AttributeError: 'TPUExecutor' object has no attribute '_run_workers'
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 259, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 255, in wrap
    await func()
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 232, in listen_for_disconnect
    message = await receive()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 555, in receive
    await self.message_event.wait()
  File "/usr/local/lib/python3.10/asyncio/locks.py", line 214, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f3f7474a0b0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
    raise exc
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 74, in app
    await response(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 252, in __call__
    async with anyio.create_task_group() as task_group:
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 763, in __aexit__
    raise BaseExceptionGroup(
exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 259, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 255, in wrap
    await func()
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 232, in listen_for_disconnect
    message = await receive()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 555, in receive
    await self.message_event.wait()
  File "/usr/local/lib/python3.10/asyncio/locks.py", line 214, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f3f74749090

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
    raise exc
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 74, in app
    await response(scope, receive, send)
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 252, in __call__
    async with anyio.create_task_group() as task_group:
  File "/usr/local/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 763, in __aexit__
    raise BaseExceptionGroup(
exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
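
For context on where this comes from: the trace above shows the engine unconditionally calling `self.engine.model_executor._run_workers("start_profile")`, and `TPUExecutor` simply has no `_run_workers` attribute. Below is a minimal sketch of the kind of guard I would expect in `start_profile` (illustration only, not the actual vLLM code; the direct-method fallback is my assumption):

```python
# Hypothetical guard for start_profile (illustration, not the actual vLLM
# code): dispatch the profiling call only if the executor supports it, so
# unsupported backends surface a clean error instead of killing the engine loop.
def start_profile(self):
    executor = self.engine.model_executor
    if hasattr(executor, "_run_workers"):
        # Distributed executors broadcast the call to every worker.
        executor._run_workers("start_profile")
    elif hasattr(executor, "start_profile"):
        # Assumed fallback: a single-process executor might expose the
        # method directly.
        executor.start_profile()
    else:
        raise NotImplementedError(
            f"Profiling is not supported by {type(executor).__name__}")
```

The server could then map the `NotImplementedError` to a 404 (or 501) response rather than tearing down the engine loop.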

Steps to reproduce:

  1. $ docker build -f Dockerfile.tpu -t vllm-tpu .
  2. $ docker run --privileged --net host --shm-size=16G -it vllm-tpu
  3. $ VLLM_TORCH_PROFILER_DIR=./vllm_profile vllm serve "google/gemma-2b" --swap-space 4 --disable-log-requests --tensor_parallel_size=1 --max-model-len=1024
  4. In a separate shell, $ docker exec -it <container_name> /bin/bash
  5. $ wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json
  6. $ python benchmarks/benchmark_serving.py --backend vllm --model "google/gemma-2b" --dataset-name sharegpt --dataset-path /workspace/ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 2 --profile
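
For what it's worth, the `--profile` flag just asks the server to start/stop the torch profiler over HTTP, so the crash can also be reproduced without the benchmark script. A minimal sketch, assuming the server's `/start_profile` and `/stop_profile` routes and the default port 8000 (these may differ by vLLM version):

```python
# Minimal repro without benchmark_serving.py. The /start_profile route and
# port 8000 are assumptions based on my setup and may differ across versions.
import requests

BASE = "http://localhost:8000"  # assumed default `vllm serve` address

resp = requests.post(f"{BASE}/start_profile")
print(resp.status_code)  # on the TPU backend this triggers the AttributeError above
```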

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@bvrockwell (Contributor)

Hey @manninglucas, sorry for missing this. Same as the other bug, I didn't see it because I was filtering on a tag.

In the meantime, you can profile manually following https://cloud.google.com/tpu/docs/pytorch-xla-performance-profiling-tpu-vm#manual_capture and open the resulting xplane file in TensorBoard.
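
Roughly, manual capture looks like this (a minimal sketch, assuming `torch_xla` is available; the port, logdir, and duration are placeholders):

```python
# Minimal sketch of manual capture with the PyTorch/XLA profiler, following
# the linked Cloud TPU doc. Port 9012, the logdir, and the duration are
# placeholder choices, not requirements.
import torch_xla.debug.profiler as xp

# In the process running the TPU workload (e.g., the vLLM server process):
server = xp.start_server(9012)

# Then, from a separate process, capture a 10 s trace into a TensorBoard logdir:
xp.trace("localhost:9012", logdir="/tmp/tpu_profile", duration_ms=10_000)
```

Point TensorBoard (with the profile plugin installed) at the logdir to view the captured xplane file.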

@bvrockwell (Contributor)

Also, this should be going in soon: #11041

@manninglucas (Author)

Great, thank you!
