[Usage]: Persistent Errors with vllm serve on Neuron Device: Model architectures ['LlamaForCausalLM'] failed to be inspected. #10932

Closed
xiao11lam opened this issue Dec 5, 2024 · 7 comments · Fixed by #11016
Labels: usage (How to use vllm)

Comments

@xiao11lam

Your current environment

Hello vLLM Development Team,
I am encountering persistent issues when trying to run the vllm serve command for the meta-llama/Llama-3.2-1B model on an AWS EC2 inf2 instance with the Neuron AMI. Despite following all the recommended installation and upgrade steps, and adjusting the numpy versions as per the guidelines, the issue persists.

I have already referred to the related issues I could find, such as:

#9624
#9713

Here is how I installed vLLM, following the installation guide:

[screenshot: installation commands from the Neuron setup guide]
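
For completeness, this is the kind of sanity check I can run inside the venv to confirm what actually got installed (my own snippet, not part of the issue template; the versions in the comments match the environment dump below):

    # Sanity check of the installed stack, run inside aws_neuron_venv_pytorch
    import numpy
    import torch
    import vllm

    print(numpy.__version__)   # 1.25.2 in this environment
    print(torch.__version__)   # 2.1.2
    print(vllm.__version__)    # 0.6.4.post2.dev246+g9743d64e.neuron215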

I have already tried reinstalling and upgrading vLLM per the instructions above many times, and also tried pinning different numpy versions. The problem still occurs when I run: vllm serve meta-llama/Llama-3.2-1B --device neuron --tensor-parallel-size 2 --block-size 8 --max-model-len 4096 --max-num-seqs 32

It consistently fails with this error:
ValueError: Model architectures ['LlamaForCausalLM'] failed to be inspected. Please check the logs for more details.
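
A minimal way to hit the same failure outside of vllm serve (my own sketch; the call chain mirrors vllm/config.py as shown in the traceback below, it is not something from the original report):

    # Minimal reproduction sketch (hypothetical)
    from vllm.model_executor.models.registry import ModelRegistry

    # In this environment the registry's subprocess inspection of the model
    # class fails, so this raises the same ValueError as `vllm serve` does.
    ModelRegistry.is_multimodal_model(["LlamaForCausalLM"])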

[screenshot: the same ValueError in the terminal output]

Here is my environment:

Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov  6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-1031-aws-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Address sizes:                   48 bits physical, 48 bits virtual
Byte Order:                      Little Endian
CPU(s):                          4
On-line CPU(s) list:             0-3
Vendor ID:                       AuthenticAMD
Model name:                      AMD EPYC 7R13 Processor
CPU family:                      25
Model:                           1
Thread(s) per core:              2
Core(s) per socket:              2
Socket(s):                       1
Stepping:                        1
BogoMIPS:                        5299.99
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor:               KVM
Virtualization type:             full
L1d cache:                       64 KiB (2 instances)
L1i cache:                       64 KiB (2 instances)
L2 cache:                        1 MiB (2 instances)
L3 cache:                        8 MiB (1 instance)
NUMA node(s):                    1
NUMA node0 CPU(s):               0-3
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:        Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:        Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected

Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.18.1
[pip3] nvidia-nvjitlink-cu12==12.6.85
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.2.0
[pip3] torch==2.1.2
[pip3] torch-neuronx==2.1.2.2.3.2
[pip3] torch-xla==2.1.5
[pip3] torchvision==0.16.2
[pip3] transformers==4.46.3
[pip3] transformers-neuronx==0.12.313
[pip3] triton==2.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: (0, 'instance-type: inf2.xlarge\ninstance-id: i-012f1753f01231818\n+--------+--------+--------+---------+\n| NEURON | NEURON | NEURON |   PCI   |\n| DEVICE | CORES  | MEMORY |   BDF   |\n+--------+--------+--------+---------+\n| 0      | 2      | 32 GB  | 00:1f.0 |\n+--------+--------+--------+---------+', '')
vLLM Version: 0.6.4.post2.dev246+g9743d64e
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

LD_LIBRARY_PATH=/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/cv2/../../lib64:/usr/local/lib:/usr/lib

Here is the error log:

Process SpawnProcess-1:
Traceback (most recent call last):
 File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
   self.run()
 File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
   self._target(*self._args, **self._kwargs)
 File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
   raise e
 File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
   engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
 File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 114, in from_engine_args
   engine_config = engine_args.create_engine_config(usage_context)
 File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1010, in create_engine_config
   model_config = self.create_model_config()
 File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 938, in create_model_config
   return ModelConfig(
 File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/config.py", line 279, in __init__
   self.multimodal_config = self._init_multimodal_config(
 File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/config.py", line 305, in _init_multimodal_config
   if ModelRegistry.is_multimodal_model(architectures):
 File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 462, in is_multimodal_model
   model_cls, _ = self.inspect_model_cls(architectures)
 File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 422, in inspect_model_cls
   return self._raise_for_unsupported(architectures)
 File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 379, in _raise_for_unsupported
   raise ValueError(
ValueError: Model architectures ['LlamaForCausalLM'] failed to be inspected. Please check the logs for more details.
Here is my pip list:
absl-py                           2.1.0
accelerate                        1.1.1
aiohappyeyeballs                  2.4.4
aiohttp                           3.11.9
aiosignal                         1.3.1
annotated-types                   0.7.0
anyio                             4.7.0
argon2-cffi                       23.1.0
argon2-cffi-bindings              21.2.0
arrow                             1.3.0
asttokens                         3.0.0
async-lru                         2.0.4
async-timeout                     5.0.1
attrs                             24.2.0
aws-neuronx-runtime-discovery     2.9
awscli                            1.36.17
babel                             2.16.0
beautifulsoup4                    4.12.3
bleach                            6.2.0
blinker                           1.9.0
boto3                             1.35.76
botocore                          1.35.76
cachetools                        5.5.0
certifi                           2024.8.30
cffi                              1.17.1
charset-normalizer                3.4.0
click                             8.1.7
cloud-tpu-client                  0.10
cloudpickle                       3.1.0
colorama                          0.4.6
comm                              0.2.2
compressed-tensors                0.8.0
datasets                          2.19.1
debugpy                           1.8.9
decorator                         5.1.1
defusedxml                        0.7.1
dill                              0.3.8
diskcache                         5.6.3
distro                            1.9.0
docutils                          0.16
ec2-metadata                      2.14.0
einops                            0.8.0
environment-kernels               1.2.0
exceptiongroup                    1.2.2
executing                         2.1.0
fastapi                           0.115.6
fastjsonschema                    2.21.1
filelock                          3.16.1
Flask                             3.1.0
fqdn                              1.5.1
frozenlist                        1.5.0
fsspec                            2024.3.1
gguf                              0.10.0
google-api-core                   1.34.1
google-api-python-client          1.8.0
google-auth                       2.36.0
google-auth-httplib2              0.2.0
googleapis-common-protos          1.66.0
h11                               0.14.0
httpcore                          1.0.7
httplib2                          0.22.0
httptools                         0.6.4
httpx                             0.28.0
huggingface-hub                   0.26.3
idna                              3.10
importlib_metadata                8.5.0
iniconfig                         2.0.0
interegular                       0.3.3
ipykernel                         6.29.5
ipython                           8.30.0
ipywidgets                        8.1.5
islpy                             2023.2.5
isoduration                       20.11.0
itsdangerous                      2.2.0
jedi                              0.19.2
Jinja2                            3.1.4
jiter                             0.8.0
jmespath                          1.0.1
json5                             0.10.0
jsonpointer                       3.0.0
jsonschema                        4.23.0
jsonschema-specifications         2024.10.1
jupyter                           1.1.1
jupyter_client                    8.6.3
jupyter-console                   6.6.3
jupyter_core                      5.7.2
jupyter-events                    0.10.0
jupyter-lsp                       2.2.5
jupyter_server                    2.14.2
jupyter_server_terminals          0.5.3
jupyterlab                        4.3.2
jupyterlab_pygments               0.3.0
jupyterlab_server                 2.27.3
jupyterlab_widgets                3.0.13
lark                              1.2.2
libneuronxla                      2.0.5347.0
llvmlite                          0.43.0
lm-format-enforcer                0.10.9
lockfile                          0.12.2
MarkupSafe                        3.0.2
matplotlib-inline                 0.1.7
mistral_common                    1.5.1
mistune                           3.0.2
ml-dtypes                         0.2.0
mpmath                            1.3.0
msgspec                           0.18.6
multidict                         6.1.0
multiprocess                      0.70.16
nbclient                          0.10.1
nbconvert                         7.16.4
nbformat                          5.10.4
nest-asyncio                      1.6.0
networkx                          2.8.8
neuronx-cc                        2.15.143.0+e39249ad
notebook                          7.3.1
notebook_shim                     0.2.4
numba                             0.60.0
numpy                             1.25.2
nvidia-cublas-cu12                12.1.3.1
nvidia-cuda-cupti-cu12            12.1.105
nvidia-cuda-nvrtc-cu12            12.1.105
nvidia-cuda-runtime-cu12          12.1.105
nvidia-cudnn-cu12                 8.9.2.26
nvidia-cufft-cu12                 11.0.2.54
nvidia-curand-cu12                10.3.2.106
nvidia-cusolver-cu12              11.4.5.107
nvidia-cusparse-cu12              12.1.0.106
nvidia-nccl-cu12                  2.18.1
nvidia-nvjitlink-cu12             12.6.85
nvidia-nvtx-cu12                  12.1.105
oauth2client                      4.1.3
openai                            1.57.0
opencv-python-headless            4.10.0.84
outlines                          0.0.46
overrides                         7.7.0
packaging                         24.2
pandas                            2.2.3
pandocfilters                     1.5.1
parso                             0.8.4
partial-json-parser               0.2.1.1.post4
pexpect                           4.9.0
pgzip                             0.3.5
pillow                            10.4.0
pip                               22.0.2
platformdirs                      4.3.6
pluggy                            1.5.0
prometheus_client                 0.21.1
prometheus-fastapi-instrumentator 7.0.0
prompt_toolkit                    3.0.48
propcache                         0.2.1
protobuf                          3.20.3
psutil                            6.1.0
ptyprocess                        0.7.0
pure_eval                         0.2.3
py-cpuinfo                        9.0.0
pyairports                        2.1.1
pyarrow                           18.1.0
pyarrow-hotfix                    0.6
pyasn1                            0.6.1
pyasn1_modules                    0.4.1
pybind11                          2.13.6
pycountry                         24.6.1
pycparser                         2.22
pydantic                          2.10.3
pydantic_core                     2.27.1
Pygments                          2.18.0
pyparsing                         3.2.0
pytest                            8.3.4
python-daemon                     3.1.2
python-dateutil                   2.9.0.post0
python-dotenv                     1.0.1
python-json-logger                2.0.7
pytz                              2024.2
PyYAML                            6.0.2
pyzmq                             26.2.0
referencing                       0.35.1
regex                             2024.11.6
requests                          2.31.0
requests-unixsocket               0.3.0
rfc3339-validator                 0.1.4
rfc3986-validator                 0.1.1
rpds-py                           0.22.3
rsa                               4.7.2
s3transfer                        0.10.4
safetensors                       0.4.6.dev0
scipy                             1.11.2
Send2Trash                        1.8.3
sentencepiece                     0.2.0
setuptools                        59.6.0
six                               1.17.0
sniffio                           1.3.1
soupsieve                         2.6
stack-data                        0.6.3
starlette                         0.41.3
sympy                             1.13.3
terminado                         0.18.1
tiktoken                          0.7.0
tinycss2                          1.4.0
tokenizers                        0.20.4rc0
tomli                             2.2.1
torch                             2.1.2
torch-neuronx                     2.1.2.2.3.2
torch-xla                         2.1.5
torchvision                       0.16.2
tornado                           6.4.2
tqdm                              4.67.1
traitlets                         5.14.3
transformers                      4.46.3
transformers-neuronx              0.12.313
triton                            2.1.0
types-python-dateutil             2.9.0.20241003
typing_extensions                 4.12.2
tzdata                            2024.2
uri-template                      1.3.0
uritemplate                       3.0.1
urllib3                           2.2.3
uvicorn                           0.32.1
uvloop                            0.21.0
vllm                              0.6.4.post2.dev246+g9743d64e.neuron215
watchfiles                        1.0.0
wcwidth                           0.2.13
webcolors                         24.11.1
webencodings                      0.5.1
websocket-client                  1.8.0
websockets                        14.1
Werkzeug                          3.1.3
wget                              3.2
widgetsnbextension                4.0.13
xgrammar                          0.1.5
xxhash                            3.5.0
yarl                              1.18.3
zipp                              3.21.0

I would appreciate any advice on how to proceed, or any additional steps you could suggest to resolve this issue.

How would you like to use vllm

No response

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
xiao11lam added the usage (How to use vllm) label on Dec 5, 2024
@DarkLight1337
Member

DarkLight1337 commented Dec 6, 2024

ValueError: Model architectures ['LlamaForCausalLM'] failed to be inspected. Please check the logs for more details.

Can you show the full logs? Not just the final stack trace.

@xiao11lam
Author

@DarkLight1337 here it is, sorry:

(aws_neuron_venv_pytorch) ubuntu@ip-172-31-16-133:~$ vllm serve meta-llama/Llama-3.2-1B --device neuron --tensor-parallel-size 2 --block-size 8 --max-model-len 4096 --max-num-seqs 32
INFO 12-06 09:43:30 api_server.py:625] vLLM API server version 0.6.4.post2.dev246+g9743d64e
INFO 12-06 09:43:30 api_server.py:626] args: Namespace(subparser='serve', model_tag='meta-llama/Llama-3.2-1B', config='', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='meta-llama/Llama-3.2-1B', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=4096, guided_decoding_backend='xgrammar', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=2, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=8, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=32, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='neuron', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, dispatch_function=<function serve at 0x7fcb20a2ac20>)
INFO 12-06 09:43:30 __init__.py:60] No plugins found.
INFO 12-06 09:43:30 api_server.py:178] Multiprocessing frontend to use ipc:///tmp/b9d0735a-aacb-4123-a4a1-e86be0f67182 for IPC Path.
INFO 12-06 09:43:30 api_server.py:197] Started engine process with PID 10458
INFO 12-06 09:43:34 __init__.py:60] No plugins found.
ERROR 12-06 09:43:36 registry.py:328] Error in inspecting model architecture 'LlamaForCausalLM'
ERROR 12-06 09:43:36 registry.py:328] Traceback (most recent call last):
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 516, in _run_in_subprocess
ERROR 12-06 09:43:36 registry.py:328]     returned.check_returncode()
ERROR 12-06 09:43:36 registry.py:328]   File "/usr/lib/python3.10/subprocess.py", line 457, in check_returncode
ERROR 12-06 09:43:36 registry.py:328]     raise CalledProcessError(self.returncode, self.args, self.stdout,
ERROR 12-06 09:43:36 registry.py:328] subprocess.CalledProcessError: Command '['/home/ubuntu/aws_neuron_venv_pytorch/bin/python3.10', '-m', 'vllm.model_executor.models.registry']' returned non-zero exit status 1.
ERROR 12-06 09:43:36 registry.py:328] 
ERROR 12-06 09:43:36 registry.py:328] The above exception was the direct cause of the following exception:
ERROR 12-06 09:43:36 registry.py:328] 
ERROR 12-06 09:43:36 registry.py:328] Traceback (most recent call last):
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 326, in _try_inspect_model_cls
ERROR 12-06 09:43:36 registry.py:328]     return model.inspect_model_cls()
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 288, in inspect_model_cls
ERROR 12-06 09:43:36 registry.py:328]     return _run_in_subprocess(
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 519, in _run_in_subprocess
ERROR 12-06 09:43:36 registry.py:328]     raise RuntimeError(f"Error raised in subprocess:\n"
ERROR 12-06 09:43:36 registry.py:328] RuntimeError: Error raised in subprocess:
ERROR 12-06 09:43:36 registry.py:328] /usr/lib/python3.10/runpy.py:126: RuntimeWarning: 'vllm.model_executor.models.registry' found in sys.modules after import of package 'vllm.model_executor.models', but prior to execution of 'vllm.model_executor.models.registry'; this may result in unpredictable behaviour
ERROR 12-06 09:43:36 registry.py:328]   warn(RuntimeWarning(msg))
ERROR 12-06 09:43:36 registry.py:328] Traceback (most recent call last):
ERROR 12-06 09:43:36 registry.py:328]   File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
ERROR 12-06 09:43:36 registry.py:328]     return _run_code(code, main_globals, None,
ERROR 12-06 09:43:36 registry.py:328]   File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
ERROR 12-06 09:43:36 registry.py:328]     exec(code, run_globals)
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 540, in <module>
ERROR 12-06 09:43:36 registry.py:328]     _run()
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 533, in _run
ERROR 12-06 09:43:36 registry.py:328]     result = fn()
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 289, in <lambda>
ERROR 12-06 09:43:36 registry.py:328]     lambda: _ModelInfo.from_model_cls(self.load_model_cls()))
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 292, in load_model_cls
ERROR 12-06 09:43:36 registry.py:328]     mod = importlib.import_module(self.module_name)
ERROR 12-06 09:43:36 registry.py:328]   File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
ERROR 12-06 09:43:36 registry.py:328]     return _bootstrap._gcd_import(name[level:], package, level)
ERROR 12-06 09:43:36 registry.py:328]   File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
ERROR 12-06 09:43:36 registry.py:328]   File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
ERROR 12-06 09:43:36 registry.py:328]   File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
ERROR 12-06 09:43:36 registry.py:328]   File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
ERROR 12-06 09:43:36 registry.py:328]   File "<frozen importlib._bootstrap_external>", line 883, in exec_module
ERROR 12-06 09:43:36 registry.py:328]   File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 29, in <module>
ERROR 12-06 09:43:36 registry.py:328]     from vllm.attention import Attention, AttentionMetadata
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/attention/__init__.py", line 5, in <module>
ERROR 12-06 09:43:36 registry.py:328]     from vllm.attention.layer import Attention
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/attention/layer.py", line 266, in <module>
ERROR 12-06 09:43:36 registry.py:328]     direct_register_custom_op(
ERROR 12-06 09:43:36 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/utils.py", line 1647, in direct_register_custom_op
ERROR 12-06 09:43:36 registry.py:328]     schema_str = torch._custom_op.impl.infer_schema(op_func, mutates_args)
ERROR 12-06 09:43:36 registry.py:328] TypeError: infer_schema() takes 1 positional argument but 2 were given
ERROR 12-06 09:43:36 registry.py:328] 
Traceback (most recent call last):
  File "/home/ubuntu/aws_neuron_venv_pytorch/bin/vllm", line 8, in <module>
    sys.exit(main())
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/scripts.py", line 201, in main
    args.dispatch_function(args)
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/scripts.py", line 42, in serve
    uvloop.run(run_server(args))
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/uvloop/__init__.py", line 82, in run
    return loop.run_until_complete(wrapper())
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 649, in run_server
    async with build_async_engine_client(args) as engine_client:
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 116, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 200, in build_async_engine_client_from_engine_args
    engine_config = engine_args.create_engine_config()
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1010, in create_engine_config
    model_config = self.create_model_config()
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 938, in create_model_config
    return ModelConfig(
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/config.py", line 279, in __init__
    self.multimodal_config = self._init_multimodal_config(
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/config.py", line 305, in _init_multimodal_config
    if ModelRegistry.is_multimodal_model(architectures):
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 462, in is_multimodal_model
    model_cls, _ = self.inspect_model_cls(architectures)
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 422, in inspect_model_cls
    return self._raise_for_unsupported(architectures)
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 379, in _raise_for_unsupported
    raise ValueError(
ValueError: Model architectures ['LlamaForCausalLM'] failed to be inspected. Please check the logs for more details.
ERROR 12-06 09:43:39 registry.py:328] Error in inspecting model architecture 'LlamaForCausalLM'
ERROR 12-06 09:43:39 registry.py:328] Traceback (most recent call last):
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 516, in _run_in_subprocess
ERROR 12-06 09:43:39 registry.py:328]     returned.check_returncode()
ERROR 12-06 09:43:39 registry.py:328]   File "/usr/lib/python3.10/subprocess.py", line 457, in check_returncode
ERROR 12-06 09:43:39 registry.py:328]     raise CalledProcessError(self.returncode, self.args, self.stdout,
ERROR 12-06 09:43:39 registry.py:328] subprocess.CalledProcessError: Command '['/home/ubuntu/aws_neuron_venv_pytorch/bin/python3.10', '-m', 'vllm.model_executor.models.registry']' returned non-zero exit status 1.
ERROR 12-06 09:43:39 registry.py:328] 
ERROR 12-06 09:43:39 registry.py:328] The above exception was the direct cause of the following exception:
ERROR 12-06 09:43:39 registry.py:328] 
ERROR 12-06 09:43:39 registry.py:328] Traceback (most recent call last):
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 326, in _try_inspect_model_cls
ERROR 12-06 09:43:39 registry.py:328]     return model.inspect_model_cls()
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 288, in inspect_model_cls
ERROR 12-06 09:43:39 registry.py:328]     return _run_in_subprocess(
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 519, in _run_in_subprocess
ERROR 12-06 09:43:39 registry.py:328]     raise RuntimeError(f"Error raised in subprocess:\n"
ERROR 12-06 09:43:39 registry.py:328] RuntimeError: Error raised in subprocess:
ERROR 12-06 09:43:39 registry.py:328] /usr/lib/python3.10/runpy.py:126: RuntimeWarning: 'vllm.model_executor.models.registry' found in sys.modules after import of package 'vllm.model_executor.models', but prior to execution of 'vllm.model_executor.models.registry'; this may result in unpredictable behaviour
ERROR 12-06 09:43:39 registry.py:328]   warn(RuntimeWarning(msg))
ERROR 12-06 09:43:39 registry.py:328] Traceback (most recent call last):
ERROR 12-06 09:43:39 registry.py:328]   File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
ERROR 12-06 09:43:39 registry.py:328]     return _run_code(code, main_globals, None,
ERROR 12-06 09:43:39 registry.py:328]   File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
ERROR 12-06 09:43:39 registry.py:328]     exec(code, run_globals)
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 540, in <module>
ERROR 12-06 09:43:39 registry.py:328]     _run()
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 533, in _run
ERROR 12-06 09:43:39 registry.py:328]     result = fn()
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 289, in <lambda>
ERROR 12-06 09:43:39 registry.py:328]     lambda: _ModelInfo.from_model_cls(self.load_model_cls()))
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 292, in load_model_cls
ERROR 12-06 09:43:39 registry.py:328]     mod = importlib.import_module(self.module_name)
ERROR 12-06 09:43:39 registry.py:328]   File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
ERROR 12-06 09:43:39 registry.py:328]     return _bootstrap._gcd_import(name[level:], package, level)
ERROR 12-06 09:43:39 registry.py:328]   File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
ERROR 12-06 09:43:39 registry.py:328]   File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
ERROR 12-06 09:43:39 registry.py:328]   File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
ERROR 12-06 09:43:39 registry.py:328]   File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
ERROR 12-06 09:43:39 registry.py:328]   File "<frozen importlib._bootstrap_external>", line 883, in exec_module
ERROR 12-06 09:43:39 registry.py:328]   File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/llama.py", line 29, in <module>
ERROR 12-06 09:43:39 registry.py:328]     from vllm.attention import Attention, AttentionMetadata
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/attention/__init__.py", line 5, in <module>
ERROR 12-06 09:43:39 registry.py:328]     from vllm.attention.layer import Attention
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/attention/layer.py", line 266, in <module>
ERROR 12-06 09:43:39 registry.py:328]     direct_register_custom_op(
ERROR 12-06 09:43:39 registry.py:328]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/utils.py", line 1647, in direct_register_custom_op
ERROR 12-06 09:43:39 registry.py:328]     schema_str = torch._custom_op.impl.infer_schema(op_func, mutates_args)
ERROR 12-06 09:43:39 registry.py:328] TypeError: infer_schema() takes 1 positional argument but 2 were given
ERROR 12-06 09:43:39 registry.py:328] 
ERROR 12-06 09:43:39 engine.py:366] Model architectures ['LlamaForCausalLM'] failed to be inspected. Please check the logs for more details.
ERROR 12-06 09:43:39 engine.py:366] Traceback (most recent call last):
ERROR 12-06 09:43:39 engine.py:366]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
ERROR 12-06 09:43:39 engine.py:366]     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 12-06 09:43:39 engine.py:366]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 114, in from_engine_args
ERROR 12-06 09:43:39 engine.py:366]     engine_config = engine_args.create_engine_config(usage_context)
ERROR 12-06 09:43:39 engine.py:366]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1010, in create_engine_config
ERROR 12-06 09:43:39 engine.py:366]     model_config = self.create_model_config()
ERROR 12-06 09:43:39 engine.py:366]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 938, in create_model_config
ERROR 12-06 09:43:39 engine.py:366]     return ModelConfig(
ERROR 12-06 09:43:39 engine.py:366]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/config.py", line 279, in __init__
ERROR 12-06 09:43:39 engine.py:366]     self.multimodal_config = self._init_multimodal_config(
ERROR 12-06 09:43:39 engine.py:366]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/config.py", line 305, in _init_multimodal_config
ERROR 12-06 09:43:39 engine.py:366]     if ModelRegistry.is_multimodal_model(architectures):
ERROR 12-06 09:43:39 engine.py:366]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 462, in is_multimodal_model
ERROR 12-06 09:43:39 engine.py:366]     model_cls, _ = self.inspect_model_cls(architectures)
ERROR 12-06 09:43:39 engine.py:366]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 422, in inspect_model_cls
ERROR 12-06 09:43:39 engine.py:366]     return self._raise_for_unsupported(architectures)
ERROR 12-06 09:43:39 engine.py:366]   File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 379, in _raise_for_unsupported
ERROR 12-06 09:43:39 engine.py:366]     raise ValueError(
ERROR 12-06 09:43:39 engine.py:366] ValueError: Model architectures ['LlamaForCausalLM'] failed to be inspected. Please check the logs for more details.
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
    raise e
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/multiprocessing/engine.py", line 114, in from_engine_args
    engine_config = engine_args.create_engine_config(usage_context)
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 1010, in create_engine_config
    model_config = self.create_model_config()
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 938, in create_model_config
    return ModelConfig(
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/config.py", line 279, in __init__
    self.multimodal_config = self._init_multimodal_config(
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/config.py", line 305, in _init_multimodal_config
    if ModelRegistry.is_multimodal_model(architectures):
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 462, in is_multimodal_model
    model_cls, _ = self.inspect_model_cls(architectures)
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 422, in inspect_model_cls
    return self._raise_for_unsupported(architectures)
  File "/home/ubuntu/aws_neuron_venv_pytorch/lib/python3.10/site-packages/vllm/model_executor/models/registry.py", line 379, in _raise_for_unsupported
    raise ValueError(
ValueError: Model architectures ['LlamaForCausalLM'] failed to be inspected. Please check the logs for more details.

@DarkLight1337
Member

This looks like a problem with custom ops. @youkaichao might be able to help.

@youkaichao
Member

torch 2.1.2

ERROR 12-06 09:43:36 registry.py:328] TypeError: infer_schema() takes 1 positional argument but 2 were given

I think the Neuron torch version is too old.
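
For context, a quick way to see the mismatch (my own illustration, not from the comment; the module path is taken from the traceback above): torch 2.1.x still ships the single-argument form of infer_schema, while this vLLM build calls it with two arguments.

    # Illustrative check of the infer_schema signature mismatch
    import inspect
    import torch
    import torch._custom_op.impl as impl

    print(torch.__version__)                     # 2.1.2 in this environment
    print(inspect.signature(impl.infer_schema))
    # torch 2.1.x exposes a single-argument infer_schema, so vLLM's call
    #   torch._custom_op.impl.infer_schema(op_func, mutates_args)
    # fails with: TypeError: infer_schema() takes 1 positional argument but 2 were given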

@xiao11lam
Author

@youkaichao I think my torch-neuronx version is already the latest:
torch-neuronx 2.1.2.2.3.2

This is the latest available version:
[screenshot: torch-neuronx release listing showing 2.1.2.2.3.2 as the latest]

@xendo
Contributor

xendo commented Dec 9, 2024

I think the Neuron torch version is too old.

Yup, Neuron still requires torch==2.1, I think.

I was able to reproduce this locally (I'm not sure how it passes on CI). Here is a pull request with a fix:

#11016
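
To illustrate the shape of the incompatibility, here is a hypothetical sketch of one possible guard; this is my own illustration, not the actual change in #11016:

    # Hypothetical compatibility shim covering both infer_schema signatures
    # (illustration only; see #11016 for the real fix)
    import torch

    def infer_schema_compat(op_func, mutates_args):
        try:
            # Two-argument form, as called by this vLLM build (newer torch)
            return torch._custom_op.impl.infer_schema(op_func, mutates_args)
        except TypeError:
            # torch 2.1.x form: only the prototype function is accepted
            return torch._custom_op.impl.infer_schema(op_func)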
