I'm trying to run this with Docker on Windows, using a 3080 Ti. It runs the installer for a while, maxing out the GPU, and then eventually throws an error with this message:
docker run --gpus all -e HF_TOKEN=**** -p 8000:8000 ghcr.io/mistralai/mistral-src/vllm:latest --host 0.0.0.0 --model mistralai/Mistral-7B-v0.1
The HF_TOKEN environment variable set, logging to Hugging Face.
Token will not been saved to git credential helper. Pass add_to_git_credential=True if you want to set the git credential as well.
Token is valid (permission: read).
Your token has been saved to /root/.cache/huggingface/token
Login successful
Downloading (…)lve/main/config.json: 100%|██████████| 571/571 [00:00<00:00, 4.41MB/s]
INFO 09-30 15:27:08 llm_engine.py:72] Initializing an LLM engine with config: model='mistralai/Mistral-7B-v0.1', tokenizer='mistralai/Mistral-7B-v0.1', tokenizer_mode=auto, revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=auto, tensor_parallel_size=1, quantization=None, seed=0)
Downloading (…)okenizer_config.json: 100%|██████████| 963/963 [00:00<00:00, 8.18MB/s]
Downloading tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 20.1MB/s]
Downloading (…)/main/tokenizer.json: 100%|██████████| 1.80M/1.80M [00:00<00:00, 9.81MB/s]
Downloading (…)in/added_tokens.json: 100%|██████████| 42.0/42.0 [00:00<00:00, 369kB/s]
Downloading (…)cial_tokens_map.json: 100%|██████████| 72.0/72.0 [00:00<00:00, 628kB/s]
Downloading (…)l-00002-of-00002.bin: 100%|██████████| 5.06G/5.06G [03:19<00:00, 25.4MB/s]
Downloading (…)l-00001-of-00002.bin: 100%|██████████| 9.94G/9.94G [05:10<00:00, 32.1MB/s]
INFO 09-30 15:48:32 llm_engine.py:205] # GPU blocks: 0, # CPU blocks: 2048
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 616, in
engine = AsyncLLMEngine.from_engine_args(engine_args)
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 486, in from_engine_args
engine = cls(engine_args.worker_use_ray,
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 270, in init
self.engine = self._init_engine(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 306, in _init_engine
return engine_class(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 111, in init
self._init_cache()
File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 209, in _init_cache
raise ValueError("No available memory for the cache blocks. " ValueError: No available memory for the cache blocks. Try increasing gpu_memory_utilization when initializing the engine.
Can anyone provide guidance on what to change in the launch command to increase gpu_memory_utilization? Or is that set in the Docker Desktop app on Windows? I'm more used to running in Linux, but the Windows machine has the good GPU for gaming.
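For what it's worth, gpu_memory_utilization is a flag on vLLM's OpenAI-compatible server rather than a Docker setting, and since the image already forwards --host and --model to that server, it presumably forwards this flag too (an assumption on my part, not something I've verified against this image). A minimal sketch of the adjusted command:

# --gpu-memory-utilization raises the fraction of VRAM vLLM may claim (default 0.90);
# --max-model-len caps the context so a smaller KV cache is reserved
# (flag support depends on the bundled vLLM version).
docker run --gpus all -e HF_TOKEN=**** -p 8000:8000 \
  ghcr.io/mistralai/mistral-src/vllm:latest \
  --host 0.0.0.0 \
  --model mistralai/Mistral-7B-v0.1 \
  --gpu-memory-utilization 0.95 \
  --max-model-len 4096

That said, the two weight shards downloaded above total roughly 15 GB of bfloat16 parameters, while a 3080 Ti has 12 GB of VRAM, so the weights alone may not fit regardless of the utilization setting; a quantized checkpoint or a card with more memory may be needed.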