xinference | 2024-11-10 10:43:13,558 transformers.models.auto.image_processing_auto 592 INFO Could not locate the image processor configuration file, will try to use the model config instead.
xinference | Could not locate the image processor configuration file, will try to use the model config instead.
xinference | INFO 11-10 10:43:13 awq_marlin.py:89] The model is convertible to awq_marlin during runtime. Using awq_marlin kernel.
xinference | INFO 11-10 10:43:13 config.py:648] Using fp8 data type to store kv cache. It reduces the GPU memory footprint and boosts the performance. Meanwhile, it may cause accuracy drop without a proper scaling factor
xinference | 2024-11-10 10:43:13,566 xinference.api.restful_api 7 ERROR [address=0.0.0.0:10179, pid=165] Model qwen2.5-instruct cannot be run on engine sglang.
xinference | Traceback (most recent call last):
xinference | File "/usr/local/lib/python3.10/dist-packages/xinference/api/restful_api.py", line 987, in launch_model
xinference | model_uid = await (await self._get_supervisor_ref()).launch_builtin_model(
xinference | File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 231, in send
xinference | return self._process_result_message(result)
xinference | File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 102, in _process_result_message
xinference | raise message.as_instanceof_cause()
xinference | File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 659, in send
xinference | result = await self._run_coro(message.message_id, coro)
xinference | File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 370, in _run_coro
xinference | return await coro
xinference | File "/usr/local/lib/python3.10/dist-packages/xoscar/api.py", line 384, in __on_receive__
xinference | return await super().__on_receive__(message) # type: ignore
xinference | File "xoscar/core.pyx", line 558, in __on_receive__
xinference | raise ex
xinference | File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.__on_receive__
xinference | async with self._lock:
xinference | File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.__on_receive__
xinference | with debug_async_timeout('actor_lock_timeout',
xinference | File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
xinference | result = await result
xinference | File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 1040, in launch_builtin_model
xinference | await _launch_model()
xinference | File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 1004, in _launch_model
xinference | await _launch_one_model(rep_model_uid)
xinference | File "/usr/local/lib/python3.10/dist-packages/xinference/core/supervisor.py", line 983, in _launch_one_model
xinference | await worker_ref.launch_builtin_model(
xinference | File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 231, in send
xinference | return self._process_result_message(result)
xinference | File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/context.py", line 102, in _process_result_message
xinference | raise message.as_instanceof_cause()
xinference | File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 659, in send
xinference | result = await self._run_coro(message.message_id, coro)
xinference | File "/usr/local/lib/python3.10/dist-packages/xoscar/backends/pool.py", line 370, in _run_coro
xinference | return await coro
xinference | File "/usr/local/lib/python3.10/dist-packages/xoscar/api.py", line 384, in __on_receive__
xinference | return await super().__on_receive__(message) # type: ignore
xinference | File "xoscar/core.pyx", line 558, in __on_receive__
xinference | raise ex
xinference | File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.__on_receive__
xinference | async with self._lock:
xinference | File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.__on_receive__
xinference | with debug_async_timeout('actor_lock_timeout',
xinference | File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
xinference | result = await result
xinference | File "/usr/local/lib/python3.10/dist-packages/xinference/core/utils.py", line 78, in wrapped
xinference | ret = await func(*args, **kwargs)
xinference | File "/usr/local/lib/python3.10/dist-packages/xinference/core/worker.py", line 869, in launch_builtin_model
xinference | model, model_description = await asyncio.to_thread(
xinference | File "/usr/lib/python3.10/asyncio/threads.py", line 25, in to_thread
xinference | return await loop.run_in_executor(None, func_call)
xinference | File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
xinference | result = self.fn(*self.args, **self.kwargs)
xinference | File "/usr/local/lib/python3.10/dist-packages/xinference/model/core.py", line 73, in create_model_instance
xinference | return create_llm_model_instance(
xinference | File "/usr/local/lib/python3.10/dist-packages/xinference/model/llm/core.py", line 216, in create_llm_model_instance
xinference | llm_cls = check_engine_by_spec_parameters(
xinference | File "/usr/local/lib/python3.10/dist-packages/xinference/model/llm/llm_family.py", line 1136, in check_engine_by_spec_parameters
xinference | raise ValueError(f"Model {model_name} cannot be run on engine {model_engine}.")
xinference | ValueError: [address=0.0.0.0:10179, pid=165] Model qwen2.5-instruct cannot be run on engine sglang.
xinference | Traceback (most recent call last):
xinference | File "/usr/local/bin/xinference", line 8, in <module>
xinference | sys.exit(cli())
xinference | File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1157, in __call__
xinference | return self.main(*args, **kwargs)
xinference | File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1078, in main
xinference | rv = self.invoke(ctx)
xinference | File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1688, in invoke
xinference | return _process_result(sub_ctx.command.invoke(sub_ctx))
xinference | File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 1434, in invoke
xinference | return ctx.invoke(self.callback, **ctx.params)
xinference | File "/usr/local/lib/python3.10/dist-packages/click/core.py", line 783, in invoke
xinference | return __callback(*args, **kwargs)
xinference | File "/usr/local/lib/python3.10/dist-packages/click/decorators.py", line 33, in new_func
xinference | return f(get_current_context(), *args, **kwargs)
xinference | File "/usr/local/lib/python3.10/dist-packages/xinference/deploy/cmdline.py", line 906, in model_launch
xinference | model_uid = client.launch_model(
xinference | File "/usr/local/lib/python3.10/dist-packages/xinference/client/restful/restful_client.py", line 959, in launch_model
xinference | raise RuntimeError(
xinference | RuntimeError: Failed to launch model, detail: [address=0.0.0.0:10179, pid=165] Model qwen2.5-instruct cannot be run on engine sglang.
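The engine choices offered by Xinference normally depend on which backend packages are importable inside the container, so one quick check is whether sglang can be imported there at all. This is an assumption about the cause, not something confirmed by the logs above; a minimal sketch:

```python
# Run inside the xinference container, e.g. `docker exec -it <container> python3`.
# If this import fails, the sglang backend is not installed in the 0.16.3 image,
# which would explain the missing engine option (assumed cause, not confirmed).
try:
    import sglang
    print("sglang available:", sglang.__version__)
except ImportError as exc:
    print("sglang not importable:", exc)
```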
System Info / 系統信息
Driver Version: 535.171.04 CUDA Version: 12.2
Running Xinference with Docker? / 是否使用 Docker 运行 Xinference?
Yes (0.16.3 Docker image).
Version info / 版本信息
xinference 0.16.3 (docker image)
The command used to start Xinference / 用以启动 xinference 的命令
Reproduction / 复现过程
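The launch was done from the CLI (see the cmdline.py model_launch frames in the traceback above). The same failure can be reproduced from Python via the RESTful client; this is a minimal sketch with an assumed local endpoint and assumed model size/format/quantization values:

```python
from xinference.client import Client

# Hypothetical endpoint; replace with the actual Xinference server address.
client = Client("http://localhost:9997")

# Launching qwen2.5-instruct with the sglang engine fails on the 0.16.3 image with
# "Model qwen2.5-instruct cannot be run on engine sglang." (see traceback above).
model_uid = client.launch_model(
    model_name="qwen2.5-instruct",
    model_engine="sglang",
    model_size_in_billions=7,   # assumed size; other sizes show the same error
    model_format="awq",         # assumed, matching the awq_marlin line in the log
    quantization="Int4",        # assumed quantization
)
```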
Expected behavior / 期待表现
In the 0.16.3 Docker image, the sglang option is missing from the model engine choices (both CLI and WebUI).
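Expected: sglang is listed among the available engines for qwen2.5-instruct. A sketch of how to confirm what the server actually offers; it assumes the RESTful client exposes query_engine_by_model_name (if that helper is not present in 0.16.3, this is only an assumption and the engine list would have to be checked another way):

```python
from xinference.client import Client

client = Client("http://localhost:9997")  # hypothetical endpoint

# Expected behavior: the returned mapping includes an entry for the sglang
# engine for qwen2.5-instruct; on the 0.16.3 Docker image it does not appear.
engines = client.query_engine_by_model_name("qwen2.5-instruct")
print(list(engines.keys()))
```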