After switching to v1.0.0, the problem was resolved.
System Info / 系統信息
Ubuntu 22, CUDA Version: 12.5
Running Xinference with Docker? / 是否使用 Docker 运行 Xinference?
Yes.
Version info / 版本信息
v0.16.3
The command used to start Xinference / 用以启动 xinference 的命令
docker run -d --restart=always --name=xinference \
  -v /opt/xinference_gpu:/opt/xinference \
  -e XINFERENCE_HOME=/opt/xinference -e XINFERENCE_MODEL_SRC=modelscope \
  -p 9998:9997 --gpus all \
  xprobe/xinference:latest xinference-local -H 0.0.0.0 --log-level debug
Reproduction / 复现过程
1. On the Launch Model page, select an embedding model with device set to GPU; it launches successfully.
2. On the Running Models page, the embedding model shows as running.
3. Running nvidia-smi on the server shows the GPU is not being used.
4. During inference, running top on the server shows very high CPU usage.
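Steps 3–4 suggest the model silently fell back to CPU. A minimal diagnostic sketch to run inside the container (assuming PyTorch is installed there, as the xinference image is expected to bundle it; that is an assumption, not confirmed by this report) to check whether CUDA is actually visible to the runtime:

```python
# Diagnostic sketch: check whether PyTorch can see a CUDA device.
# If this prints False inside the container, the model will run on CPU
# even when "GPU" is selected in the launch UI.
import importlib.util


def cuda_available() -> bool:
    """Return True only if torch is importable and reports a usable CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return False  # torch not installed at all
    import torch  # deferred import so the check degrades gracefully
    return torch.cuda.is_available()


print("CUDA available:", cuda_available())
```

If this prints False while nvidia-smi works on the host, the `--gpus all` passthrough or the container's CUDA runtime is the likely culprit rather than the model itself.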
Expected behavior / 期待表现
Run the embedding model on the GPU.