System Info: CUDA 12.4; OS: Windows 10 with WSL2
Version info: xinference 0.16.3
The command used to start Xinference:
XINFERENCE_HOME=/home/xxx/xinference/ xinference-local --host 0.0.0.0 --port 9997 --log-level debug
Expected behavior: the GPU can be selected.
python -c "import torch; print(torch.cuda.is_available())"
Run this and check the output.
In reply to: python -c "import torch; print(torch.cuda.is_available())"

The output is also true. Not sure whether this is caused by Win10's WSL; the same setup runs normally on Win11's WSL.
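Since the same availability check returns True on both machines, a fuller diagnostic may help narrow down what differs between the Win10 and Win11 WSL environments. A minimal sketch, assuming PyTorch is installed (these are all standard torch attributes, not commands from the thread):

```python
import torch

# Print the CUDA details that most often differ between WSL setups.
print("torch version:", torch.__version__)
print("built with CUDA:", torch.version.cuda)  # None on CPU-only builds
print("cuda available:", torch.cuda.is_available())
print("device count:", torch.cuda.device_count())
if torch.cuda.is_available():
    print("device 0:", torch.cuda.get_device_name(0))
```

Comparing this output (and `nvidia-smi` inside WSL) across the two machines can show whether the difference is in the torch build, the driver passthrough, or Xinference itself.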