CUDA 12.6 with cuDNN 9.3.0: I can't run it, even though the environment variables are correctly configured #359
Hi, you can refer to this document to downgrade your CUDA version. It is very detailed, and we are very grateful for @dimitribarbot's contribution.
I have the same problem.
Please note, however, that this document was written in the context of the LivePortrait extension for Automatic1111's Stable Diffusion WebUI. It should also work if you're not using Automatic1111's Stable Diffusion WebUI, but the key point for Windows users is to not have "Visual Studio 2022 Build Tools" v17.10 or above installed; otherwise it will not work, as those versions are not compatible with CUDA 11.8. Before proceeding with the XPose installation for CUDA 11.8, ensure that no "Visual Studio 2022 Build Tools" (17.10 or above) is installed, or uninstall it if it is.
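When following the downgrade document, it can help to confirm which CUDA toolkit is actually active. A minimal sketch of parsing `nvcc --version` output is shown below; the sample output string is illustrative, not copied from this thread:

```python
import re
import subprocess


def parse_nvcc_release(nvcc_output: str) -> str:
    """Extract the CUDA release (e.g. '11.8') from `nvcc --version` output."""
    match = re.search(r"release (\d+\.\d+)", nvcc_output)
    if match is None:
        raise ValueError("could not find a CUDA release in nvcc output")
    return match.group(1)


def active_cuda_release() -> str:
    """Run `nvcc --version` and return the active toolkit release."""
    output = subprocess.run(
        ["nvcc", "--version"], capture_output=True, text=True, check=True
    ).stdout
    return parse_nvcc_release(output)


if __name__ == "__main__":
    # Illustrative sample of typical `nvcc --version` output:
    sample = (
        "nvcc: NVIDIA (R) Cuda compiler driver\n"
        "Cuda compilation tools, release 11.8, V11.8.89\n"
    )
    print(parse_nvcc_release(sample))  # → 11.8
```

After the downgrade, `active_cuda_release()` should report `11.8`; if it still reports a 12.x release, the PATH likely still points at the newer toolkit.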
You say they're not compatible with CUDA 11, but I'm using CUDA 12.
But doesn't it have to match the version number of this NVIDIA component? That can't be downgraded.
Actually, I don't think we're talking about the same issue. My documentation aims at solving an issue with animal model inference when you have CUDA 12.x and PyTorch v2.1.x and are following the instructions under the Fast hands-on (animals) or the 快速上手（动物模型） section of the README.

Here, it seems to me that the problem is with the installation of onnxruntime, which is used during human model inference. The error message indicates that the installed version of onnxruntime is not compatible with the installed versions of CUDA/cuDNN. Indeed, looking at the onnxruntime documentation, it seems that only version 1.18.1 is compatible with cuDNN 9.x, whereas the LivePortrait requirements pin version 1.18.0. In addition, the onnxruntime-gpu installation instructions here say we should add an

For me, human model inference worked correctly after I ran this command (don't forget to activate your conda environment first via
If the previous command does not work, note that you can roll back to the default LivePortrait installation by running:
```
RuntimeError: C:\a_work\1\s\onnxruntime\python\onnxruntime_pybind_state.cc:866 onnxruntime::python::CreateExecutionProviderInstance CUDA_PATH is set but CUDA wasn't able to be loaded. Please install the correct version of CUDA and cuDNN as mentioned in the GPU requirements page (https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), make sure they're in the PATH, and that your GPU is supported.
```
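The error says `CUDA_PATH` is set but the CUDA libraries could not be loaded, and asks you to verify they are on `PATH`. A small stdlib-only diagnostic along these lines can help confirm whether `CUDA_PATH` points at a real installation and whether its `bin` directory is on `PATH`; the checks are a sketch, not an exhaustive validation of a CUDA install:

```python
import os
from pathlib import Path


def diagnose_cuda_path(env: dict) -> list:
    """Return human-readable findings about the CUDA environment variables."""
    findings = []
    cuda_path = env.get("CUDA_PATH")
    if not cuda_path:
        findings.append("CUDA_PATH is not set")
        return findings
    root = Path(cuda_path)
    if not root.is_dir():
        findings.append(f"CUDA_PATH points to a missing directory: {cuda_path}")
    # onnxruntime loads CUDA DLLs via PATH, so CUDA_PATH\bin must be listed there.
    bin_dir = str(root / "bin")
    path_entries = env.get("PATH", "").split(os.pathsep)
    if bin_dir not in path_entries:
        findings.append(f"{bin_dir} is not on PATH")
    if not findings:
        findings.append("CUDA_PATH and PATH look consistent")
    return findings


if __name__ == "__main__":
    for finding in diagnose_cuda_path(dict(os.environ)):
        print(finding)
```

If this reports a missing directory or a `bin` folder absent from `PATH`, fixing that first is cheaper than reinstalling CUDA.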
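The version mismatch discussed earlier in the thread (onnxruntime-gpu 1.18.0 pinned by LivePortrait versus 1.18.1 reported as the first release compatible with cuDNN 9.x) can be encoded as a quick sanity check. The mapping below covers only the two versions this thread mentions; the onnxruntime CUDA ExecutionProvider requirements page is the authoritative table:

```python
# Hypothetical helper: map the onnxruntime-gpu versions discussed in this
# thread to the cuDNN major version they were reported to work with.
CUDNN_MAJOR_FOR_ORT = {
    "1.18.0": 8,  # LivePortrait's pinned requirement
    "1.18.1": 9,  # first release reported compatible with cuDNN 9.x
}


def is_compatible(ort_version: str, cudnn_major: int) -> bool:
    """Return True if this onnxruntime-gpu version matches the cuDNN major."""
    expected = CUDNN_MAJOR_FOR_ORT.get(ort_version)
    return expected is not None and expected == cudnn_major


if __name__ == "__main__":
    # The reporter's setup: cuDNN 9.3.0 with the pinned onnxruntime-gpu 1.18.0.
    print(is_compatible("1.18.0", 9))  # → False (the mismatch behind the error)
    print(is_compatible("1.18.1", 9))  # → True
```

Running this against the reporter's setup makes the mismatch explicit: the pinned 1.18.0 does not match cuDNN 9.3.0, while upgrading to 1.18.1 would.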