Onnxruntime not found or doesn't come with acceleration providers #75
It's just a replacement for "DWPose doesn't support CUDA out of the box"
Changed the warning at 3c8cfd3
Looks CUDA related. Just ran into this myself. The ONNX site only lists support for CUDA 11.8. New ComfyUI is using 12.1 (I think, since that's what it's downloading now). Not sure this will get fixed till ONNX does something on their side.
After installing onnxruntime-gpu 1.16.1: DWPose: Onnxruntime with acceleration providers detected. Caching sessions (might take around half a minute)...
@Sostay "Falling back" is not an error
After installing onnxruntime (whether the GPU or CPU version), there is still an error message. So it seems there is no need to install onnxruntime at all, and we can just ignore the fallback?
As I said, "fallback" is not an error, but if it has "Failed to create CUDAExecutionProvider" or "Failed to create ROCMExecutionProvider" then it is
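That distinction can be sketched as a tiny pure function (hypothetical helper names, not the actual node code; the real check lives in comfyui_controlnet_aux's dwpose.py): an accelerated provider being absent triggers the harmless OpenCV fallback, while a provider that is listed but fails to initialize is the actual error case.

```python
# Hypothetical sketch of the selection logic described above:
# missing acceleration -> warn and fall back to OpenCV/CPU;
# a "Failed to create ...ExecutionProvider" message -> real error.
ACCEL_PROVIDERS = {"CUDAExecutionProvider", "ROCMExecutionProvider",
                   "CoreMLExecutionProvider"}

def choose_backend(available_providers):
    """Pick the first accelerated provider, else fall back to OpenCV/CPU."""
    for provider in available_providers:
        if provider in ACCEL_PROVIDERS:
            return ("onnxruntime", provider)
    # Fallback path: slow, but only a warning, not an error.
    return ("opencv-cpu", None)
```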
same |
Same here... but: `C:\Users\booty>nvcc --version` ... onnx 1.14.1, ComfyUI Revision: 1587 [f8caa24b] | Released on '2023-10-17' ... which I thought was supposed to be compatible with ONNXRuntime.
@illuculent |
any tips on how to do this safely with the portable version? |
Portable version of? Safe from what? |
What steps did you use? On Oct 28, 2023, kenny-kvibe wrote:
> +1 I've installed TensorRT and downgraded torch to use cu118 and also reinstalled onnxruntime-gpu. InvokeAI still uses cu118, and Comfy also works normally with it.
> No errors nor fallbacks:
> I did this because there's no cu121 listed here nor any of the 12.x versions: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements
> any tips on how to do this safely with the portable version?
> I have the portable version, what do you mean by "safely"?
TensorRT: https://developer.nvidia.com/tensorrt @haqthat I didn't use any steps |
Make sure you have everything in the system PATH variable.

```bat
@ECHO off
SETLOCAL
SET "PATH=X:\path\to\missing\files;%PATH%"
CD %~dp0ComfyUI
python main.py
ENDLOCAL
EXIT /B 0
```

```bash
#!/usr/bin/env bash
cd "$(dirname "$0")/ComfyUI"
PATH="/path/to/missing/files:$PATH" python main.py
exit 0
```

And place this script in the same folder where your ComfyUI folder is. Do this ^ if it says that some program you installed is missing or not found. I don't know your exact issue, so my answers are about what I think your issue is.
@Fannovel16 like you explained in another post, I added it in comfyui_controlnet_aux/requirements.txt :) Now I have both accelerations; CPUs and GPU run at 100%, and the fans too... DWPose: Onnxruntime with acceleration providers detected. Caching sessions (might take around half a minute)... Not a double copy/paste; the same message is shown 2 times like this. Full startup:

```
D:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
Prestartup times for custom nodes:
Total VRAM 12287 MB, total RAM 49135 MB
Loading: ComfyUI-Impact-Pack (V4.28.2)
Loading: ComfyUI-Impact-Pack (Subpack: V0.3)
Loading: ComfyUI-Manager (V0.36)
ComfyUI Revision: 1636 [e73ec8c4] | Released on '2023-11-01'
Registered sys.path: ['D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\init.py', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_pycocotools', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_oneformer', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mmpkg', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_midas_repo', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_detectron2', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src', 'D:\ComfyUI_windows_portable\ComfyUI\comfy', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\git\ext\gitdb', 'D:\ComfyUI_windows_portable\ComfyUI', 'D:\ComfyUI_windows_portable\python_embeded\python310.zip', 'D:\ComfyUI_windows_portable\python_embeded', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\win32', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\win32\lib', 'D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\Pythonwin', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules', 'D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\impact_subpack', '../..']
Import times for custom nodes:
Starting server
To see the GUI go to: http://127.0.0.1:8188
```
How to downgrade the Cuda from 12.1 to 11.8? |
Activate the virtual environment, uninstall torch, and install torch+cu118 with the command from https://pytorch.org/
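A minimal sketch of those steps, assuming a pip-managed venv (the cu118 index URL is the one pytorch.org publishes; adjust the package list to what you actually have installed):

```shell
# Inside the activated virtual environment:
pip uninstall -y torch torchvision torchaudio
# Reinstall wheels built against CUDA 11.8:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```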
Hello, this is not an error; it's just that TensorRT doesn't natively support these models. Maybe you can find the answer in issue #82.
Does it support acceleration on Apple silicon? I got this message when I start up ComfyUI:

```
/comfyui_controlnet_aux/node_wrappers/dwpose.py:24: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")
```

Then I installed https://github.com/cansik/onnxruntime-silicon but onnxruntime still cannot be found.
I resolved this by installing the cu118 build of PyTorch side-by-side with my current CUDA (v12.3), and:
Now I see
To use CUDA 12.* instead of 11.8, you can try installing the nightly binary like the following (for Python 3.8~3.11):
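For reference, the nightly CUDA 12 feed suggested by the ONNX Runtime docs around that time looked roughly like this (package name and index URL are taken from those docs and may have changed since):

```shell
# Remove existing builds first so they don't shadow the nightly wheel:
pip uninstall -y onnxruntime onnxruntime-gpu
# Nightly onnxruntime-gpu built against CUDA 12:
pip install ort-nightly-gpu --index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ort-cuda-12-nightly/pypi/simple/
```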
saved my life, thank you!
For latecomers: track this issue for version changes: microsoft/onnxruntime#13932
My device is an RTX 3080 Ti, which matches CUDA 11.7, but I found that the onnxruntime package only has CUDA 11.8 or 11.6 versions. I followed the steps and it doesn't work. What should I do?
@izonewonyoung, |
```
EP Error A:\a_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1193 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "C:\ComfyUI\python_embeded\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
```
@Zakutu, if you intend to use the TensorRT EP, please install TensorRT 8.6.1 for CUDA 11 (since official onnxruntime-gpu is for CUDA 11 right now). Please refer to https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/models/stable_diffusion/. It is a demo of using the TRT EP (or CUDA EP) with stable diffusion.
For anyone reading this thread looking for a solution for Apple Silicon, try cansik/onnxruntime-silicon. Install: On start up:
Running the DWPose Estimator on a 512x768 image (M1 Max / Sonoma 14.1.2):
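Per the cansik/onnxruntime-silicon README, the install is a drop-in replacement package; a sketch (verify against the repo's current instructions):

```shell
# Swap the stock wheel for the CoreML-enabled Apple Silicon build:
pip uninstall -y onnxruntime
pip install onnxruntime-silicon
```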
→ Solved. The cause was a Python conflict within the same environment. Existing Python → 3.10.6. After rebuilding ComfyUI portable on the cu118 version, it ran without any errors or warnings. ▶ Download the old version from the release assets.
It seems CUDA 12 packages came out just three days ago (as of this writing). All I had to do to make it work was to install the CUDA 12 version of the ONNX runtime. Hope this helps! 🙏

Some background. I'm running:
- Windows 10 Pro: 10.0.19045
- Python: 3.11.6
- Pip: 23.3.2
- GPU: NVIDIA GeForce GTX 980 Ti (🙈)

If I activate my environment: torch 2.1.2+cu121, CUDA 12.1
If something goes wrong with onnxruntime 1.17.0 and onnxruntime-gpu 1.17.0 together, you can try installing them separately (non-GPU version first, then the GPU version second).
Thank you very much, this solved the warning!
Any way to get this to run on the Windows ComfyUI portable? I'm running 12.3 as well...
Run it with the embedded python command. For example, mine is CUDA 12.3: Check the screenshot below.
See https://onnxruntime.ai/docs/install/. You can install it like the following:
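At the time, that install page documented a dedicated feed for CUDA 12 builds of onnxruntime-gpu; the command looked roughly like this (URL from those docs, and it may have changed since):

```shell
pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
```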
THX!!
It doesn't solve the problem
Another scenario: you have installed both the onnxruntime and onnxruntime-gpu packages. The plain onnxruntime is used by default, so just uninstall onnxruntime and keep the GPU version. I hope it helps you~
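A sketch of that cleanup, assuming pip (uninstalling both first avoids a stale CPU build shadowing the GPU one):

```shell
pip uninstall -y onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu
```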
[Fix-Tip] Long story short: it works with CUDA 12! My problem was these 3 folders: [Fix]
[Why] If you don't have a venv (Python virtual environment) installed, close and exit ComfyUI, then in the main ComfyUI folder open cmd, make sure you are in the main ComfyUI directory, and type this:
Doesn't work for me
Works for me, thx
The following fixed the error for me on W10, using the Windows portable version for nvidia GPUs via Powershell:
You have to make sure the embedded python distro (3.10) installs the dependency, hence the invocation using the embedded python .exe. It may not find the dep if installed using your command line environment. |
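Concretely, from the ComfyUI_windows_portable folder that might look like the following (the package shown is the GPU build; swap in whichever dependency was reported missing):

```shell
# Use the embedded interpreter so pip targets its site-packages,
# not your system Python:
.\python_embeded\python.exe -m pip install -U onnxruntime-gpu
```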
Thx!!! This is the way.
OMFG! Thank you very much! Worked for me. 😁 |
For me, the issue was that I have Note: Using
this worked for me in portable |
What is the problem? It seems that opencv is not running in the normal way. Does anyone know how to solve it?
```
E:\Stable Diffusion\ComfyUI\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device (super slow)
  warnings.warn("Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device (super slow)")
```