Too hard to run on Windows, lol (now available to run on Windows with a custom ComfyUI node) #36
Comments
Very easy to run on Windows as a standalone installation.
It needs a Visual Studio environment setup, so it's not easy.
If you had seen the video, you would find out that you don't need to build Triton. :)
I had Triton installed weeks ago (Triton 3.1, Python 3.11, CUDA 12.6). Thanks a lot, though it doesn't work.
That's why I skipped the Windows installation and created a RunPod template right away. :D That way I can access it from anywhere, even from a mobile phone.
Once our Autoencoder is merged into
It works with Triton on Windows; I am using it right now. You have to install a pre-compiled Triton wheel. Check out my article: https://www.linkedin.com/pulse/nvidia-labs-developed-sana-model-weights-gradio-demo-app-g%C3%B6z%C3%BCkara-gxirf/?trackingId=55yg59jISbuecGZgE8F8NA%3D%3D
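A quick sanity check after installing the pre-compiled wheel, to confirm Triton can actually JIT-compile and launch a kernel on Windows. This is just a generic vector-add kernel, nothing Sana-specific:

```python
# Minimal Triton sanity check: compile and run a trivial vector-add kernel.
# If this succeeds, the pre-compiled Windows wheel is installed correctly.
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


if __name__ == "__main__":
    print("triton version:", triton.__version__)
    n = 4096
    x = torch.rand(n, device="cuda")
    y = torch.rand(n, device="cuda")
    out = torch.empty_like(x)
    grid = (triton.cdiv(n, 256),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=256)
    assert torch.allclose(out, x + y), "Triton kernel produced wrong results"
    print("Triton JIT compile + kernel launch OK")
```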
It seems you are trying to use Sana in ComfyUI. Is any open-source code available now? Maybe we can collaborate on it. @zmwv823
May I request that you share the code to use this (using Diffusers)? I can't wait for it to be merged and want to test it on my local machine. P.S. I'm not a developer; I will just run the code.
@nitinmukesh Of course. There may be some changes in the final version, but you can try this file: https://github.com/lawrence-cj/diffusers/blob/Sana/sana.py
The script to convert Sana .pth checkpoints to diffusers safetensors is also available:
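A rough usage sketch of how that pipeline can be driven; the SanaPipeline class name, checkpoint path, and call arguments here are assumptions, so check sana.py for the actual entry point and defaults:

```python
# Rough usage sketch for the Sana pipeline from the linked diffusers branch.
# ASSUMPTIONS: the pipeline class is exported as SanaPipeline (import it from
# sana.py directly until the branch is merged) and a converted diffusers-format
# checkpoint lives at ./Sana_diffusers. Adjust both to your setup.
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "./Sana_diffusers",          # hypothetical local path to converted weights
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe(
    prompt="a cyberpunk cat wearing a spacesuit",
    height=1024,
    width=1024,
    num_inference_steps=20,
    guidance_scale=5.0,
).images[0]
image.save("sana_test.png")
```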
Thanks a lot, the VAE decode now works without Triton.
Is your ComfyUI code that supports Sana publicly available? I've been looking into ComfyUI recently.
Not ready yet; the text_encode part needs to be split out.
A ComfyUI custom node with lots of bugs:
Hello @lawrence-cj, I tried the diffusers version. Does it support memory optimizations? I got an error with sequential offload.
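For reference, the memory optimizations I mean are the standard diffusers helpers below; whether the Sana branch already supports them is exactly my question. The SanaPipeline class name and local path are the same assumptions as in the earlier sketch:

```python
# Standard diffusers memory-saving options tried with the Sana pipeline.
# enable_model_cpu_offload() keeps only the active sub-model on the GPU;
# enable_sequential_cpu_offload() is more aggressive (lowest VRAM, slowest).
import torch
from diffusers import SanaPipeline  # assumed class name, see sketch above

pipe = SanaPipeline.from_pretrained(
    "./Sana_diffusers",              # hypothetical local checkpoint path
    torch_dtype=torch.bfloat16,
)

# Option 1: per-model offload (good VRAM/speed trade-off).
pipe.enable_model_cpu_offload()

# Option 2: sequential (per-layer) offload, which is where the error occurred.
# pipe.enable_sequential_cpu_offload()

# Optionally reduce the VAE's peak memory during decode, if the autoencoder
# class implements these diffusers-style helpers.
# pipe.vae.enable_tiling()
# pipe.vae.enable_slicing()

image = pipe("a watercolor landscape", num_inference_steps=20).images[0]
image.save("sana_offload_test.png")
```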
Once it works in ComfyUI, please let me know. I think this method is at least a way to get it working under Windows 11 soon. ComfyUI works great here.
@zmwv823, so cool. I will look into it soon and credit your contribution if I make a PR for ComfyUI.
I strongly recommend looking into this repo: https://github.com/city96/ComfyUI_ExtraModels.
Yeah, we know this project and have collaborated with them before. I'll also mention it. BTW, I'll re-open this issue after ComfyUI is supported.
Tried playing with it in ComfyUI and wrote some simple init code.
By offloading the text encoder and VAE (prompt processing and latent decode can be separated in Comfy), only 3.5 GB of VRAM is required during the generation process; the text encoder needs 5 GB.
But it gets stuck at latent decode; it seems to need a Triton compile, or something else is wrong.
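To isolate that decode step outside of ComfyUI, this is roughly how a standalone latent decode could look with the diffusers autoencoder; the AutoencoderDC class name, subfolder layout, local path, scaling-factor handling, and latent shape are all assumptions here:

```python
# Standalone latent-decode sketch: load only the autoencoder, decode on the
# GPU, and keep every other model off the GPU. Names and paths are assumptions.
import torch
from diffusers import AutoencoderDC  # Sana's DC-AE, assuming diffusers ships it

vae = AutoencoderDC.from_pretrained(
    "./Sana_diffusers", subfolder="vae", torch_dtype=torch.bfloat16
).to("cuda")

# `latents` would come from the sampler node; a random tensor stands in here.
# Sana's DC-AE compresses 32x spatially with 32 latent channels, so a
# 1024x1024 image corresponds to a (1, 32, 32, 32) latent.
latents = torch.randn(1, 32, 32, 32, dtype=torch.bfloat16, device="cuda")

with torch.no_grad():
    image = vae.decode(latents / vae.config.scaling_factor, return_dict=False)[0]

print(image.shape)  # expected: torch.Size([1, 3, 1024, 1024])
vae.to("cpu")       # offload again once decode is done
torch.cuda.empty_cache()
```

If a plain decode like this runs without Triton, the hang is probably in the node's wiring rather than in the autoencoder itself.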