Model v0.9.1 consumes much more VRAM #81
I'm seeing the same issue here: VRAM usage is much higher on 0.9.1 than it was on 0.9, and I keep running into Out Of Memory errors with the provided i2v workflow. I don't have a solution, unfortunately, but I wanted to point out that this is not an isolated issue; it was also reported here: https://www.reddit.com/r/StableDiffusion/comments/1hhz17h/comment/m2v4hi0/
Same issue here, but I was able to fix it with a Clean VRAM node right after the Sampler and before VAE Decode. I also put another one between the Guider and the Sampler. That fixed the issue for me.
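For anyone who doesn't have that node pack: a minimal sketch of what a "Clean VRAM" step boils down to, assuming PyTorch is the backend (the `free_vram` helper name is mine, not from any node pack):

```python
import gc

def free_vram():
    """Release cached GPU memory between sampling and VAE decode.

    Roughly what a "Clean VRAM" node does: drop dead Python references,
    then ask PyTorch to return its cached allocator blocks to the driver.
    """
    gc.collect()  # free unreferenced tensors first
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # hand cached blocks back to CUDA
    except ImportError:
        pass  # no torch available; nothing GPU-side to free
```

Note this only releases *cached* memory; it cannot shrink what the model itself needs during generation, which matches the observation below.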
Can you provide a screenshot, please?
Where do you find that Clean VRAM node? I don't see it anywhere. Is it part of another package?
You can use the Free Memory tool as well to clean RAM and/or VRAM. Here's the link:
Clean VRAM does not reduce the model's memory consumption during generation; it only gets rid of the increased consumption during VAE decoding. I believe the model itself has become much more memory-hungry than the previous one.
Yeah, the new VAE node might need to expose a tiling option and a minimum tile size; then it could fit in VRAM (like kijai's Hunyuan ComfyUI VAE decoder does).
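To illustrate the tiled-decode idea: split the latent spatially, decode each tile, and stitch the upscaled tiles back together so peak memory scales with the tile size instead of the full frame. A rough sketch, assuming a `(C, H, W)` latent and an 8x spatial upscale; `decode_tiled` and `decode_fn` are hypothetical names, and the overlap/blending needed to hide seams is omitted:

```python
import numpy as np

def decode_tiled(latent, decode_fn, tile=32, scale=8):
    """Decode a (C, H, W) latent in spatial tiles to cap peak memory.

    decode_fn maps a latent tile to a (3, h*scale, w*scale) pixel array.
    Peak memory is bounded by one tile's activations, not the full frame.
    """
    _, h, w = latent.shape
    out = np.zeros((3, h * scale, w * scale), dtype=np.float32)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            lt = latent[:, y:y + tile, x:x + tile]  # may be smaller at edges
            out[:,
                y * scale:(y + lt.shape[1]) * scale,
                x * scale:(x + lt.shape[2]) * scale] = decode_fn(lt)
    return out
```

A real implementation overlaps neighboring tiles and blends the seams, since a convolutional VAE decoded tile-by-tile produces visible edges otherwise.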
Do we need to use the new VAE from the Lightricks repo? I don't use the new VAE file, and it works.
It seems that the new model consumes much more VRAM than model v0.9.
On model v0.9 I could generate 1280x720 with 97 frames, and all of this fit in 11 GB of VRAM (on a 2080 Ti).
On model v0.9.1 I can only start generation at a resolution of 632x280 with 97 frames. There is still free VRAM during generation, but consumption spikes sharply during VAE decoding, so this is the highest resolution I can reach once the VAE decoder's consumption is taken into account.
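For a sense of scale: even the fully decoded pixel tensor at the old settings is sizeable, and the VAE's intermediate activations are usually several times larger than that, which is why the spike happens at decode time. A back-of-the-envelope estimate (fp16 assumed; the helper name is mine):

```python
def decoded_frames_bytes(width, height, frames, channels=3, bytes_per_value=2):
    """Size of the fully decoded video tensor (RGB, fp16 by default).

    This counts only the output pixels, not the VAE's intermediate
    activations, which dominate peak memory in practice.
    """
    return width * height * frames * channels * bytes_per_value

# 1280x720, 97 frames, RGB, fp16 -> ~0.5 GiB just for the output pixels
print(decoded_frames_bytes(1280, 720, 97) / 2**30)
```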
Has anyone else encountered this problem?
I am using the new workflow from the description.
The question is whether this is caused by the model or by the nodes.