How to run it on low-VRAM systems? #269
Replies: 4 comments 12 replies
-
Running on an AMD RX Vega 64 (8 GB VRAM), I found that initially I couldn't upscale anything much beyond the default 512x512 before getting OOM errors. I've done the following things, which have allowed me to upscale to 1200x720 in img-to-img:

- Followed info I saw in a discussion somewhere here to manually install the PyTorch 2.1 nightly + ROCm build in the venv with pip (and may have manually uninstalled the existing PyTorch beforehand); a rough sketch of those commands follows this comment.
- Manually removed xformers from the venv with pip uninstall, and commented out the code that tries to install it.
- Commented out other code that might try installing things at startup, like the setup.run_setup() and setup.check_torch() invocations in launch.py (only after an initial run which did those setup() calls, of course).
- Configured "Upcast to float 32", the "sub-quadratic" cross-attention optimization and "SDP disable memory attention" in the Stable Diffusion settings section.
- Got used to using the "unload checkpoint" and "reload checkpoint" buttons in the Settings tab before doing any upscaling. I think this might be down to the PyTorch 2.1 nightly sometimes leaving the checkpoint loaded when it shouldn't, and so wasting VRAM.

Interestingly, I tried enabling the tiled VAE option in the "from image" tab, and without the "sub-quadratic" setting this gave an improvement. After enabling sub-quadratic, checking tiled VAE produces a stacktrace. Since sub-quadratic gave the bigger improvement, I've gone with that.

Please note, this is all voodoo: I have no real idea what I'm doing, and I don't know enough about Python, PyTorch, diffusion models or graphics cards to do better. Like this, I can scale an image up to 1200x720 on the "from image" tab, which was impossible for me previously, so maybe it helps someone. Alternatively, maybe someone can help me :) I can well imagine the horror a developer might feel seeing advice to trample all over their code like this, so I don't advise or recommend it; I just mention what I've done. Also, I have yet to get around to trying the latest install options provided, which might well avoid some of the manual steps here.

I should mention this is a lot better than the results I got from the parent repo and from Comfy, both of which consistently caused an immediate hard reset on reaching the VAE stage once I coaxed them into using the GPU at all. I much appreciate the work by the developer!
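For anyone trying to reproduce the manual venv steps above, here is a minimal sketch (run with the webui's venv activated). The exact nightly index URL and ROCm version are my assumptions, since the actual command from the discussion referred to above isn't reproduced here; check pytorch.org for the current ROCm nightly wheel before running it:

```
# remove the packages the launcher installed (the xformers removal is the step described above)
pip uninstall -y torch torchvision xformers

# install a PyTorch 2.1 nightly built against ROCm
# (the index URL / ROCm version below are placeholders -- verify against pytorch.org)
pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm5.6
```

The launch.py edits mentioned above (commenting out the setup.run_setup() / setup.check_torch() calls) are presumably what stops the launcher from reinstalling the stock packages on the next start.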
-
Create a file namex.bat containing:

@echo off
set PYTHON="C:\Users\Admin\AppData\Local\Programs\Python\Python310\python.exe"
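A quick sanity check before launching is to run the configured interpreter directly and confirm it really is the expected Python 3.10 (the path below is just the one from the snippet above, so adjust it to your own install):

```
"C:\Users\Admin\AppData\Local\Programs\Python\Python310\python.exe" --version
```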
-
On the other webui I just add the line

set COMMANDLINE_ARGS= --opt-split-attention --precision full --no-half --lowvram --always-batch-cond-uncond

in the webui-user.bat file, after which it manages to run on low VRAM; it's slow, but it runs.

Is there any beginner's guide to achieving this on this system?
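For context, here is roughly what the edited webui-user.bat would look like with that line in place. This is a sketch based on the stock AUTOMATIC1111-style launcher (empty PYTHON/GIT/VENV_DIR lines and a call to webui.bat at the end); whether this repo's launcher accepts exactly the same flags is not something I've verified:

```
@echo off

set PYTHON=
set GIT=
set VENV_DIR=

rem low-VRAM flags from the comment above
set COMMANDLINE_ARGS=--opt-split-attention --precision full --no-half --lowvram --always-batch-cond-uncond

call webui.bat
```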