-
Please verify and update your scripts.
-
Can you elaborate on whether this needs NVLink/SLI or duplicate GPUs to function? Or can you mix, e.g., a 4090 with a 3060?
-
I have seen a few mentions of multi-GPU training, especially recently, and the README even mentions it for SDXL:
sd-scripts/README.md, line 255 at 0a52b83
Actually trying to run that on the main or dev branch doesn't work; it just says there are no such arguments and dumps the help page.
I have also tried configuring accelerate for non-distributed GPU training but with multiple GPU IDs or "all" in the setup wizard (as mentioned in another issue), which also just OOM'd the main GPU and didn't offload anything at all to the second.
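For reference, a typical Accelerate multi-GPU launch would look roughly like this (a sketch only, assuming two GPUs and the train_network.py script; the script name and training arguments are just placeholders):

```bash
# Sketch of a data-parallel launch through Hugging Face Accelerate.
# Assumptions: two GPUs, train_network.py, and placeholder paths/arguments.
accelerate launch --num_processes 2 --multi_gpu --gpu_ids all \
    train_network.py \
    --pretrained_model_name_or_path model.safetensors \
    --train_data_dir dataset/ \
    --output_dir output/
```

As far as I understand, a launch like this is plain data parallelism, so each process keeps a full copy of the model on its own GPU; that would explain why a second card never relieves memory pressure on the first.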
So what is the actual way to get multi-GPU training working? Thanks.