Instead of loading multiple full-sized Stable Diffusion models, I wonder about the potential benefits of caching high-density LoCon or LyCORIS representations. This could be a more efficient way to operate on consumer hardware. I'm not certain whether this approach would actually offer any improvement, though, or whether this pipeline is something that would support or benefit from it.
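As a rough sketch of what I mean (assuming a diffusers + PEFT setup, with hypothetical adapter paths and names, and not necessarily how this pipeline's experts are structured), the idea would be to keep one base pipeline resident and swap small cached low-rank adapters per expert instead of loading several full checkpoints:

```python
import torch
from diffusers import StableDiffusionPipeline

# One resident base model; experts live on disk as small low-rank adapters.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical paths: each "expert" is cached as a LoRA/LyCORIS-style adapter
# (LyCORIS/LoCon formats may need conversion before diffusers can load them).
pipe.load_lora_weights("experts/coarse", adapter_name="coarse")
pipe.load_lora_weights("experts/fine", adapter_name="fine")

def generate(prompt, expert, steps):
    # Activate only the adapter for this run instead of reloading a checkpoint.
    pipe.set_adapters([expert])
    return pipe(prompt, num_inference_steps=steps).images[0]

image = generate("a castle at dusk", "coarse", 16)
```

Each adapter would typically be tens of megabytes rather than a multi-gigabyte checkpoint, so swapping them should be much cheaper than reloading full models, if the per-expert weights can actually be expressed that way.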
The other thing to consider is whether the base-model tests and examples were run with as high an iteration count as the multi-expert models: 4x2 might want an example with 16 iterations, 2x1 a run with 32 iterations, and the base model a run with 128 iterations, but only if that's applicable to how it works.
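Just to make the comparison explicit, a rough sketch of pinning those per-configuration step budgets (the config names and numbers are only the ones floated above, not anything the pipeline itself defines):

```python
# Suggested per-run iteration counts for comparable benchmarks; these are the
# values suggested above, not something derived from the pipeline's design.
SUGGESTED_STEPS = {
    "base": 128,  # single full model
    "2x1": 32,
    "4x2": 16,
}

def steps_for(config: str, default: int = 50) -> int:
    """Return the suggested iteration count for a benchmark configuration."""
    return SUGGESTED_STEPS.get(config, default)

print(steps_for("4x2"))  # -> 16
```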