Any benefit to implementing this with lycoris/lora instead of full models? #9

Open
Nyxeka opened this issue Feb 5, 2024 · 2 comments

Comments

Nyxeka commented Feb 5, 2024

Instead of loading multiple full-sized Stable Diffusion models, I wonder about the potential benefits of caching high-density LoCon or LyCORIS representations. This could be a more efficient way to operate on consumer hardware. I'm not certain whether this approach would actually offer any improvement, though, or whether this pipeline is something that would support or benefit from it.
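For reference, a minimal sketch of what that could look like using the stock diffusers LoRA API (`load_lora_weights` / `set_adapters`). The repo IDs are placeholders, and this is only an illustration of the idea, not how this project's multi-expert pipeline currently works:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load a single full base checkpoint once.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Attach each "expert" as a LoRA adapter instead of keeping N full
# checkpoints resident in memory (repo IDs below are placeholders).
pipe.load_lora_weights("someuser/expert-a-lora", adapter_name="expert_a")
pipe.load_lora_weights("someuser/expert-b-lora", adapter_name="expert_b")

# Switch or blend experts per prompt without reloading any full model.
pipe.set_adapters(["expert_a", "expert_b"], adapter_weights=[0.7, 0.3])
image = pipe("an astronaut riding a horse", num_inference_steps=32).images[0]
image.save("blended_experts.png")
```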

@Warlord-K (Contributor) commented

Will look into it, thanks for the suggestion!

@Nyxeka (Author) commented Feb 6, 2024

The other thing to consider is whether the base-model tests and examples were run with as high an iteration count as the multi-expert models.

The 4x2 configuration might warrant an example with 16 iterations, the 2x1 a run with 32 iterations, and the base model a run with 128 iterations, but only if that's applicable to how it works...
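For a comparison like that, a hypothetical harness could simply sweep `num_inference_steps` per configuration. The step counts below are the ones suggested above; the checkpoint is a placeholder, since swapping in the actual 4x2 / 2x1 / base models isn't shown in this thread:

```python
import time
import torch
from diffusers import StableDiffusionXLPipeline

# Step counts from the suggestion above; substitute the matching
# 4x2 / 2x1 / base checkpoints when running a real comparison.
configs = {"4x2": 16, "2x1": 32, "base": 128}

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "an astronaut riding a horse"
for name, steps in configs.items():
    torch.cuda.synchronize()
    start = time.perf_counter()
    image = pipe(prompt, num_inference_steps=steps).images[0]
    torch.cuda.synchronize()
    print(f"{name}: {steps} steps in {time.perf_counter() - start:.1f}s")
    image.save(f"{name}_{steps}steps.png")
```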
