FLUX LoRA Support #6847
Conversation
Force-pushed from 9c2f173 to c3aa092
Note to reviewers: Please test any LoRAs that you have (SD or FLUX). There are many small LoRA format variations, and there's a risk of breaking one of them with this change.
I don't see how alpha=8 would work for any PEFT LoRAs that aren't also rank=8.
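For context on that comment: in the standard LoRA parameterization, the weight delta is scaled by alpha/rank, so a hardcoded alpha=8 only reproduces the trained scale for rank-8 LoRAs. A minimal sketch (hypothetical helper and shapes, not code from this PR):

```python
import torch

def lora_delta(up: torch.Tensor, down: torch.Tensor, alpha: float) -> torch.Tensor:
    """Compute the LoRA weight delta with the standard alpha/rank scaling.

    `down` has shape (rank, in_features); `up` has shape (out_features, rank).
    The effective scale is alpha / rank, so a hardcoded alpha only gives the
    intended scale when it matches the rank the LoRA was trained with.
    """
    rank = down.shape[0]
    return (alpha / rank) * (up @ down)

# A rank-16 LoRA loaded with a hardcoded alpha of 8 is silently down-weighted:
down = torch.randn(16, 64)  # rank=16
up = torch.randn(64, 16)
delta_correct = lora_delta(up, down, alpha=16.0)  # scale = 1.0
delta_wrong = lora_delta(up, down, alpha=8.0)     # scale = 0.5
assert torch.allclose(delta_wrong, 0.5 * delta_correct)
```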
…e LoRAModelRaw class.
…SDXL lora key conversions, out of LoRAModelRaw and into LoRALoader.
…xception if an unexpected key is encountered, and add a corresponding unit test.
…X kohya LoRA format.
…xup dtype handling in the sidecar layers.
… in the hopes that the latter works properly on MacOS.
…de is uglier, it turns out that the Module implementation of some ops like .to(...) is noticeably slower.
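Regarding the last commit above: a minimal sketch of the kind of trade-off being described, where LoRA weights are held as plain tensors so they can be moved with `Tensor.to(...)` directly rather than through `nn.Module.to(...)`, which recurses over submodules, parameters, and buffers. The class and its layout are hypothetical, not the PR's actual code:

```python
import torch

class LoRALayerRawTensors:
    """Holds LoRA weights as plain tensors rather than nn.Module parameters.

    Moving raw tensors directly avoids nn.Module.to(...)'s module-tree
    traversal, which can add up when patching/unpatching many layers.
    """

    def __init__(self, up: torch.Tensor, down: torch.Tensor, alpha: float):
        self.up = up
        self.down = down
        self.alpha = alpha

    def to(self, device=None, dtype=None) -> None:
        # Move each tensor directly; no module traversal involved.
        self.up = self.up.to(device=device, dtype=dtype)
        self.down = self.down.to(device=device, dtype=dtype)
```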
Force-pushed from 08e2b76 to d51f2c5
If you're going for LyCORIS support, note that support for this in PEFT is currently very lacking, with an open issue requesting community help on updating the various modules. It's very easy to wrap the upstream LyCORIS library modules, though, as @KohakuBlueleaf has added a functional API for this use case. It just wraps the Diffusers model, and there are built-in methods for extracting models and playing with them in other ways. I'd highly recommend switching away from PEFT for LyCORIS stuff.
…ches(), and add unit tests for the stacked LoRA case.
Thanks for this context. We explored using PEFT at one point, but decided against it because it didn't meet our requirements for high patching/unpatching speeds. I'll leave integration of the upstream LyCORIS lib for future work. For now, we have our own implementation of the LyCORIS layers for fused execution, and partial support for sidecar execution (which is easy to extend).
I agree that the patching speed of diffusers could be improved. If you have any observations/suggestions, please open an issue report.
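To illustrate the fused vs. sidecar distinction discussed in this exchange, here is a minimal sketch under the usual LoRA conventions. All function names and signatures are hypothetical, not the PR's API:

```python
import torch

def fuse_lora(weight: torch.Tensor, up: torch.Tensor, down: torch.Tensor,
              alpha: float, lora_weight: float = 1.0) -> torch.Tensor:
    """Fused patching: bake the low-rank delta into the base weight.

    Fast at inference time (no extra matmuls per forward pass), but the
    original weight must be restored to unpatch, and this can't be applied
    to quantized base weights.
    """
    rank = down.shape[0]
    weight += lora_weight * (alpha / rank) * (up @ down)
    return weight

def sidecar_forward(x: torch.Tensor, base_forward, up: torch.Tensor,
                    down: torch.Tensor, alpha: float,
                    lora_weight: float = 1.0) -> torch.Tensor:
    """Sidecar execution: leave the base layer untouched and add the
    low-rank contribution at forward time. Works even when the base weight
    is quantized, at the cost of extra compute on every forward pass.
    """
    rank = down.shape[0]
    scale = lora_weight * (alpha / rank)
    return base_forward(x) + scale * (x @ down.T @ up.T)
```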
Summary
This PR adds support for FLUX LoRA models on both quantized and non-quantized base models.
Supported formats:
Full changelist:
invokeai/backend/lora
QA Instructions
Note to reviewers: I tested everything in this checklist. Feel free to re-verify any of this, but also test any LoRAs that you have. There are many small LoRA format variations, and there's a risk of breaking one of them with this change.
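As a rough illustration of why these format variations are fragile: LoRA state dicts mostly differ in key naming conventions. A sketch of format detection by key pattern (hypothetical helper, not the PR's actual detection logic):

```python
def guess_lora_format(state_dict: dict) -> str:
    """Rough LoRA format detection by key naming convention.

    Kohya-style checkpoints use `lora_down` / `lora_up` weight keys (plus
    per-layer `alpha` keys), while PEFT/diffusers-style checkpoints use
    `lora_A` / `lora_B`.
    """
    keys = state_dict.keys()
    if any(".lora_down.weight" in k or ".lora_up.weight" in k for k in keys):
        return "kohya"
    if any(".lora_A.weight" in k or ".lora_B.weight" in k for k in keys):
        return "peft"
    return "unknown"
```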
FLUX LoRA
Regression Tests
USE_MODULAR_DENOISE=1
Smoke test with LoRATest for output regression with the following LoRA formats:
Checklist