Can't load multiple loras when using Flux Control LoRA #10180
Comments
Oh, we should have anticipated this use case. I think the correct check should be updated. Even with the above fix, I don't think the weights would load as expected, because the depth control LoRA expands the input features of x_embedder. cc @yiyixuxu as well
It does indeed error out with the corrected if-statement as well, due to the explanation above. Trace:
Traceback (most recent call last):
File "/home/aryan/work/diffusers/dump4.py", line 9, in <module>
pipe.load_lora_weights(hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"))
File "/home/aryan/work/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1868, in load_lora_weights
self.load_lora_into_transformer(
File "/home/aryan/work/diffusers/src/diffusers/loaders/lora_pipeline.py", line 1932, in load_lora_into_transformer
transformer.load_lora_adapter(
File "/home/aryan/work/diffusers/src/diffusers/loaders/peft.py", line 320, in load_lora_adapter
incompatible_keys = set_peft_model_state_dict(self, state_dict, adapter_name, **peft_kwargs)
File "/raid/aryan/nightly-venv/lib/python3.10/site-packages/peft/utils/save_and_load.py", line 445, in set_peft_model_state_dict
load_result = model.load_state_dict(peft_model_state_dict, strict=False, assign=True)
File "/raid/aryan/nightly-venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2584, in load_state_dict
raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for FluxTransformer2DModel:
size mismatch for x_embedder.lora_A.default_1.weight: copying a param with shape torch.Size([64, 64]) from checkpoint, the shape in current model is torch.Size([64, 128]).

I do believe that this should work as expected, allowing the depth control LoRA to work with N-step Hyper-SD LoRAs. This is a unique case that has probably never been investigated before. Not completely sure how we would handle this either :/ My initial thoughts are to expand the LoRA shapes as well, and set the weights of the linear layer corresponding to the depth control input to 0. This should effectively keep the control latent from interfering with the effect of Hyper-SD, so it will operate only on the denoising latent. Will experiment and let the results speak for whether this is something we should prioritize support for (as there are 10000+ available Flux LoRAs that might be compatible), and will let YiYi and Sayak comment on how best to handle this situation if it works as expected.

Are you facing any errors when trying to run inference with LoRAs, but without control LoRAs? Either way, I think the above-mentioned condition needs to be updated.
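To make the size mismatch above concrete, here is a minimal, self-contained sketch (toy module, hypothetical shapes mirroring the traceback: rank 64, x_embedder input features expanded from 64 to 128). It is not the diffusers code path, just an illustration of why load_state_dict rejects the old LoRA weight:

```python
import torch
import torch.nn as nn

rank = 64

# Stand-in for x_embedder's lora_A after the control LoRA expanded the base
# layer's in_features from 64 to 128: its weight has shape [rank, 128].
lora_A_in_model = nn.Linear(128, rank, bias=False)

# A lora_A weight trained against the original, unexpanded layer: [rank, 64].
lora_A_in_checkpoint = {"weight": torch.randn(rank, 64)}

try:
    # strict loading rejects the shape mismatch, just like in the traceback
    lora_A_in_model.load_state_dict(lora_A_in_checkpoint)
except RuntimeError as err:
    print(err)  # size mismatch for weight: [64, 64] in checkpoint vs [64, 128] in model
```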
I just tried using the normal Repro script:
First of all, thanks for your PR (#10182), @jonathanyin12! We appreciate it! I echo your thoughts on why we should support this, as it would be quite a big enablement!
Yeah, I think that would be the way to go here. The expanded input channels are initialized to zero anyway, so it won't interfere with the Hyper-SD LoRA. Would love to get @BenjaminBossan's thoughts here too. LMK if anything is unclear from the issue and the comments above.
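A minimal sketch of that idea (not the actual diffusers implementation): pad the LoRA's down-projection (lora_A) weight with zero columns up to the expanded in_features, so the extra control channels contribute nothing through this LoRA:

```python
import torch

def pad_lora_A(lora_A_weight: torch.Tensor, new_in_features: int) -> torch.Tensor:
    """Zero-pad a [rank, in_features] LoRA A weight along the input dimension."""
    rank, old_in_features = lora_A_weight.shape
    if new_in_features < old_in_features:
        raise ValueError("new_in_features must be >= the LoRA's original in_features")
    padded = lora_A_weight.new_zeros(rank, new_in_features)
    padded[:, :old_in_features] = lora_A_weight
    return padded

# Hyper-SD's x_embedder lora_A is [64, 64]; the expanded layer expects 128 inputs.
padded = pad_lora_A(torch.randn(64, 64), 128)
print(padded.shape)  # torch.Size([64, 128])
```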
#10182 doesn't completely solve this issue, so reopening.
An important thing to note about why our integration tests passed despite the bug in expanding shapes (now fixed, thanks to @jonathanyin12!): most LoRAs on the Hub don't train for the x_embedder layer.
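One quick way to check what a given Hub LoRA actually trains (an illustrative inspection only; raw checkpoints may use non-diffusers key names depending on the training tool) is to list the module prefixes in its state dict:

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors")
state_dict = load_file(path)

# Module names the LoRA touches; whether x_embedder (or its equivalent under the
# checkpoint's naming scheme) appears here decides whether expansion matters.
print(sorted({key.split(".")[0] for key in state_dict}))
```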
Yes, this part is unclear to me. Is the x_embedder of the base model changed (shape-wise) when the control LoRA is loaded, i.e. is it effectively a different base model?
It needs to be expanded because:
In the base model (assume no expansion), before the LoRA is loaded, the concerned layer is expanded like so:
diffusers/src/diffusers/loaders/lora_pipeline.py, lines 2359 to 2365 (commit d041dd5)
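Conceptually (a sketch, not the referenced diffusers code), that expansion replaces the layer with a wider Linear whose extra input columns start at zero, so the control channels initially have no effect:

```python
import torch
import torch.nn as nn

def expand_in_features(linear: nn.Linear, new_in_features: int) -> nn.Linear:
    """Return a copy of `linear` with more input features; the new columns are zero."""
    expanded = nn.Linear(
        new_in_features,
        linear.out_features,
        bias=linear.bias is not None,
        dtype=linear.weight.dtype,
        device=linear.weight.device,
    )
    with torch.no_grad():
        expanded.weight.zero_()
        expanded.weight[:, : linear.in_features] = linear.weight
        if linear.bias is not None:
            expanded.bias.copy_(linear.bias)
    return expanded

# e.g. Flux's x_embedder: 64 packed latent channels become 128 once the control
# latents are concatenated along the channel dimension
x_embedder = expand_in_features(nn.Linear(64, 3072), 128)
print(x_embedder.weight.shape)  # torch.Size([3072, 128])
```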
Additionally, I used @a-r-r-o-w's idea of expanding the LoRA state dict with zeros (minimal changes are in the diff). Test code:
from diffusers import FluxControlPipeline
from image_gen_aux import DepthPreprocessor
from diffusers.utils import load_image
from huggingface_hub import hf_hub_download
import torch
control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora", adapter_name="depth")
control_pipe.load_lora_weights(
hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"), adapter_name="hyper-sd"
)
control_pipe.set_adapters(["depth", "hyper-sd"])
prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")
processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(control_image)[0].convert("RGB")
image = control_pipe(
prompt=prompt,
control_image=control_image,
height=1024,
width=1024,
num_inference_steps=50, # when lowered the results are still garbage
guidance_scale=10.0,
generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")

Cc: @jonathanyin12
Okay, so technically it is a different base model, although just this one layer was adjusted. Yes, I don't really see any better option than to pad with zeros in that case.
I did a quick check of the diff, but at first glance I see no issue.
@jonathanyin12 can you check #10184 (comment)?
Describe the bug
I was trying out the FluxControlPipeline with the Control LoRA introduced in #9999, but had issues loading multiple LoRAs.
For example, if I load the depth LoRA first and then the 8-step LoRA, it errors on the 8-step LoRA; if I load the 8-step LoRA first and then the depth LoRA, it errors when loading the depth LoRA.
Reproduction
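A minimal reproduction sketch, assuming the same checkpoints used elsewhere in this thread (the depth Control LoRA plus the Hyper-SD 8-step LoRA):

```python
import torch
from diffusers import FluxControlPipeline
from huggingface_hub import hf_hub_download

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Loading the control LoRA first expands x_embedder's input features...
pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora", adapter_name="depth")

# ...so loading a regular LoRA afterwards fails with the size-mismatch RuntimeError
# (the reverse order errors when the depth LoRA is loaded, as described above).
pipe.load_lora_weights(
    hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"),
    adapter_name="hyper-sd",
)
```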
Logs
System Info
Who can help?
@a-r-r-o-w @sayakpaul