flux fill cannot use lora (flux turbo lora) #10184
Comments
I am not sure you can. You're trying to load a LoRA that was obtained on Flux.1-Dev into something different.

I thought that I could use it because I tried with ComfyUI first. It worked well with ComfyUI.

Cc: @a-r-r-o-w could be nice to try the LoRA expansion idea here as well.

Will have to take a look at what Comfy is doing to be sure they're not simply dropping the input LoRA layer (which requires expansion/shrinking) and using all the other layers - in this case it will work easily. Or perhaps they are not passing the channelwise-concatenated latents through the LoRA layer and just passing the denoising latents. I currently don't have the bandwidth to try this out, but will be sure to look into it soon!
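For the record, the "expansion" idea amounts to zero-padding the LoRA's down-projection so the extra conditioning channels contribute nothing. A minimal sketch with illustrative shapes (the rank and variable names are assumptions, not the diffusers implementation):

```python
import torch

# The turbo LoRA's x_embedder down-projection was trained with
# in_features=64 (Flux.1-Dev latents); Flux Fill's x_embedder expects 384
# channels (latents plus mask conditioning). Zero-padding the missing
# columns means the conditioning channels get no LoRA contribution and
# pass through the base weights untouched.
rank = 16                           # assumed rank, for illustration only
lora_A = torch.randn(rank, 64)      # trained down-projection
expanded_A = torch.zeros(rank, 384) # target module's in_features
expanded_A[:, :64] = lora_A         # copy trained weights; rest stays zero
```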
Valid.

I hope I can use turbo for flux fill soon :)
@sayakpaul https://github.com/nftblackmagic/catvton-flux/blob/main/tryon_inference_lora.py#L26

Code:

```python
import torch
from diffusers import FluxFillPipeline, FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "xiaozaa/flux1-fill-dev-diffusers",  # the official Flux-Fill weights
    torch_dtype=torch.bfloat16,
)

print("Start loading LoRA weights")
state_dict, network_alphas = FluxFillPipeline.lora_state_dict(
    pretrained_model_name_or_path_or_dict="xiaozaa/catvton-flux-lora-alpha",  # the tryon LoRA weights
    weight_name="pytorch_lora_weights.safetensors",
    return_alphas=True,
)

# Sanity-check that every key looks like a LoRA/DoRA parameter
is_correct_format = all("lora" in key or "dora_scale" in key for key in state_dict.keys())
if not is_correct_format:
    raise ValueError("Invalid LoRA checkpoint.")

FluxFillPipeline.load_lora_into_transformer(
    state_dict=state_dict,
    network_alphas=network_alphas,
    transformer=transformer,
)
```
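For context, the linked script then assembles the pipeline around the patched transformer. A minimal sketch of that continuation, assuming the standard component-override pattern (the base checkpoint name here is illustrative):

```python
# Sketch of the assumed continuation: build the Fill pipeline around the
# transformer that now carries the tryon LoRA.
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev",  # illustrative base checkpoint
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
```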
got error ;)

error message:

But loading fails. Also, I tried it this way, but it does not work either.
@kadirnar it won't work, as the LoRA you showed in the example was obtained on the Flux Fill checkpoint itself.
I tried this diff (0d96a89...ed91c533f) but the outputs are pure garbage. Wondering what the strategy here is.
Oh, we had to set the scales accordingly:

```python
import torch
from diffusers import FluxControlPipeline
from diffusers.utils import load_image
from huggingface_hub import hf_hub_download
from image_gen_aux import DepthPreprocessor

control_pipe = FluxControlPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
control_pipe.load_lora_weights("black-forest-labs/FLUX.1-Depth-dev-lora", adapter_name="depth")
control_pipe.load_lora_weights(
    hf_hub_download("ByteDance/Hyper-SD", "Hyper-FLUX.1-dev-8steps-lora.safetensors"), adapter_name="hyper-sd"
)
control_pipe.set_adapters(["depth", "hyper-sd"], adapter_weights=[0.85, 0.125])
control_pipe.enable_model_cpu_offload()

prompt = "A robot made of exotic candies and chocolates of different kinds. The background is filled with confetti and celebratory gifts."
control_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png")

processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(control_image)[0].convert("RGB")

image = control_pipe(
    prompt=prompt,
    control_image=control_image,
    height=1024,
    width=1024,
    num_inference_steps=8,
    guidance_scale=10.0,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("output.png")
```

^ should work. Make sure to install from the `expand-flux-lora` branch.
Thanks for handling this issue, I appreciate it. But when I run my example, it doesn't work. I edited diffusers directly with your commit inside my venv and ran the example, but it failed. ;)
Install the `expand-flux-lora` branch.
Thanks for the quick reply. I installed that branch and this is the new error:
I don't know if it's an issue with your local installation, but I just ran it and it worked. Please check rigorously that you have done the installation correctly:

```bash
pip uninstall diffusers -y
git clone https://github.com/huggingface/diffusers/
cd diffusers
git checkout expand-flux-lora
pip install -e .
```

@JakobLS also tried it out in #10227 (comment) and has confirmed it's working. So, I am not sure what's wrong in your case.
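A quick way to confirm the editable install is the one actually being imported (an illustrative check, not from the thread):

```python
import diffusers

# If the editable install from the branch is active, this should print a
# dev version string and a file path inside the cloned repository.
print(diffusers.__version__)
print(diffusers.__file__)
```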
Oh, it worked! I just installed it with the steps above. Thanks :)
Describe the bug
I want to use the Flux Fill pipeline with the turbo LoRA, but when I load the pipeline and then load the LoRA model, it gives an error.
Reproduction
Logs
NotImplementedError: Only LoRAs with input/output features higher than the current module's input/output features are currently supported. The provided LoRA contains in_features=64 and out_features=3072, which are lower than module_in_features=384 and module_out_features=3072. If you require support for this please open an issue at https://github.com/huggingface/diffusers/issues.
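For reference, the 384 vs. 64 mismatch comes from Flux Fill's x_embedder consuming channelwise-concatenated latents (64 latent channels plus the mask/masked-image conditioning channels), while the turbo LoRA was trained against Flux.1-Dev's 64-channel input. An illustrative reconstruction of the check behind this message (not the actual diffusers source; running it reproduces the error by design):

```python
# Shapes taken from the error message above.
lora_in_features, lora_out_features = 64, 3072        # turbo LoRA, trained on Flux.1-Dev
module_in_features, module_out_features = 384, 3072   # Flux Fill x_embedder

# Expansion only worked when the LoRA's features exceeded the module's,
# so a smaller-than-module LoRA was rejected.
if lora_in_features < module_in_features or lora_out_features < module_out_features:
    raise NotImplementedError(
        "Only LoRAs with input/output features higher than the current "
        "module's input/output features are currently supported."
    )
```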
System Info
latest (GitHub version of diffusers), Python 3.10, Ubuntu with an NVIDIA GPU
Who can help?
@sayakpaul