[Feature Request]: Implementing lora-ctl #185
38 comments · 8 replies
-
Hi there, a lot of people have been asking for this, so it sounds very interesting, but I'm not sure where to start. Sorry for the ping @cheald; not sure if you've heard about this repo before, but do you think the changes discussed in lllyasviel#68 (comment) could be applied here as well (probably on the dev_upstream branch)? Or to the forge-reforge branch of the extension? I can help if needed.
-
Okay, so model weights are a combination of the base model plus deltas computed from a LoRA (or several). There is a procedure that takes a LoRA, loads its state dict, figures out which keys map to which model keys, performs any matrix multiplications necessary to derive the full weight-delta matrices, and finally adds a fraction of those deltas (via the strength parameter) to the underlying weights, with the final sum being used for inference. The concept is that on each step, new weights are computed. This is... well, not high performance in the A1111 version, because it basically just invalidates the A1111 LoRA cache so that new weights are recomputed on each step. Last I looked, one of the things Forge did was precompute the final weights before inference started, which made the A1111 loractl implementation a non-starter (since it worked via cache invalidation during the inference loop). A more clever implementation could do something like:
```python
def apply_step_lora(base_weights, lora_weights, next_weight, previous_weight):
    # Back out the previous step's LoRA contribution, then add this step's.
    return base_weights - (previous_weight * lora_weights) + (next_weight * lora_weights)
```
Once you have that, all you need is some way to say "linearly interpolate from this weight to that weight over these steps", and it's trivial to compute the LoRA weight at any given step. This would run massively faster than the simple cache-bust option (which results in the base weights being copied back from cache and the LoRAs reapplied on top of them), but it would require that the LoRA weights be loaded and kept available during inference, and that there be a mechanism within the inference loop with the opportunity to modify the model weights before each step. Additionally, text encoder conditionings would need to be recomputed if any text encoder weights are modified by this procedure. I haven't looked into the LoRA implementation in this repo, but if I were building a loractl-like feature natively, something like the above is where I'd start.
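To make the "linearly interpolate from this weight to that weight" part concrete, here is a minimal sketch of such a schedule; the function name and signature are illustrative, not taken from any existing codebase:

```python
def make_linear_schedule(start_weight, end_weight, start_step, end_step):
    """Return a function that maps a step index to a LoRA weight."""
    def weight_at(step):
        if step <= start_step:
            return start_weight
        if step >= end_step:
            return end_weight
        t = (step - start_step) / (end_step - start_step)
        return start_weight + t * (end_weight - start_weight)
    return weight_at

# Example: ramp from 0.75 at step 1 down to 0.38 at step 5, then hold 0.38.
schedule = make_linear_schedule(0.75, 0.38, 1, 5)
```

Combined with apply_step_lora above, the per-step cost is two multiply-adds over the delta matrices instead of a full weight-cache restore.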
-
Many thanks! Warning, long post ahead. LoRAs on reForge are loaded like this: in ldm_patched/modules/sd.py we define load_lora_for_models, which is:
Then in modules/networks.py we apply the LoRA logic.
The functions that sd.py uses are model_lora_keys_unet, model_lora_keys_clip and load_lora. model_lora_keys_unet in ldm_patched.modules.lora is:
model_lora_keys_clip in the same file is:
load_lora in the same file is:
In networks.py we have:
load_torch_file is in ldm_patched.modules.utils.
That's the complete process of how we load LoRAs. This is for the dev_upstream branch, which carries the upstream Comfy backend changes; the main branch is a bit different and more akin to the original Forge (which is basically the Comfy backend from 7 months ago). With all this information, do you think we can reach something?
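As a rough sketch of how those pieces fit together (based on the upstream ComfyUI comfy/sd.py that dev_upstream tracks; exact names and signatures may differ between branches):

```python
from ldm_patched.modules import lora as lora_module

def load_lora_for_models(model, clip, lora_sd, strength_model, strength_clip):
    # Map LoRA state-dict keys onto model keys for the UNet and text encoder.
    key_map = lora_module.model_lora_keys_unet(model.model, {})
    key_map = lora_module.model_lora_keys_clip(clip.cond_stage_model, key_map)
    # Resolve the LoRA state dict into per-key weight patches.
    loaded = lora_module.load_lora(lora_sd, key_map)
    # Clone the patchers and register the patches at the requested strengths;
    # the actual weight math happens later, when the model gets patched.
    new_model = model.clone()
    new_model.add_patches(loaded, strength_model)
    new_clip = clip.clone()
    new_clip.add_patches(loaded, strength_clip)
    return (new_model, new_clip)
```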
-
Okay, so the process right now is:
Instead, you'd need a way to remove patches from the ModelPatcher and replace them with reweighted copies on each step, then repatch the model and use that set of patched weights for the next inference step. The model is patched during LoadedModel.model_load: you'll have to trace back where this is used, and add a mechanism somewhere inside the inference loop that alters the patch weights and re-patches the model per step, before each unet/text encoder call. This is likely not a trivial process (which is why I haven't implemented loractl in forge/comfyui!), but I encourage you to give it a go!
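As a rough illustration of that idea, here is a hypothetical per-step repatch helper. It leans on ModelPatcher methods that exist upstream (clone, add_patches, patch_model, unpatch_model); the schedule argument, the patches reset, and the hook into the sampling loop are assumptions, not existing reForge API:

```python
def repatch_for_step(patcher, loaded_patches, step, schedule):
    # Restore the unpatched base weights, then re-register the LoRA patches
    # at this step's strength and patch the model again.
    weight = schedule(step)  # e.g. the linear schedule sketched earlier
    patcher.unpatch_model()
    patcher.patches = {}     # assumption: drop the old patch registrations
    patcher.add_patches(loaded_patches, weight)
    patcher.patch_model()
```

Done per step, this keeps the loaded patches in memory for the whole generation, but avoids the full cache-bust that the A1111 extension relies on.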
-
Slightly off-topic, but there is a similar ComfyUI node that allows for dynamic LoRA weight adjustment: https://github.com/asagi4/comfyui-prompt-control
-
Pretty interesting. Something like how it's implemented there should in theory work (the logic, I mean), but that extension does it for Comfy. So something like that, but with a UI for reForge, would maybe work?
-
I have added preliminary support for some features based on https://github.com/asagi4/comfyui-prompt-control in the latest commits, if someone wants to test. I still have to add some features. LoRA scheduling seems to work. LBW is probably redundant with the other extension.
-
Whoa! I'll definitely test this out ASAP. Is the syntax the same as the loractl extension, or as in the Comfy prompting link you shared? Edit: OK, I see that you have some explanation in the README now, but I'm still lost on how to use it. For example, how could I replicate the results from this loractl syntax?
Let's say it's a 10-step generation, for example. Grimace LoRA: 0.75 strength @ step 1 -> 0.38 strength @ step 5 and onwards. Result from A1111 using the loractl extension:
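(For reference, the loractl extension writes that ramp as weight@step pairs inside the LoRA tag; with absolute step numbers and an illustrative LoRA name, it would be something like `<lora:Grimace:0.75@1,0.38@5>`, which interpolates linearly from 0.75 at step 1 down to 0.38 at step 5 and holds 0.38 for the remaining steps.)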
-
@altoiddealer Oh, for now just LoRA scheduling and split are added; the LoRA block weight part isn't working from this extension (it works via the other LoRA block weight extension). I removed that part of the lora-ctl extension (LBW) for now; I still have to add the rest of the ctl features.
-
Please confirm or clarify… ReForge now supports:
Does not yet support:
-
For the first, honestly I'm not sure how it works, haha; it should, based on the Comfy implementation.
For the second, I removed the built-in extension, as https://github.com/hako-mikan/sd-webui-lora-block-weight now works fine out of the box. For different LoRA strengths at specific steps, yes, though I think I have added preliminary support for dyn in extra_networks_lora.py, since that is how it works in Comfy prompt control. I will try to port the extension directly, now that I know a bit more about how to manage these things.
-
Okay, I've added them as built-in extensions (with the required changes to make them work). Both should work, but any testing is welcome, or let me know if you find any issues.
-
If you get this working, you will have implemented one of the most in-demand features since the advent of Forge... people have been dying for this for well over a year. Now, from my tests, it does not seem to be working, or at least not if it is supposed to accept the same syntax as the loractl extension. I get identical output if I change the ending step weights and/or timing.
Edit: Actually, I get the same results no matter what values I change with that syntax.
-
Okay, pushed some commits that should fix some issues. I made this X/Y grid to show how it changes.
-
Dude... this is such an unexpected gift. AMAZING. If it is feasible, you should consider pushing a PR to Forge, at your convenience.
-
@Panchovix I'm glad you are working on implementing the lora-ctl extension into Forge, buddy. It really is a game-changer of a tool, and some of us have been dying to have it work on Forge, since it's the only tool that can make those over-trained LoRAs usable again!
-
Lora-ctl IS working as of this commit, so like me you can just git checkout that commit and stay on it.
-
Mate, I'm actually using Forge, not reForge, because of the Flux support in Forge; if I'm not wrong, reForge doesn't have Flux yet. I'll just wait for Panchovix to port the lora-ctl extension into Forge or something.
-
Just wanted to say, I've got some code that works, but it is pretty buggy and far from optimal; it essentially involves manually hijacking the sampler by wrapping the function:
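The general shape of such a hijack might look like the following; every name here (the samplers module, the callback signature, repatch_for_step) is a placeholder standing in for whatever reForge actually exposes, so treat it as a sketch of the approach rather than working extension code:

```python
def hijack_sampler(samplers_module, loaded_patches, schedule):
    original_sample = samplers_module.sample

    def patched_sample(model, *args, callback=None, **kwargs):
        # Chain a per-step callback that re-weights the LoRA patches
        # (see the repatch_for_step sketch above) before delegating to
        # whatever callback was already installed.
        def stepping_callback(step, x0, x, total_steps):
            repatch_for_step(model, loaded_patches, step, schedule)
            if callback is not None:
                callback(step, x0, x, total_steps)
        return original_sample(model, *args, callback=stepping_callback, **kwargs)

    samplers_module.sample = patched_sample
```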
-
Interesting! Does this work as a standalone extension, or does it depend on the lora block weight extension for syntax?
-
I've got the block weight extension disabled, and I don't think it uses any syntax from it, but as a result I'm also not sure if it's compatible with it. I've not really used block weight, so I'm not sure how to test that.
-
Don't worry, that's ideal, by the way. I have to add compatibility with it after lora control works by default (since lora block weight by itself also works). My idea is to add lora control as a built-in extension, so can you send a PR with those changes (for the lora-ctl extension) to https://github.com/Panchovix/sd_webui_loractl_reforge? I want to test how it works as an individual extension before adding it as a built-in one, check whether any extra-networks code is required, etc. I wanted to add lora block weight as a default/built-in extension too, but it stops LoRAs from being kept in the cache between consecutive generations, until I do some re-implementation.
-
I think it WAS working... looking at the two images I made in the same session: the settings and seed were identical, and the only difference between the prompts was the ending LoRA weight values (same starting values). I can't seem to find which commit this was working on, now that I've updated.
-
Metadata says... Edit: OK, that's this commit... EDIT 2: YES, ACTUALLY WORKING IN THIS COMMIT. EDIT 3: NO, I WAS MISTAKEN :)
-
@altoiddealer So it was working for you on f2c45ef, basically? Can you try using it with the lora block weight extension disabled, and just lora-ctl?
-
I'm not seeing it work on 1075057; I generated quickly with a random prompt. With my commit:
Then with 1075057:
-
Alright... here is the actual scoop on this after further testing: it's not working. What's happening is that LoRAs seem 100% non-functional on the first generation; then on subsequent generations, the LoRAs are all at 1.0. Hence why I thought changing the weights was actually changing the output :P But the output is just changing due to some strange bug.
-
Okay, I went ahead and applied your changes, @1rre, to the separate extension. Can you guys check whether it works as an independent extension, before I add it as a built-in extension?
-
Okay, I can confirm it works. It is a bit buggy at the moment, as @1rre said, but I think it can be worked on as a base. It is now a built-in extension, but it won't be enabled by default like the original extension, in case anything happens. Also, I will move this to Discussions, since closing the issue won't help, but keeping it open as not implemented isn't correct either.
-
As a drive-by bystander, what's the status on this? I saw it get disabled/re-enabled via commits recently, but I see nothing in my built-in extensions (as of the 12-12 build). Links to that git location are all dead. 🙏
-
Is there an existing issue for this?
What would your feature do?
lllyasviel/issues/253
lllyasviel/issues/68
This is likely not a simple feature addition...
There is a very super cool extension called sd-webui-loractl, which you can read about in the linked issues from Forge main... it allows LoRA weights to be recalculated on each step, with a syntax that can be used in the prompt to ramp the weights up or down.
It's quite amazing; really the only feature I still use exclusively in A1111.
lllyasviel had marked this issue with the High Priority flag before going unresponsive on Forge development.
The author of the loractl extension analyzed the situation and provided key details on what was missing and what would be needed to make it work. They also seemed very responsive and open to answering questions, maybe even pulling some weight to help get it working in Forge.
author's comments
additional comment in this thread
If you start looking for something ambitious to add in to this project... here you go! :)
Proposed workflow
Additional information
No response