
LoraUpDownModel has no attribute up_model #23

Open
cppietime opened this issue Apr 6, 2023 · 6 comments
@cppietime
While attempting to use the extension with a LoCon model as a LoRA, generating a prompt produces the following stack trace:

Traceback (most recent call last):
  File "F:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "F:\StableDiffusion\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 642, in process_images_inner
    uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\processing.py", line 587, in get_conds_with_caching
    cache[1] = function(shared.sd_model, required_prompts, steps)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\prompt_parser.py", line 140, in get_learned_conditioning
    conds = model.get_learned_conditioning(texts)
  File "F:\StableDiffusion\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
    c = self.cond_stage_model(c)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 229, in forward
    z = self.process_tokens(tokens, multipliers)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 254, in process_tokens
    z = self.encode_with_transformers(tokens)
  File "F:\StableDiffusion\stable-diffusion-webui\modules\sd_hijack_clip.py", line 302, in encode_with_transformers
    outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
    return self.text_model(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
    encoder_outputs = self.encoder(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
    layer_outputs = encoder_layer(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 379, in forward
    hidden_states, attn_weights = self.self_attn(
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 268, in forward
    query_states = self.q_proj(hidden_states) * self.scale
  File "F:\StableDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 305, in lora_Linear_forward
    lora_apply_weights(self)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 273, in lora_apply_weights
    self.weight += lora_calc_updown(lora, module, self.weight)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 612, in lora_calc_updown
    updown = rebuild_weight(module, target)
  File "F:\StableDiffusion\stable-diffusion-webui\extensions\a1111-sd-webui-locon\scripts\main.py", line 536, in rebuild_weight
    up = module.up_model.weight.to(orig_weight.device, dtype=orig_weight.dtype)
AttributeError: 'LoraUpDownModule' object has no attribute 'up_model'

Checkpoint: anythingV3_fp16.ckpt [812cd9f9d9]
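For context on the failure mode: the last frame shows `rebuild_weight` assuming every module has an `up_model` attribute, but a module loaded through a different code path (e.g. a LoCon layer stored with other key names) may expose its up-weight under another attribute. A defensive lookup like the sketch below would surface a clearer error instead of the raw `AttributeError`. This is an illustrative sketch only; the helper name `get_up_weight` and the candidate attribute names are assumptions, not the extension's actual code.

```python
# Hypothetical sketch, not the extension's real implementation: try the
# attribute names a LoRA/LoCon module might use for its up-projection weight
# before giving up, so the failure message names the offending module type.

def get_up_weight(module, orig_weight):
    """Return the module's up weight moved to orig_weight's device/dtype."""
    for name in ("up_model", "up", "lora_up"):  # candidate names; assumption
        sub = getattr(module, name, None)
        if sub is not None and hasattr(sub, "weight"):
            return sub.weight.to(orig_weight.device, dtype=orig_weight.dtype)
    raise AttributeError(
        f"{type(module).__name__} has no recognized up-weight attribute"
    )
```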

@KohakuBlueleaf (Owner)

Do you have any other LoRA extensions installed?
Many of them are not compatible with my extension.

@cppietime (Author)

My extensions are:
The built-in:

  • LORA
  • LDSR
  • ScuNET
  • SwinIR
  • prompt-bracket-checker

And:

  • sd-webui-controlnet
  • sd_civitai_extension

@KohakuBlueleaf (Owner)

What's your LoRA model?
I may need the file that causes the error to find where the bug is.

@cppietime (Author)

I tried both versions of this model (NSFW): https://civitai.com/models/19987/beegirlz
Both result in the error. After attempting to include this LoRA in a prompt, all subsequent prompts produce the same error until the webui is restarted, even after removing the LoRA from the prompt.
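The error persisting until restart is consistent with the traceback: `lora_apply_weights` patches `self.weight` in place from the hooked forward, so once a bad LoRA is in the loaded list, every subsequent forward retries the failing patch. A generic restore-on-failure pattern (a sketch under assumed names, not the webui's actual code) illustrates how a failed patch could be rolled back so later prompts start from clean weights:

```python
# Illustrative sketch only: apply_with_rollback and calc_updown are
# hypothetical names, not the webui's API. The idea: snapshot the weight
# before the in-place edit and restore it if the patch raises, so a
# malformed LoRA cannot poison every generation that follows.
import copy

def apply_with_rollback(layer, calc_updown):
    backup = copy.deepcopy(layer.weight)  # snapshot before mutating
    try:
        layer.weight = layer.weight + calc_updown(layer.weight)
    except Exception:
        layer.weight = backup  # roll back so later prompts start clean
        raise
```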

@Gourieff commented May 2, 2023

I'm seeing the same error, trying to use it with this LoRA: https://civitai.com/models/27615/delicate-armor?modelVersionId=33064

@KohakuBlueleaf (Owner)

@Gourieff I would suggest you use the new extension https://github.com/KohakuBlueleaf/a1111-sd-webui-lycoris
