
[Bug]: RuntimeError: Given groups=1, weight of size [320, 5, 3, 3], expected input[2, 9, 64, 64] to have 5 channels, but got 9 channels instead #10995

Closed
itinance opened this issue Jun 3, 2023 · 2 comments
Labels
bug-report Report of a bug, yet to be confirmed

itinance commented Jun 3, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

Image generation crashes the application with no error handling in the UI. Reproducible (always).

Steps to reproduce the problem

  1. Start the application by running ./webui.sh
  2. Open the web interface at http://127.0.0.1:7860/
  3. Type in a prompt (anything, e.g. "funny people in a pink forest")
  4. Click "Generate"
  5. The frontend shows no error
  6. In the terminal, the application crashes with a long call stack (see below)

What should have happened?

The application should not crash; it should either show an error message or generate an image.

Commit where the problem happens

b6af0a3

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

macOS

What device are you running WebUI on?

Other GPUs

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

./webui.sh

List of extensions

None

Console logs

Error completing request
Arguments: ('task(lny03dyth0f10xc)', 'lucky people', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/processing.py", line 610, in process_images
    res = process_images_inner(p)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/processing.py", line 728, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/processing.py", line 976, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 383, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 383, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 137, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 114, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 140, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/sd_hijack_utils.py", line 26, in __call__
    return self.__sub_func(self.__orig_func, *args, **kwargs)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/modules/sd_hijack_unet.py", line 45, in apply_model
    return orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs).float()
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1339, in forward
    out = self.diffusion_model(xc, t, context=cc)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 86, in forward
    x = layer(x)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 415, in lora_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lora(self, input)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/Users/rzrhtz56/workspace/ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [320, 5, 3, 3], expected input[2, 9, 64, 64] to have 5 channels, but got 9 channels instead

Additional information

I tried with Python 3.10 and 3.11; the same error happens on both. Models are installed in the models directory.
After clicking the "Generate" button, the application crashes.
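For context on the error itself: `F.conv2d` requires the input's channel count to equal `weight.shape[1] * groups`. Here the loaded UNet's first convolution has weight shape [320, 5, 3, 3] (it expects 5 input channels), while the sampler passed a 9-channel tensor, which points to a mismatch between the checkpoint and the conditioning the webui prepared for it. A minimal sketch of the shape rule (the `check_conv_channels` helper is hypothetical, for illustration only, not part of webui or PyTorch):

```python
# Hypothetical helper, illustration only -- not webui or PyTorch code.
# F.conv2d requires: input channels == weight.shape[1] * groups.
def check_conv_channels(input_shape, weight_shape, groups=1):
    n, c_in, h, w = input_shape
    c_out, c_in_per_group, kh, kw = weight_shape
    expected = c_in_per_group * groups
    if c_in != expected:
        raise RuntimeError(
            f"Given groups={groups}, weight of size {list(weight_shape)}, "
            f"expected input{list(input_shape)} to have {expected} channels, "
            f"but got {c_in} channels instead"
        )
    # Spatial size is preserved for the UNet's 3x3 convs (stride 1, padding 1).
    return (n, c_out, h, w)

# The shapes from the traceback: 9-channel input vs. a 5-input-channel weight.
try:
    check_conv_channels((2, 9, 64, 64), (320, 5, 3, 3))
except RuntimeError as e:
    print(e)
```

With a matching 5-channel input the check passes, which is why the maintainer's question about which model (and config) is loaded is the right place to start.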

@itinance itinance added the bug-report Report of a bug, yet to be confirmed label Jun 3, 2023
akx (Collaborator) commented Jun 18, 2023

What model are you using? See e.g. #5372

@akx akx closed this as not planned Won't fix, can't repro, duplicate, stale Jun 18, 2023
DobleV55 commented May 8, 2024

Any solution to this?
