
Modular backend - inpaint #6643

Merged: 13 commits into invoke-ai:main on Jul 29, 2024

Conversation

@StAlKeR7779 (Contributor) commented Jul 21, 2024

Summary

Code for inpainting and inpaint-model handling, extracted from #6577.
It is split into two extensions, as briefly discussed before, so please hold for discussion of this implementation approach.

Related Issues / Discussions

#6606
https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d

QA Instructions

Run with and without the USE_MODULAR_DENOISE environment variable set.
Compare outputs between the two backends in these cases:

  • Normal generation on inpaint model
  • Inpainting on inpaint model
  • Inpainting on normal model

Merge Plan

None.
If you think there should be some kind of tests, feel free to add them.

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable)
  • Documentation added / updated (if applicable)

@github-actions bot added the python, invocations, and backend labels on Jul 21, 2024
@RyanJDick (Collaborator) left a comment

I tried to test, but ran into a device mismatch error. I expect that it will be easy to reproduce, but if not let me know and I can provide more details.

Also, in addition to the test cases that you have already listed, we should make sure to test both gradient-mask and non-gradient-mask inpainting.

@lstein (Collaborator) left a comment

```python
# invokeai/app/invocations/denoise_latents.py
    def invoke(self, context: InvocationContext) -> LatentsOutput:
        if os.environ.get("USE_MODULAR_DENOISE", False):
            return self._new_invoke(context)
        else:
            return self._old_invoke(context)
```

During the transition period, why not make USE_MODULAR_DENOISE into an app configuration variable? This will then automatically set a config parameter based on an environment variable named INVOKEAI_USE_MODULAR_DENOISE and be consistent with how environment variables are handled elsewhere.
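
For illustration, a minimal sketch of what such a flag could look like with pydantic-settings; the class name below is hypothetical, not InvokeAI's actual config code:

```python
from pydantic_settings import BaseSettings, SettingsConfigDict


class AppConfigSketch(BaseSettings):
    """Hypothetical config fragment; the real app config has many more fields."""

    model_config = SettingsConfigDict(env_prefix="INVOKEAI_")

    # Temporary developer-only flag for the modular denoise backend.
    use_modular_denoise: bool = False


config = AppConfigSketch()
print(config.use_modular_denoise)  # True when INVOKEAI_USE_MODULAR_DENOISE=1 is set
```

With something like this, the `invoke()` dispatch above could read the flag from the app config instead of calling `os.environ.get` directly.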

@lstein (Collaborator) commented Jul 23, 2024

Ran into an apparent bug. When inpainting with stable-diffusion-xl-1.0-inpainting-0.1:

$ USE_MODULAR_DENOISE="1" invokeai-web
... multiple log lines omitted...
  File "/home/lstein/Projects/InvokeAI/invokeai/app/services/session_processor/session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "/home/lstein/Projects/InvokeAI/invokeai/app/invocations/baseinvocation.py", line 289, in invoke_internal
    output = self.invoke(context)
  File "/home/lstein/Projects/InvokeAI/invokeai/app/invocations/denoise_latents.py", line 722, in invoke
    return self._new_invoke(context)
  File "/home/lstein/invokeai-main/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/usr/lib/python3.10/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/home/lstein/Projects/InvokeAI/invokeai/app/invocations/denoise_latents.py", line 818, in _new_invoke
    result_latents = sd_backend.latents_from_embeddings(denoise_ctx, ext_manager)
  File "/home/lstein/Projects/InvokeAI/invokeai/backend/stable_diffusion/diffusion_backend.py", line 45, in latents_from_embeddings
    ext_manager.run_callback(ExtensionCallbackType.PRE_DENOISE_LOOP, ctx)
  File "/home/lstein/Projects/InvokeAI/invokeai/backend/stable_diffusion/extensions_manager.py", line 52, in run_callback
    cb.function(ctx)
  File "/home/lstein/Projects/InvokeAI/invokeai/backend/stable_diffusion/extensions/inpaint.py", line 77, in init_tensors
    raise ValueError("InpaintExt should be used only on normal models!")
ValueError: InpaintExt should be used only on normal models!

No such issue when USE_MODULAR_DENOISE is false. I checked the config.json for the unet, and it is 9 channels as expected for an inpainting model.
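
For reference, a generic way to confirm the UNet's input-channel count from its config.json (standard library only; the path below is hypothetical and depends on where the model is installed):

```python
import json
from pathlib import Path

# Hypothetical location; point this at the installed model's unet/config.json.
config_path = Path("models/sdxl/stable-diffusion-xl-1.0-inpainting-0.1/unet/config.json")

unet_config = json.loads(config_path.read_text())

# Inpainting UNets take 9 input channels (4 latent + 4 masked-image latent + 1 mask);
# standard text-to-image UNets take 4.
print(unet_config["in_channels"])
```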

@StAlKeR7779 (Contributor, Author) commented:

> Ran into an apparent bug. When inpainting with stable-diffusion-xl-1.0-inpainting-0.1: […] ValueError: InpaintExt should be used only on normal models! No such issue when USE_MODULAR_DENOISE is false.

Could you check whether unet_config.variant is correct for this model?
Currently the code selects between InpaintExt and InpaintModelExt based on unet_config.variant, while the extension internally checks in_channels (that check is what raises the error you see).
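
To illustrate the mismatch being described, here is a self-contained sketch; the class and function names are stand-ins based on the traceback and this discussion, not the PR's actual code:

```python
from dataclasses import dataclass


@dataclass
class FakeUNet:
    """Stand-in for the loaded UNet; only the input-channel count matters here."""
    in_channels: int


class InpaintExtSketch:
    """Models the check that raised in the traceback above (hypothetical name)."""

    def init_tensors(self, unet: FakeUNet) -> None:
        if unet.in_channels != 4:
            raise ValueError("InpaintExt should be used only on normal models!")


class InpaintModelExtSketch:
    """Counterpart for 9-channel inpaint models (hypothetical name)."""

    def init_tensors(self, unet: FakeUNet) -> None:
        if unet.in_channels != 9:
            raise ValueError("InpaintModelExt should be used only on inpaint models!")


def select_inpaint_extension(variant: str):
    """Selection is driven by the config's variant field, not by the UNet itself."""
    return InpaintModelExtSketch() if variant == "inpaint" else InpaintExtSketch()


# A model whose config says "normal" but whose UNet actually has 9 input channels
# reproduces the error above: selection picks the normal-model extension, and the
# channel check then rejects the UNet.
ext = select_inpaint_extension(variant="normal")
try:
    ext.init_tensors(FakeUNet(in_channels=9))
except ValueError as err:
    print(err)  # InpaintExt should be used only on normal models!
```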

@lstein (Collaborator) commented Jul 23, 2024

> Could you check whether unet_config.variant is correct for this model? […]

You called it correctly. The model was configured as "normal"; changing its variant to "inpaint" fixed the issue.

Now I have to figure out why the variant was set to normal - maybe an issue with the prober.
[edit] Reinstalling from HuggingFace autoprobed the variant as "inpaint" correctly. Probably an issue with an old install.

@RyanJDick (Collaborator) left a comment

It might make more sense to split the CreateGradientMaskInvocation bug fix out into its own PR. What do you think? This would simplify the testing scope a bit, and it will certainly be nice if a rollback is necessary. I'll leave it up to you. If we keep the bug fix in this PR, can you update the PR description to reflect that it also fixes a bug in CreateGradientMaskInvocation - which will have a small impact on behavior.

Review threads resolved on invokeai/app/invocations/denoise_latents.py (×2) and invokeai/app/invocations/create_gradient_mask.py (outdated).
@RyanJDick (Collaborator) commented:

We need to test all of the following cases for regression against main before merging (a parametrization sketch of this matrix follows the list):

  • _old_invoke, gradient mask, inpaint model
  • _old_invoke, gradient mask, non-inpaint model
  • _old_invoke, binary mask, inpaint model
  • _old_invoke, binary mask, non-inpaint model
  • _old_invoke, normal generation with an inpainting model
  • _new_invoke, gradient mask, inpaint model
  • _new_invoke, gradient mask, non-inpaint model
  • _new_invoke, binary mask, inpaint mode
  • _new_invoke, binary mask, non-inpaint mode
  • _new_invoke, normal generation with an inpainting model
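
A hedged sketch of how this matrix could be parametrized; the test and the skip are placeholders, since no automated harness for this comparison exists in the PR:

```python
import pytest

# The ten regression cases listed above, as (invoke_path, mask_kind, model) tuples;
# mask_kind=None means normal generation with no mask.
CASES = [
    ("_old_invoke", "gradient", "inpaint"),
    ("_old_invoke", "gradient", "non-inpaint"),
    ("_old_invoke", "binary", "inpaint"),
    ("_old_invoke", "binary", "non-inpaint"),
    ("_old_invoke", None, "inpaint"),
    ("_new_invoke", "gradient", "inpaint"),
    ("_new_invoke", "gradient", "non-inpaint"),
    ("_new_invoke", "binary", "inpaint"),
    ("_new_invoke", "binary", "non-inpaint"),
    ("_new_invoke", None, "inpaint"),
]


@pytest.mark.parametrize("invoke_path, mask_kind, model", CASES)
def test_denoise_regression(invoke_path, mask_kind, model):
    # A real harness would run this case on the branch and on main and compare
    # the resulting latents/images; until then, this remains a manual QA checklist.
    pytest.skip("manual QA matrix; no automated harness in this PR")
```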

@RyanJDick (Collaborator) commented Jul 26, 2024

> During the transition period, why not make USE_MODULAR_DENOISE into an app configuration variable? This will then automatically set a config parameter based on an environment variable named INVOKEAI_USE_MODULAR_DENOISE and be consistent with how environment variables are handled elsewhere.

I don't feel strongly either way. But, I didn't suggest it earlier since it's intended to be short-lived and for developers only, so I didn't want to have to worry about handling config migrations.

@StAlKeR7779 mentioned this pull request Jul 26, 2024

@StAlKeR7779 (Contributor, Author) commented Jul 26, 2024

> We need to test all of the following cases for regression against main before merging: […]

Ran all these cases on an SD1 model and compared the results; everything looks OK.

@RyanJDick enabled auto-merge July 29, 2024 14:15
@RyanJDick merged commit 2ad13ac into invoke-ai:main Jul 29, 2024
14 checks passed
@StAlKeR7779 deleted the stalker7779/modular_inpaint branch July 29, 2024 14:32
blessedcoolant added a commit that referenced this pull request Jul 31, 2024
## Summary

The gradient mask node outputs a mask tensor with values in the range [-1, 1], which is an unexpected range for a mask.
The denoise node handles it in a way that translates it to a [0, 2] mask, which looks even more wrong.
From the discussion with @dunkeroni, my understanding is that he expected negative values to be treated the same as 0, so clamping the values does not change the intended node logic.
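
A minimal illustration of the range issue and the clamping fix described above; the +1 shift is only a stand-in for whatever node arithmetic produced the [0, 2] range, not the actual code:

```python
import torch

# Gradient mask as produced: values span [-1, 1] instead of the expected [0, 1].
mask = torch.linspace(-1.0, 1.0, steps=5)  # [-1.0, -0.5, 0.0, 0.5, 1.0]

# Downstream handling effectively shifted this range to [0, 2].
shifted = mask + 1.0  # [0.0, 0.5, 1.0, 1.5, 2.0]

# Clamping keeps the intended semantics: negative values behave the same as 0.
clamped = mask.clamp(0.0, 1.0)  # [0.0, 0.0, 0.0, 0.5, 1.0]
print(shifted, clamped)
```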

## Related Issues / Discussions

#6643 

## QA Instructions

\-

## Merge Plan

\-

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a
changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_