Modular backend - inpaint #6643
Conversation
…aint model without source image Co-Authored-By: Ryan Dick <[email protected]>
I tried to test, but ran into a device mismatch error. I expect that it will be easy to reproduce, but if not, let me know and I can provide more details.
Also, in addition to the test cases you have already listed, we should make sure to test both gradient-mask and non-gradient-mask inpainting.
```python
# invokeai/app/invocations/denoise_latents.py
def invoke(self, context: InvocationContext) -> LatentsOutput:
    if os.environ.get("USE_MODULAR_DENOISE", False):
        return self._new_invoke(context)
    else:
        return self._old_invoke(context)
```
During the transition period, why not make USE_MODULAR_DENOISE into an app configuration variable? It would then automatically set a config parameter based on an environment variable named INVOKEAI_USE_MODULAR_DENOISE, and be consistent with how environment variables are handled elsewhere.
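A minimal sketch of what such a flag could look like, assuming a pydantic-settings style configuration class; the class name, field name, and env prefix here are illustrative, not InvokeAI's actual config code:

```python
# Hypothetical sketch: a boolean config field populated from an environment
# variable carrying the INVOKEAI_ prefix (all names here are illustrative).
from pydantic_settings import BaseSettings, SettingsConfigDict


class AppConfig(BaseSettings):
    model_config = SettingsConfigDict(env_prefix="INVOKEAI_")

    # Set via INVOKEAI_USE_MODULAR_DENOISE=1 in the environment.
    use_modular_denoise: bool = False


config = AppConfig()
if config.use_modular_denoise:
    print("Using the modular denoise backend")
```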
Ran into an apparent bug. When inpainting with
No such issue when
Could you debug if
You called it correctly. The model was configured as "normal". Changing its variant to "inpaint" fixed the issue. Now I have to figure out why the variant was set to normal - maybe an issue with the prober.
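For context, one common way a prober can distinguish an SD1/SD2 inpaint UNet from a normal one is the number of input channels on its first convolution (9 for inpaint models, 4 for normal ones). The sketch below only illustrates that idea; the function and key names are assumptions, not the actual InvokeAI prober code:

```python
import torch

# Illustrative sketch, not the actual InvokeAI prober. Inpaint UNets take
# 9 input channels (4 latent + 4 masked-image latent + 1 mask), while
# normal UNets take 4, so the conv_in weight shape reveals the variant.
def guess_sd_variant(state_dict: dict[str, torch.Tensor]) -> str:
    # Key name for checkpoint-format SD models; diffusers-format models
    # expose the same information as `in_channels` in the UNet config.
    key = "model.diffusion_model.input_blocks.0.0.weight"
    in_channels = state_dict[key].shape[1]
    if in_channels == 9:
        return "inpaint"
    if in_channels == 4:
        return "normal"
    return "unknown"
```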
It might make more sense to split the CreateGradientMaskInvocation bug fix out into its own PR. What do you think? That would simplify the testing scope a bit, and it will certainly be nice if a rollback is necessary. I'll leave it up to you. If we keep the bug fix in this PR, can you update the PR description to reflect that it also fixes a bug in CreateGradientMaskInvocation - which will have a small impact on behavior?
We need to test all of the following cases for regression against
I don't feel strongly either way. But I didn't suggest it earlier since it's intended to be short-lived and for developers only, so I didn't want to have to worry about handling config migrations.
This reverts commit 9d1fcba.
Co-Authored-By: Ryan Dick <[email protected]>
Ran all of these cases on an SD1 model and compared the outputs - everything looks OK.
Co-Authored-By: Ryan Dick <[email protected]>
## Summary

The gradient mask node outputs a mask tensor with values in the range [-1, 1], which is an unexpected range for a mask. The denoise node then handles it in a way that translates it to a [0, 2] mask, which looks even more wrong. From the discussion with @dunkeroni, I understand that he expected negative values to be treated the same as 0, so clamping the values does not change the intended node logic.

## Related Issues / Discussions

#6643

## QA Instructions

\-

## Merge Plan

\-

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a changelog_
- [ ] _Tests added / updated (if applicable)_
- [ ] _Documentation added / updated (if applicable)_
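A minimal sketch of the kind of clamp described above, assuming the mask is a torch tensor (the variable names are illustrative):

```python
import torch

# Illustrative sketch: clamp a gradient mask from [-1, 1] into the
# expected [0, 1] range so negative values behave the same as 0 downstream.
mask = torch.linspace(-1.0, 1.0, steps=5)  # stand-in for the node's mask tensor
mask = mask.clamp(0.0, 1.0)
print(mask)  # tensor([0.0000, 0.0000, 0.0000, 0.5000, 1.0000])
```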
Summary
Code for inpainting and inpaint model handling, split out from #6577.
It is separated into 2 extensions, as briefly discussed before, so this awaits discussion of that implementation approach.
Related Issues / Discussions
#6606
https://invokeai.notion.site/Modular-Stable-Diffusion-Backend-Design-Document-e8952daab5d5472faecdc4a72d377b0d
QA Instructions
Run with and without the USE_MODULAR_DENOISE environment variable set. Try and compare outputs between the backends in the following cases:
Merge Plan
Nope.
If you think there should be some kind of tests, feel free to add them.
Checklist