feat: canvas v2 #6771
Conversation
Force-pushed from db0587f to 1fdcce9.
Re: eraser brush opacity. This works fine! We can simply set the opacity of eraser lines to the desired opacity.
Screen.Recording.2024-08-25.at.12.48.40.am.mov
Unfortunately, there is a bug when using partial opacity for eraser lines, where pixels that should be transparent (0,0,0,0 - black with alpha 0) end up as (255,255,255,1). That's almost transparent, but not quite. As a reminder, these are 8-bit values, so fully opaque alpha is 255. The cause is related to how browsers/GPUs store and manipulate alpha channel data. In short, there is a subtle difference between an image stored with premultiplied alpha and one stored with straight (unpremultiplied) alpha. Here's what happens, admitting some gaps in my understanding:
The logic that determines which graph to run erroneously sees no transparent pixels (they all have alpha 1 instead of 0), triggering the wrong graph.
Another lovely effect of this problem is that different browsers can return slightly different values. Chrome gives you (255,255,255,2) instead of (255,255,255,1)... Joy...
Oh, and why isn't this a problem with the existing eraser tool? Well, it is "opaque", so thanks to the compositing, the alpha channel of erased regions is set directly to 0. There's no quantization, so this problem doesn't occur.
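For illustration (not code from this PR), the lossy premultiplied-alpha round trip can be reproduced directly with the 2D canvas API: write a low-alpha pixel with putImageData, read it back with getImageData, and the RGB channels come back quantized.

```ts
// Illustration only (not code from this PR): the 2D canvas stores pixels with
// premultiplied alpha, so a putImageData/getImageData round trip is lossy for
// low-alpha pixels. Exact values vary by browser.
const canvas = document.createElement('canvas');
canvas.width = 1;
canvas.height = 1;
const ctx = canvas.getContext('2d');
if (!ctx) throw new Error('no 2d context');

const src = ctx.createImageData(1, 1);
src.data.set([100, 50, 0, 2]); // straight-alpha RGBA: a nearly-transparent pixel
ctx.putImageData(src, 0, 0);

// Stored premultiplied: each channel is rounded to round(channel * 2 / 255) => (1, 0, 0, 2).
// Reading back un-premultiplies: (1 * 255 / 2, 0, 0, 2) => roughly (128, 0, 0, 2).
const out = ctx.getImageData(0, 0, 1, 1).data;
console.log(Array.from(out)); // e.g. [128, 0, 0, 2] instead of [100, 50, 0, 2]
```

Presumably the same kind of quantization, applied while compositing low-opacity eraser strokes, is what turns pixels that should be (0,0,0,0) into the (255,255,255,1) / (255,255,255,2) values described above.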
Dev build 4 -
Force-pushed from 20f1b7a to 6f0974b.
Dev build 5 -
Screen.Recording.2024-08-27.at.8.04.52.pm.mov
Force-pushed from a00944c to ea014c6.
- Add `Open in Viewer`
- Remove `Send to Image to Image`
- Fix `Send to Canvas`
- Split out logic for composability
- Global action bar on top
- Selected Entity action bar below
Force-pushed from c153dc8 to d421eae.
Dev build 13 -
Summary
Canvas V2.
Broad strokes of changes
Single generation tab
There is a single generation tab with two modes:
- `Generate`: Generated images go directly to the gallery. This replaces the `Generation` tab workflow, aka `yeet` mode.
- `Compose`: Generated images are staged to be added to the canvas. This replaces the `Canvas` tab workflow.
Layers, Masks, Filters, Tools
Everything is a "layer" or "mask" now.
Layers
Layers can be drawn on, transformed, and filtered. A layer may be created from an image. Two kinds of layers:
Masks
Masks can be drawn on and transformed. A future enhancement will allow using images as masks. No filters for masks - maybe in the future. Two kinds of masks:
Filters
All ControlNet "preprocessors" are now exposed as filters. Filters run on the server. Technically, if you can nodeify some kind of image processing, it could be a filter.
Tools
Full rewrite
Almost all internal generation logic for the linear UI is new, including graphs and rendering. The control layers implementation was a rough draft for Canvas V2, but almost none of it remains.
Canvas rendering implementation
Revised app layout component composition
While the app overall won't look different, the way components are rendered is a bit different - hopefully providing a much snappier UI at the cost of a bit more up-front loading.
Hotkeys
Quite a few of these have changed, and I haven't updated the hotkeys list yet. Sorry.
QA Instructions
🚨 This PR includes an unstable database migration. 🚨
Make sure to set `use_memory_db: true` in your `invokeai.yaml` before testing.
Dev builds
I'll be publishing regular dev builds to pypi for testing. I'll list each build as I publish it here.
To test a build, activate your venv and run the listed command.
Build list
pip install InvokeAI==4.2.9.dev20240823
pip install InvokeAI==4.2.9.dev20240824
In `Generate` mode, inpainted and outpainted images are pasted back on to the source image. In `Compose` mode, inpainted and outpainted images are not pasted back on to the source image; they are transparent outside the generated region.
pip install InvokeAI==4.2.9.dev3
Results.
pip install InvokeAI==4.2.9.dev4
pip install InvokeAI==4.2.9.dev5: `main` (so it includes current FLUX implementation, though I don't think inpainting or img2img work...)
pip install InvokeAI==4.2.9.dev6: `cmdk` (test run for whole-app command palette/omnibar)
pip install InvokeAI==4.2.9.dev7
pip install InvokeAI==4.2.9.dev8:
- `+` buttons to each entity category header
- `destination` column to `session_queue` table ❗ This is a breaking change, you'll need to manually fix your DB or just create a new one. You were using a memory db as indicated in the PR, though, right? In case you weren't, here's how to fix it from the `sqlite3` CLI, then the app should start up.
pip install InvokeAI==4.2.9.dev10 (build 9 had some issues)
pip install InvokeAI==4.2.9.dev11:
:[
and]
hotkeys for brush/eraser sizec
alt+[
andalt+]
hotkeys to cycle thru layersq
hotkey for quick switch between last two selected layersq
no matter what layer is selectedmerge visible
when layer type has 1 or 0 entitiesfit to bbox
button in transformer popup, this is useful to snap a control image to the bboxCanvasEntityTransformer
, move its state to nanostores atoms for reactivityCanvasStateApi
pip install InvokeAI==4.2.9.dev12
pip install InvokeAI==5.0.0.dev13: `main`
- `send-to-gallery` queue items to continue processing
- `shift` behavior when transforming layer to match photoshop & affinity
UI broken?
It is possible that the UI will break when you first open it due to conflicts with persisted state. If the UI totally fails to load, you can run this snippet from the browser's JS console to wipe the problematic persisted state:
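If you don't have the exact snippet handy, a blunt generic alternative (note: this wipes all of the app's persisted browser state for the site, not just the problematic keys) is to clear local storage and reload:

```ts
// Generic fallback (not the exact snippet referenced above): clear all persisted
// state for this origin and reload. Any other saved UI settings are reset too.
localStorage.clear();
location.reload();
```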
Non-exhaustive list of known issues/TODOs
Major
Minor
TODO
Wishlist
Merge Plan
Do not push that button!
Checklist