Merge branch 'main' into maryhipp/update-studio-destinations
maryhipp authored Sep 19, 2024
2 parents bd18da6 + 13db18f commit d9a9229
Showing 166 changed files with 6,221 additions and 2,257 deletions.
2 changes: 1 addition & 1 deletion docs/features/CONTROLNET.md
@@ -166,7 +166,7 @@ There are several ways to install IP-Adapter models with an existing InvokeAI in

1. Through the command line interface launched from the invoke.sh / invoke.bat scripts, option [4] to download models.
2. Through the Model Manager UI with models from the *Tools* section of [models.invoke.ai](https://models.invoke.ai). To do this, copy the repo ID from the desired model page, and paste it in the Add Model field of the model manager. **Note** Both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and [SD1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must be installed to use IP-Adapter with SD1.5 based models.
3. **Advanced -- Not recommended ** Manually downloading the IP-Adapter and Image Encoder files - Image Encoder folders shouid be placed in the `models\any\clip_vision` folders. IP Adapter Model folders should be placed in the relevant `ip-adapter` folder of relevant base model folder of Invoke root directory. For example, for the SDXL IP-Adapter, files should be added to the `model/sdxl/ip_adapter/` folder.
3. **Advanced -- Not recommended ** Manually downloading the IP-Adapter and Image Encoder files - Image Encoder folders should be placed in the `models\any\clip_vision` folders. IP Adapter Model folders should be placed in the relevant `ip-adapter` folder of relevant base model folder of Invoke root directory. For example, for the SDXL IP-Adapter, files should be added to the `model/sdxl/ip_adapter/` folder.

#### Using IP-Adapter

2 changes: 1 addition & 1 deletion docs/features/DATABASE.md
@@ -8,7 +8,7 @@ Invoke uses a SQLite database to store image, workflow, model, and execution dat

We take great care to ensure your data is safe, by utilizing transactions and a database migration system.

Even so, when testing an prerelease version of the app, we strongly suggest either backing up your database or using an in-memory database. This ensures any prelease hiccups or databases schema changes will not cause problems for your data.
Even so, when testing a prerelease version of the app, we strongly suggest either backing up your database or using an in-memory database. This ensures any prelease hiccups or databases schema changes will not cause problems for your data.

## Database Backup

2 changes: 1 addition & 1 deletion docs/features/GALLERY.md
@@ -70,7 +70,7 @@ Each image also has a context menu (ctrl+click / right-click).
- ***Use Prompt **** this will load only the image's text prompts into the left-hand control panel
- ***Use Seed **** this will load only the image's Seed into the left-hand control panel
- ***Use All **** this will load all of the image's generation information into the left-hand control panel
- ***Send to Image to Image*** this will put the image into the left-hand panel in the Image to Image tab ana automatically open it
- ***Send to Image to Image*** this will put the image into the left-hand panel in the Image to Image tab and automatically open it
- ***Send to Unified Canvas*** This will (bold)replace whatever is already present(bold) in the Unified Canvas tab with the image and automatically open the tab
- ***Change Board*** this will oipen a small window that will let you move the image to a different board. This is the same as dragging the image to that board's thumbnail.
- ***Star Image*** this will add the image to the board's list of starred images that are always kept at the top of the gallery. This is the same as clicking on the star on the top right-hand side of the image that appears when you hover over the image with the mouse
2 changes: 1 addition & 1 deletion docs/features/MODEL_MERGING.md
@@ -8,7 +8,7 @@ be used to teach an old model new tricks.

## How to Merge Models

Model Merging can be be done by navigating to the Model Manager and clicking the "Merge Models" tab. From there, you can select the models and settings you want to use to merge th models.
Model Merging can be done by navigating to the Model Manager and clicking the "Merge Models" tab. From there, you can select the models and settings you want to use to merge the models.

## Settings

6 changes: 3 additions & 3 deletions docs/features/UNIFIED_CANVAS.md
@@ -232,7 +232,7 @@ clarity on the intent and common use cases we expect for utilizing them.
### Compositing / Seam Correction

When doing Inpainting or Outpainting, Invoke needs to merge the pixels generated
by Stable Diffusion into your existing image. This is achieved through compositing - the area around the the boundary between your image and the new generation is
by Stable Diffusion into your existing image. This is achieved through compositing - the area around the boundary between your image and the new generation is
automatically blended to produce a seamless output. In a fully automatic
process, a mask is generated to cover the boundary, and then the area of the boundary is
Inpainted.
@@ -242,13 +242,13 @@ help to alter the parameters that control the Compositing. A larger blur and
a blur setting have been noted as producing
consistently strong results . Strength of 0.7 is best for reducing hard seams.

- **Mode** - What part of the image will have the the Compositing applied to it.
- **Mode** - What part of the image will have the Compositing applied to it.
- **Mask edge** will apply Compositing to the edge of the masked area
- **Mask** will apply Compositing to the entire masked area
- **Unmasked** will apply Compositing to the entire image
- **Steps** - Number of generation steps that will occur during the Coherence Pass, similar to Denoising Steps. Higher step counts will generally have better results.
- **Strength** - How much noise is added for the Coherence Pass, similar to Denoising Strength. A strength of 0 will result in an unchanged image, while a strength of 1 will result in an image with a completely new area as defined by the Mode setting.
- **Blur** - Adjusts the pixel radius of the the mask. A larger blur radius will cause the mask to extend past the visibly masked area, while too small of a blur radius will result in a mask that is smaller than the visibly masked area.
- **Blur** - Adjusts the pixel radius of the mask. A larger blur radius will cause the mask to extend past the visibly masked area, while too small of a blur radius will result in a mask that is smaller than the visibly masked area.
- **Blur Method** - The method of blur applied to the masked area.


2 changes: 1 addition & 1 deletion docs/features/UTILITIES.md
@@ -296,7 +296,7 @@ finding and fixing three problems that can arise over time:
into the database.

3. The thumbnail for an image is missing, again causing a black
gallery thumbnail. This is fixed by running the "thumbnaiils"
gallery thumbnail. This is fixed by running the "thumbnails"
operation, which simply regenerates and re-registers the missing
thumbnail.

6 changes: 3 additions & 3 deletions docs/nodes/communityNodes.md
@@ -10,7 +10,7 @@ The suggested method is to use `git clone` to clone the repository the node is f

If you'd prefer, you can also just download the whole node folder from the linked repository and add it to the `nodes` folder.

To use a community workflow, download the the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.
To use a community workflow, download the `.json` node graph file and load it into Invoke AI via the **Load Workflow** button in the Workflow Editor.

- Community Nodes
+ [Adapters-Linked](#adapters-linked-nodes)
@@ -427,7 +427,7 @@ This node works best with SDXL models, especially as the style can be described
5. `Prompt Strength Combine` - Combines weighted prompts for .and()/.blend()
6. `CSV To Index String` - Gets a string from a CSV by index. Includes a Random index option

The following Nodes are now included in v3.2 of Invoke and are nolonger in this set of tools.<br>
The following Nodes are now included in v3.2 of Invoke and are no longer in this set of tools.<br>
- `Prompt Join` -> `String Join`
- `Prompt Join Three` -> `String Join Three`
- `Prompt Replace` -> `String Replace`
@@ -456,7 +456,7 @@ See full docs here: https://github.com/skunkworxdark/Prompt-tools-nodes/edit/mai

### BriaAI Remove Background

**Description**: Implements one click background removal with BriaAI's new version 1.4 model which seems to be be producing better results than any other previous background removal tool.
**Description**: Implements one click background removal with BriaAI's new version 1.4 model which seems to be producing better results than any other previous background removal tool.

**Node Link:** https://github.com/blessedcoolant/invoke_bria_rmbg

12 changes: 7 additions & 5 deletions invokeai/app/invocations/compel.py
@@ -20,6 +20,7 @@
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.app.util.ti_utils import generate_ti_list
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora.lora_patcher import LoRAPatcher
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import (
BasicConditioningInfo,
@@ -81,9 +82,10 @@ def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
# apply all patches while the model is on the target device
text_encoder_info.model_on_device() as (cached_weights, text_encoder),
tokenizer_info as tokenizer,
ModelPatcher.apply_lora_text_encoder(
text_encoder,
loras=_lora_loader(),
LoRAPatcher.apply_lora_patches(
model=text_encoder,
patches=_lora_loader(),
prefix="lora_te_",
cached_weights=cached_weights,
),
# Apply CLIP Skip after LoRA to prevent LoRA application from failing on skipped layers.
@@ -176,9 +178,9 @@ def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
# apply all patches while the model is on the target device
text_encoder_info.model_on_device() as (cached_weights, text_encoder),
tokenizer_info as tokenizer,
ModelPatcher.apply_lora(
LoRAPatcher.apply_lora_patches(
text_encoder,
loras=_lora_loader(),
patches=_lora_loader(),
prefix=lora_prefix,
cached_weights=cached_weights,
),
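The change above swaps `ModelPatcher.apply_lora_text_encoder` for the keyword-style `LoRAPatcher.apply_lora_patches` API. A minimal sketch of the new call pattern, assuming only what is visible in this diff — the `*_info` context managers and `lora_loader` generator are placeholders:

```python
from invokeai.backend.lora.lora_patcher import LoRAPatcher


def encode_with_patched_text_encoder(text_encoder_info, tokenizer_info, lora_loader):
    """Sketch only: apply LoRA patches while the text encoder sits on its target device."""
    with (
        text_encoder_info.model_on_device() as (cached_weights, text_encoder),
        tokenizer_info as tokenizer,
        LoRAPatcher.apply_lora_patches(
            model=text_encoder,
            patches=lora_loader(),  # Iterator[Tuple[LoRAModelRaw, float]]
            prefix="lora_te_",      # text-encoder key prefix used in this diff
            cached_weights=cached_weights,
        ),
    ):
        ...  # run prompt conditioning against the patched encoder
```

The UNet paths in `denoise_latents.py` and in the tiled multi-diffusion invocation further down follow the same pattern with `prefix="lora_unet_"`.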
8 changes: 5 additions & 3 deletions invokeai/app/invocations/denoise_latents.py
@@ -37,6 +37,7 @@
from invokeai.app.util.controlnet_utils import prepare_control_image
from invokeai.backend.ip_adapter.ip_adapter import IPAdapter
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora.lora_patcher import LoRAPatcher
from invokeai.backend.model_manager import BaseModelType, ModelVariantType
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.stable_diffusion import PipelineIntermediateState
@@ -979,9 +980,10 @@ def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
ModelPatcher.apply_freeu(unet, self.unet.freeu_config),
SeamlessExt.static_patch_model(unet, self.unet.seamless_axes), # FIXME
# Apply the LoRA after unet has been moved to its target device for faster patching.
ModelPatcher.apply_lora_unet(
unet,
loras=_lora_loader(),
LoRAPatcher.apply_lora_patches(
model=unet,
patches=_lora_loader(),
prefix="lora_unet_",
cached_weights=cached_weights,
),
):
47 changes: 45 additions & 2 deletions invokeai/app/invocations/flux_denoise.py
@@ -1,4 +1,5 @@
from typing import Callable, Optional
from contextlib import ExitStack
from typing import Callable, Iterator, Optional, Tuple

import torch
import torchvision.transforms as tv_transforms
@@ -29,6 +30,9 @@
pack,
unpack,
)
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.lora.lora_patcher import LoRAPatcher
from invokeai.backend.model_manager.config import ModelFormat
from invokeai.backend.stable_diffusion.diffusers_pipeline import PipelineIntermediateState
from invokeai.backend.stable_diffusion.diffusion.conditioning_data import FLUXConditioningInfo
from invokeai.backend.util.devices import TorchDevice
@@ -187,9 +191,41 @@ def _run_diffusion(
noise=noise,
)

with transformer_info as transformer:
with (
transformer_info.model_on_device() as (cached_weights, transformer),
ExitStack() as exit_stack,
):
assert isinstance(transformer, Flux)

config = transformer_info.config
assert config is not None

# Apply LoRA models to the transformer.
# Note: We apply the LoRA after the transformer has been moved to its target device for faster patching.
if config.format in [ModelFormat.Checkpoint]:
# The model is non-quantized, so we can apply the LoRA weights directly into the model.
exit_stack.enter_context(
LoRAPatcher.apply_lora_patches(
model=transformer,
patches=self._lora_iterator(context),
prefix="",
cached_weights=cached_weights,
)
)
elif config.format in [ModelFormat.BnbQuantizedLlmInt8b, ModelFormat.BnbQuantizednf4b]:
# The model is quantized, so apply the LoRA weights as sidecar layers. This results in slower inference,
# than directly patching the weights, but is agnostic to the quantization format.
exit_stack.enter_context(
LoRAPatcher.apply_lora_sidecar_patches(
model=transformer,
patches=self._lora_iterator(context),
prefix="",
dtype=inference_dtype,
)
)
else:
raise ValueError(f"Unsupported model format: {config.format}")

x = denoise(
model=transformer,
img=x,
@@ -247,6 +283,13 @@ def _prep_inpaint_mask(self, context: InvocationContext, latents: torch.Tensor)
# `latents`.
return mask.expand_as(latents)

def _lora_iterator(self, context: InvocationContext) -> Iterator[Tuple[LoRAModelRaw, float]]:
for lora in self.transformer.loras:
lora_info = context.models.load(lora.lora)
assert isinstance(lora_info.model, LoRAModelRaw)
yield (lora_info.model, lora.weight)
del lora_info

def _build_step_callback(self, context: InvocationContext) -> Callable[[PipelineIntermediateState], None]:
def step_callback(state: PipelineIntermediateState) -> None:
state.latents = unpack(state.latents.float(), self.height, self.width).squeeze()
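The new FLUX path above picks a patching strategy by model format: unquantized checkpoints have their weights patched in place, while bitsandbytes-quantized models receive sidecar layers. A condensed, hedged sketch of that dispatch — the helper name `attach_flux_loras` is illustrative, not part of the codebase:

```python
from contextlib import ExitStack

from invokeai.backend.lora.lora_patcher import LoRAPatcher
from invokeai.backend.model_manager.config import ModelFormat


def attach_flux_loras(exit_stack: ExitStack, transformer, config, loras, cached_weights, dtype):
    """Sketch only: mirror the format-based LoRA dispatch shown in the diff."""
    if config.format in [ModelFormat.Checkpoint]:
        # Non-quantized model: apply the LoRA weights directly (fastest at inference time).
        exit_stack.enter_context(
            LoRAPatcher.apply_lora_patches(
                model=transformer, patches=loras, prefix="", cached_weights=cached_weights
            )
        )
    elif config.format in [ModelFormat.BnbQuantizedLlmInt8b, ModelFormat.BnbQuantizednf4b]:
        # Quantized model: sidecar layers are slower but agnostic to the quantization format.
        exit_stack.enter_context(
            LoRAPatcher.apply_lora_sidecar_patches(
                model=transformer, patches=loras, prefix="", dtype=dtype
            )
        )
    else:
        raise ValueError(f"Unsupported model format: {config.format}")
```

Routing both branches through the `ExitStack` means the patches unwind automatically once the denoise step's `with` block exits.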
53 changes: 53 additions & 0 deletions invokeai/app/invocations/flux_lora_loader.py
@@ -0,0 +1,53 @@
from invokeai.app.invocations.baseinvocation import BaseInvocation, BaseInvocationOutput, invocation, invocation_output
from invokeai.app.invocations.fields import FieldDescriptions, Input, InputField, OutputField, UIType
from invokeai.app.invocations.model import LoRAField, ModelIdentifierField, TransformerField
from invokeai.app.services.shared.invocation_context import InvocationContext


@invocation_output("flux_lora_loader_output")
class FluxLoRALoaderOutput(BaseInvocationOutput):
"""FLUX LoRA Loader Output"""

transformer: TransformerField = OutputField(
default=None, description=FieldDescriptions.transformer, title="FLUX Transformer"
)


@invocation(
"flux_lora_loader",
title="FLUX LoRA",
tags=["lora", "model", "flux"],
category="model",
version="1.0.0",
)
class FluxLoRALoaderInvocation(BaseInvocation):
"""Apply a LoRA model to a FLUX transformer."""

lora: ModelIdentifierField = InputField(
description=FieldDescriptions.lora_model, title="LoRA", ui_type=UIType.LoRAModel
)
weight: float = InputField(default=0.75, description=FieldDescriptions.lora_weight)
transformer: TransformerField = InputField(
description=FieldDescriptions.transformer,
input=Input.Connection,
title="FLUX Transformer",
)

def invoke(self, context: InvocationContext) -> FluxLoRALoaderOutput:
lora_key = self.lora.key

if not context.models.exists(lora_key):
raise ValueError(f"Unknown lora: {lora_key}!")

if any(lora.lora.key == lora_key for lora in self.transformer.loras):
raise ValueError(f'LoRA "{lora_key}" already applied to transformer.')

transformer = self.transformer.model_copy(deep=True)
transformer.loras.append(
LoRAField(
lora=self.lora,
weight=self.weight,
)
)

return FluxLoRALoaderOutput(transformer=transformer)
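Because `invoke()` deep-copies the incoming `TransformerField` and appends a single `LoRAField`, loader nodes can be chained to stack several LoRAs before denoising. A hedged illustration — `style_lora`, `detail_lora`, `base_transformer`, and `context` are placeholders, and constructing invocations directly like this is for exposition only:

```python
from invokeai.app.invocations.flux_lora_loader import FluxLoRALoaderInvocation

# Each loader copies the transformer field and appends one LoRA entry.
first = FluxLoRALoaderInvocation(lora=style_lora, weight=0.75, transformer=base_transformer)
out1 = first.invoke(context)    # out1.transformer.loras -> [style_lora @ 0.75]

second = FluxLoRALoaderInvocation(lora=detail_lora, weight=0.5, transformer=out1.transformer)
out2 = second.invoke(context)   # out2.transformer.loras -> [style_lora @ 0.75, detail_lora @ 0.5]

# Re-applying a LoRA that is already in the list raises ValueError, per the guard above.
```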
3 changes: 2 additions & 1 deletion invokeai/app/invocations/model.py
@@ -69,6 +69,7 @@ class CLIPField(BaseModel):

class TransformerField(BaseModel):
transformer: ModelIdentifierField = Field(description="Info to load Transformer submodel")
loras: List[LoRAField] = Field(description="LoRAs to apply on model loading")


class T5EncoderField(BaseModel):
@@ -202,7 +203,7 @@ def invoke(self, context: InvocationContext) -> FluxModelLoaderOutput:
assert isinstance(transformer_config, CheckpointConfigBase)

return FluxModelLoaderOutput(
transformer=TransformerField(transformer=transformer),
transformer=TransformerField(transformer=transformer, loras=[]),
clip=CLIPField(tokenizer=tokenizer, text_encoder=clip_encoder, loras=[], skipped_layers=0),
t5_encoder=T5EncoderField(tokenizer=tokenizer2, text_encoder=t5_encoder),
vae=VAEField(vae=vae),
@@ -23,7 +23,7 @@
from invokeai.app.invocations.primitives import LatentsOutput
from invokeai.app.services.shared.invocation_context import InvocationContext
from invokeai.backend.lora.lora_model_raw import LoRAModelRaw
from invokeai.backend.model_patcher import ModelPatcher
from invokeai.backend.lora.lora_patcher import LoRAPatcher
from invokeai.backend.stable_diffusion.diffusers_pipeline import ControlNetData, PipelineIntermediateState
from invokeai.backend.stable_diffusion.multi_diffusion_pipeline import (
MultiDiffusionPipeline,
@@ -204,7 +204,7 @@ def _lora_loader() -> Iterator[Tuple[LoRAModelRaw, float]]:
# Load the UNet model.
unet_info = context.models.load(self.unet.unet)

with ExitStack() as exit_stack, unet_info as unet, ModelPatcher.apply_lora_unet(unet, _lora_loader()):
with (
ExitStack() as exit_stack,
unet_info as unet,
LoRAPatcher.apply_lora_patches(model=unet, patches=_lora_loader(), prefix="lora_unet_"),
):
assert isinstance(unet, UNet2DConditionModel)
latents = latents.to(device=unet.device, dtype=unet.dtype)
if noise is not None:
@@ -146,7 +146,7 @@ def get_image_count_for_board(self, board_id: str) -> int:
self._lock.acquire()
self._cursor.execute(
"""--sql
SELECT COUNT(*) FROM board_images WHERE board_id = ?;
SELECT COUNT(*)
FROM board_images
INNER JOIN images ON board_images.image_name = images.image_name
WHERE images.is_intermediate = FALSE
AND board_images.board_id = ?;
""",
(board_id,),
)
@@ -407,6 +407,7 @@ def get_most_recent_image_for_board(self, board_id: str) -> Optional[ImageRecord
FROM images
JOIN board_images ON images.image_name = board_images.image_name
WHERE board_images.board_id = ?
AND images.is_intermediate = FALSE
ORDER BY images.starred DESC, images.created_at DESC
LIMIT 1;
""",