Commit 676ccd8
Add IP-Adapter to docs (#4703)
## What type of PR is this? (check all applicable)

- [ ] Refactor
- [ ] Feature
- [ ] Bug Fix
- [ ] Optimization
- [x] Documentation Update
- [ ] Community Node Submission

## Have you discussed this change with the InvokeAI team?

- [ ] Yes
- [ ] No, because:

## Have you updated all relevant documentation?

- [ ] Yes
- [ ] No

## Description

## Related Tickets & Documents

<!-- For pull requests that relate to or close an issue, please include them below. For example, the text "closes #1234" would connect the current pull request to issue 1234, and when we merge the pull request, GitHub will automatically close the issue. -->

- Related Issue #
- Closes #

## QA Instructions, Screenshots, Recordings

<!-- Please provide steps on how to test changes, any hardware or software specifications, as well as any other pertinent information. -->

## Added/updated tests?

- [ ] Yes
- [ ] No : _please replace this line with details on why tests have not been included_

## [optional] Are there any post deployment tasks we need to perform?
2 parents 8158124 + a263a4f commit 676ccd8

File tree

3 files changed (+44 lines, -12 lines)

docs/features/CONTROLNET.md

Lines changed: 35 additions & 7 deletions
```diff
@@ -1,13 +1,11 @@
 ---
-title: ControlNet
+title: Control Adapters
 ---
 
-# :material-loupe: ControlNet
+# :material-loupe: Control Adapters
 
 ## ControlNet
 
-ControlNet
-
 ControlNet is a powerful set of features developed by the open-source
 community (notably, Stanford researcher
 [**@ilyasviel**](https://github.com/lllyasviel)) that allows you to
@@ -20,7 +18,7 @@
 outcome.
 
 
-### How it works
+#### How it works
 
 ControlNet works by analyzing an input image, pre-processing that
 image to identify relevant information that can be interpreted by each
@@ -30,7 +28,7 @@
 specific result.
 
 
-### Models
+#### Models
 
 InvokeAI provides access to a series of ControlNet models that provide
 different effects or styles in your generated images. Currently
@@ -96,6 +94,8 @@
 **Image Segmentation**:
 A model that divides input images into segments or regions, each of which corresponds to a different object or part of the image. (More details coming soon)
 
+**QR Code Monster**:
+A model that helps generate creative QR codes that still scan. It can also be used to create images with text, logos, or shapes within them.
 
 **Openpose**:
 The OpenPose control model allows for the identification of the general pose of a character by pre-processing an existing image with a clear human structure. With advanced options, Openpose can also detect the face or hands in the image.
```
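The pre-processing step described in the documentation (analyzing an input image to extract structure such as edges or pose before the control model consumes it) can be illustrated with a toy sketch. This is not InvokeAI's or ControlNet's actual code; the function name, gradient rule, and threshold are invented for the example:

```python
# Hypothetical sketch of an edge-style ControlNet pre-processor.
# It reduces a grayscale image (a 2D list of 0-255 values) to a binary
# edge map, the kind of structural hint an edge control model consumes.

def edge_map(pixels, threshold=40):
    """Mark pixels whose horizontal or vertical gradient exceeds threshold."""
    h, w = len(pixels), len(pixels[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Central differences, clamped at the image border.
            gx = pixels[y][min(x + 1, w - 1)] - pixels[y][max(x - 1, 0)]
            gy = pixels[min(y + 1, h - 1)][x] - pixels[max(y - 1, 0)][x]
            if abs(gx) + abs(gy) > threshold:
                edges[y][x] = 255
    return edges

# A dark square on a light background: edges appear around its boundary.
img = [[200] * 8 for _ in range(8)]
for y in range(2, 6):
    for x in range(2, 6):
        img[y][x] = 30
hints = edge_map(img)
```

Real pre-processors (Canny, depth, OpenPose) are far more sophisticated, but the contract is the same: image in, conditioning hint out.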
```diff
@@ -120,7 +120,7 @@
 Each of these models can be adjusted and combined with other ControlNet models to achieve different results, giving you even more control over your image generation process.
 
 
-## Using ControlNet
+### Using ControlNet
 
 To use ControlNet, you can simply select the desired model and adjust both the ControlNet and Pre-processor settings to achieve the desired result. You can also use multiple ControlNet models at the same time, allowing you to achieve even more complex effects or styles in your generated images.
 
@@ -132,3 +132,31 @@
 Start/End - 0 represents the start of the generation, 1 represents the end. The Start/End setting controls which steps during the generation process have the ControlNet applied.
 
 Additionally, each ControlNet section can be expanded in order to manipulate settings for the image pre-processor that adjusts your uploaded image before using it when you Invoke.
+
+
+## IP-Adapter
+
+[IP-Adapter](https://ip-adapter.github.io) is a tool that enables image-prompt capabilities for text-to-image diffusion models. IP-Adapter works by analyzing the given image prompt to extract features, then passing those features to the UNet along with any other conditioning provided.
+
+![IP-Adapter + T2I](https://github.com/tencent-ailab/IP-Adapter/raw/main/assets/demo/ip_adpter_plus_multi.jpg)
+
+![IP-Adapter + IMG2IMG](https://github.com/tencent-ailab/IP-Adapter/blob/main/assets/demo/image-to-image.jpg)
+
+#### Installation
+There are several ways to install IP-Adapter models with an existing InvokeAI installation:
+
+1. Through the command-line interface launched from the invoke.sh / invoke.bat scripts, using option [5] to download models.
+2. Through the Model Manager UI with models from the *Tools* section of [www.models.invoke.ai](https://www.models.invoke.ai). To do this, copy the repo ID from the desired model page and paste it into the Add Model field of the Model Manager. **Note:** Both the IP-Adapter and the Image Encoder must be installed for IP-Adapter to work. For example, the [SD 1.5 IP-Adapter](https://models.invoke.ai/InvokeAI/ip_adapter_plus_sd15) and the [SD 1.5 Image Encoder](https://models.invoke.ai/InvokeAI/ip_adapter_sd_image_encoder) must be installed to use IP-Adapter with SD 1.5-based models.
+3. **Advanced -- Not recommended:** Manually downloading the IP-Adapter and Image Encoder files. Image Encoder folders should be placed in the `models/any/clip_vision` folder. IP-Adapter model folders should be placed in the `ip_adapter` folder of the relevant base-model folder of the Invoke root directory. For example, for the SDXL IP-Adapter, files should be added to the `models/sdxl/ip_adapter/` folder.
+
+#### Using IP-Adapter
+
+IP-Adapter can be used by navigating to the *Control Adapters* options and enabling IP-Adapter.
+
+IP-Adapter requires an image to be used as the image prompt. It can also be used in conjunction with text prompts, Image-to-Image, Inpainting, Outpainting, ControlNets, and LoRAs.
+
+Each IP-Adapter has two settings that are applied:
+
+* Weight - Strength of the IP-Adapter model applied to the generation for the section, defined by start/end.
+* Start/End - 0 represents the start of the generation, 1 represents the end. The Start/End setting controls which steps during the generation process have the IP-Adapter applied.
```
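The Weight and Start/End settings documented above apply to both ControlNet and IP-Adapter, and the Start/End fractions ultimately select a window of denoising steps. A hypothetical sketch of that mapping (the function name and rounding behavior are assumptions for illustration, not InvokeAI's internals):

```python
# Illustrative only: how a Start/End window (fractions from 0.0 to 1.0)
# could map onto discrete denoising steps. Not InvokeAI's implementation.

def active_steps(total_steps, start=0.0, end=1.0):
    """Return the step indices during which an adapter is applied."""
    first = round(start * (total_steps - 1))
    last = round(end * (total_steps - 1))
    return list(range(first, last + 1))

# An adapter with Start=0.0, End=0.5 only shapes roughly the first half
# of a 30-step generation, leaving later steps to the text prompt alone.
steps = active_steps(30, start=0.0, end=0.5)
```

Applying an adapter only early in the schedule tends to constrain composition while leaving fine detail to the prompt, which is why the window is exposed as a setting.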

docs/nodes/NODES.md

Lines changed: 6 additions & 2 deletions
```diff
@@ -4,12 +4,12 @@
 
 If you're not familiar with Diffusion, take a look at our [Diffusion Overview.](../help/diffusion.md) Understanding how diffusion works will enable you to more easily use the Workflow Editor and build workflows to suit your needs.
 
-## UI Features
+## Features
 
 ### Linear View
 The Workflow Editor allows you to create a UI for your workflow, to make it easier to iterate on your generations.
 
-To add an input to the Linear UI, right click on the input and select "Add to Linear View".
+To add an input to the Linear UI, right click on the input label and select "Add to Linear View".
 
 The Linear UI View will also be part of the saved workflow, allowing you to share workflows and enable others to use them, regardless of complexity.
 
@@ -25,6 +25,10 @@
 * Backspace/Delete to delete a node
 * Shift+Click to drag and select multiple nodes
 
+### Node Caching
+
+Nodes have a "Use Cache" option in their footer. This allows for performance improvements by reusing previously cached values during workflow processing.
+
 
 ## Important Concepts
```
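The "Use Cache" option added in the diff above amounts to memoizing a node's output on its type and input values. A minimal sketch of the idea, with invented names (not the workflow editor's actual cache):

```python
# Hypothetical sketch of node-output caching, keyed on the node type and
# its input values. Not InvokeAI's implementation; it only illustrates
# why re-running a node with unchanged inputs can be skipped.

node_cache = {}
calls = 0  # counts how often the "expensive" work actually runs

def run_node(node_type, use_cache=True, **inputs):
    """Execute a node, reusing a cached result when inputs are unchanged."""
    global calls
    key = (node_type, tuple(sorted(inputs.items())))
    if use_cache and key in node_cache:
        return node_cache[key]
    calls += 1  # stands in for the expensive work (e.g. a generation)
    result = {"output": f"{node_type}({inputs})"}
    node_cache[key] = result
    return result

a = run_node("resize", width=512, height=512)
b = run_node("resize", width=512, height=512)   # served from the cache
c = run_node("resize", use_cache=False, width=512, height=512)  # recomputed
```

Disabling the cache on a node (as the UI option allows) forces recomputation even when the inputs match a cached entry.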

mkdocs.yml

Lines changed: 3 additions & 3 deletions
```diff
@@ -127,20 +127,20 @@ nav:
       - Manual Installation on Windows: 'installation/deprecated_documentation/INSTALL_WINDOWS.md'
       - Installing Invoke with pip: 'installation/deprecated_documentation/INSTALL_PCP.md'
       - Source Installer: 'installation/deprecated_documentation/INSTALL_SOURCE.md'
-  - Nodes:
+  - Workflows & Nodes:
       - Community Nodes: 'nodes/communityNodes.md'
       - Example Workflows: 'nodes/exampleWorkflows.md'
       - Nodes Overview: 'nodes/overview.md'
       - List of Default Nodes: 'nodes/defaultNodes.md'
-      - Node Editor Usage: 'nodes/NODES.md'
+      - Workflow Editor Usage: 'nodes/NODES.md'
       - ComfyUI to InvokeAI: 'nodes/comfyToInvoke.md'
       - Contributing Nodes: 'nodes/contributingNodes.md'
   - Features:
       - Overview: 'features/index.md'
       - New to InvokeAI?: 'help/gettingStartedWithAI.md'
       - Concepts: 'features/CONCEPTS.md'
       - Configuration: 'features/CONFIGURATION.md'
-      - ControlNet: 'features/CONTROLNET.md'
+      - Control Adapters: 'features/CONTROLNET.md'
       - Image-to-Image: 'features/IMG2IMG.md'
       - Controlling Logging: 'features/LOGGING.md'
       - Model Merging: 'features/MODEL_MERGING.md'
```
