From 52f61698e94644042f7410084aae63d65c1d72ca Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Thu, 27 Jul 2023 18:29:12 +1000 Subject: [PATCH 1/7] added getting started with Invoke guide --- docs/features/index.md | 2 +- docs/help/gettingStartedWithAI.md | 93 +++++++++++++++++++++++++++++++ mkdocs.yml | 1 + 3 files changed, 95 insertions(+), 1 deletion(-) create mode 100644 docs/help/gettingStartedWithAI.md diff --git a/docs/features/index.md b/docs/features/index.md index ffc663dd64d..21683249c27 100644 --- a/docs/features/index.md +++ b/docs/features/index.md @@ -46,7 +46,7 @@ Personalize models by adding your own style or subjects. ## Other Features -### * [The NSFW Checker](NSFW.md) +### * [The NSFW Checker](WATERMARK+NSFW.md) Prevent InvokeAI from displaying unwanted racy images. ### * [Controlling Logging](LOGGING.md) diff --git a/docs/help/gettingStartedWithAI.md b/docs/help/gettingStartedWithAI.md new file mode 100644 index 00000000000..7fcc139232d --- /dev/null +++ b/docs/help/gettingStartedWithAI.md @@ -0,0 +1,93 @@ +# Getting Started with AI Image Generation + +New to image generation with AI? You’re in the right place! + +This is a high level walkthrough of some of the concepts and terms you’ll see as you start using InvokeAI. Please note, this is not an exhaustive guide and may be out of date due to the rapidly changing nature of the space. + +## Using InvokeAI + +### **Prompt Crafting** + +- Prompts are the basis of using InvokeAI, providing the models directions on what to generate. As a general rule of thumb, the more detailed your prompt is, the better your result will be. + + *To get started, here’s an easy template to use for structuring your prompts:* + +- Subject, Style, Quality, Aesthetic + - **Subject:** What your image will be about. E.g. “a futuristic city with trains”, “penguins floating on icebergs”, “friends sharing beers” + - **Style:** The style or medium in which your image will be in. E.g. 
“photograph”, “pencil sketch”, “oil paints”, or “pop art”, “cubism”, “abstract”
+ - **Quality:** A particular aspect or trait that you would like to see emphasized in your image. E.g. "award-winning", "featured in {relevant set of high quality works}", "professionally acclaimed". Many people often use "masterpiece".
+ - **Aesthetics:** The visual impact and design of the artwork. This can be colors, mood, lighting, setting, etc.
+- There are two prompt boxes: *Positive Prompt* & *Negative Prompt*.
+ - A **Positive** Prompt includes words you want the model to reference when creating an image.
+ - A **Negative** Prompt includes anything you want the model to avoid when creating an image. It doesn’t always interpret things exactly the way you would, but helps control the generation process. Always try to include a few terms - you can typically use lower quality image terms like “blurry” or “distorted” with good success.
+- Some example prompts you can try on your own:
+ - A detailed oil painting of a tranquil forest at sunset with vibrant+ colors and soft, golden light filtering through the trees
+ - friends sharing beers in a busy city, realistic colored pencil sketch, twilight, masterpiece, bright, lively
+
+### Generation Workflows
+
+- Invoke offers a number of different workflows for interacting with models to produce images. Each is extremely powerful on its own, but together they provide an unparalleled way of producing high quality creative outputs that align with your vision.
+ - **Text to Image:** The text to image tab focuses on the key workflow of using a prompt to generate a new image. It includes other features that help control the generation process as well.
+ - **Image to Image:** With image to image, you provide an image as a reference (called the “initial image”), which provides more guidance around color and structure to the AI as it generates a new image. This is provided alongside the same features as Text to Image.
+ - **Unified Canvas:** The Unified Canvas is an advanced AI-first image editing tool that is easy to use, but hard to master. Drag an image onto the canvas from your gallery in order to regenerate certain elements, edit content or colors (known as inpainting), or extend the image with an exceptional degree of consistency and clarity (called outpainting).
+
+### Improving Image Quality
+
+- Fine-tuning your prompt - the more specific you are, the closer the image will turn out to what is in your head! Adding more details in the Positive Prompt or Negative Prompt can help add / remove pieces of your image to improve it - You can also use advanced techniques like upweighting and downweighting to control the influence of certain words. [Learn more here](https://invoke-ai.github.io/InvokeAI/features/PROMPTS/#prompt-syntax-features).
+ - **Tip: If you’re seeing poor results, adding the things you don’t like about the image to your negative prompt may help. E.g. distorted, low quality, unrealistic, etc.**
+- Explore different models - Other models can produce different results due to the data they’ve been trained on. Each model has specific language and settings it works best with; a model’s documentation is your friend here. Play around with some and see what works best for you!
+- Increasing Steps - The number of steps used controls how much time the model is given to produce an image, and depends on the “Scheduler” used. The scheduler controls how each step is processed by the model. More steps tend to mean better results, but will take longer - We recommend at least 30 steps for most generations.
+- Tweak and Iterate - Remember, it’s best to change one thing at a time so you know what is working and what isn't. Sometimes you just need to try a new image, and other times using a new prompt might be the ticket.
For testing, consider turning off the “random” Seed - Using the same seed with the same settings will produce the same image, which makes it the perfect way to learn exactly what your changes are doing.
+- Explore Advanced Settings - InvokeAI has a full suite of tools available to allow you complete control over your image creation process - Check out our [docs if you want to learn more](https://invoke-ai.github.io/InvokeAI/features/).
+
+## Terms & Concepts
+
+### Stable Diffusion
+
+Stable Diffusion is a deep learning, text-to-image model that is the foundation of the capabilities found in InvokeAI. Since the release of Stable Diffusion, there have been many subsequent models created based on Stable Diffusion that are designed to generate specific types of images.
+
+### Prompts
+
+Prompts provide the models directions on what to generate. As a general rule of thumb, the more detailed your prompt is, the better your result will be.
+
+### Models
+
+Models are the magic that power InvokeAI. These files represent the output of training a machine on understanding massive amounts of images - providing them with the capability to generate new images using just a text description of what you’d like to see. (Like Stable Diffusion!)
+
+Invoke offers a simple way to download several different models upon installation, but many more can be discovered online, including at ****. Each model can produce a unique style of output, based on the images it was trained on - Try out different models to see which best fits your creative vision!
+
+- *Models that contain “inpainting” in the name are designed for use with the inpainting feature of the Unified Canvas*
+
+### Noise
+
+### Scheduler
+
+Schedulers guide the process of removing noise (de-noising) from data. They determine:
+
+1. The number of steps to take to remove the noise.
+2. Whether the steps are random (stochastic) or predictable (deterministic).
+3. The specific method (algorithm) used for de-noising.
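The de-noising loop the scheduler drives can be sketched in a toy form. This is purely illustrative (it is not InvokeAI's actual scheduler code, and `toy_denoise` is an invented name), but it shows the shape of de-noising and why more steps leave less residual noise:

```python
# Illustrative sketch only: a deterministic toy "scheduler" that removes a
# fixed fraction of the remaining noise on every step. Real diffusion
# schedulers are far more sophisticated, but the loop has the same shape.

def toy_denoise(noise: float, steps: int, rate: float = 0.2) -> float:
    """Return the noise remaining after `steps` de-noising steps."""
    for _ in range(steps):
        noise -= rate * noise  # each step removes 20% of what remains
    return noise

# More steps -> less residual noise, at the cost of more compute time.
assert toy_denoise(100.0, steps=30) < toy_denoise(100.0, steps=10)
```

Swapping the fixed `rate` for a per-step schedule, or adding randomness at each step, mirrors the deterministic-versus-stochastic distinction in point 2 above.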
+
+Experimenting with different schedulers is recommended as each will produce different outputs!
+
+### Steps
+
+The number of de-noising steps each generation goes through.
+
+Schedulers can be intricate and there's often a balance to strike between how quickly they can de-noise data and how well they can do it. It's typically advised to experiment with different schedulers to see which one gives the best results. There has been a lot written on the internet about different schedulers, as well as exploring what the right level of "steps" are for each. You can save generation time by reducing the number of steps used, but you'll want to make sure that you are satisfied with the quality of images produced!
+
+### Low-Rank Adaptations / LoRAs
+
+Low-Rank Adaptations (LoRAs) ****are like a smaller, more focused version of model, intended to focus on training a better understanding of how a specific character, style, or concept looks.
+
+### Embeddings
+
+Embeddings, like LoRAs, assist with more easily prompting for certain characters, styles, or concepts. However, embeddings are trained to more update the relationship between a specific word (known as the “trigger”) and the intended output. Embeddings may sometimes also be referred to as Textual Inversions (TIs).
+
+### ControlNet
+
+ControlNet is a neural network model that can be used to control output from models. This can take many forms, such as controlling poses of people in generated images or providing edges to based image generation on. The impact of the ControlNet can also be adjusted to increase or decrease the similarity of the generated image to the ControlNet.
+
+### VAE
+
+Variational auto-encoder (VAE) is a generative AI algorithm that helps to generate finer details such as better faces, hands, colors etc.
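One concept from the walkthrough above is worth a concrete illustration: the seed. The sketch below uses Python's standard `random` module as a stand-in for the model's noise source (`fake_initial_noise` is an invented name, not InvokeAI code), but the principle is exactly why a fixed seed plus fixed settings reproduces the same image:

```python
import random

def fake_initial_noise(seed: int, size: int = 4) -> list[float]:
    """Stand-in for the random latent noise a generation starts from."""
    rng = random.Random(seed)  # seeding makes the sequence deterministic
    return [rng.random() for _ in range(size)]

# Same seed -> identical starting noise -> (with identical settings) the
# same image. A different seed -> different noise -> a different image.
assert fake_initial_noise(seed=42) == fake_initial_noise(seed=42)
assert fake_initial_noise(seed=42) != fake_initial_noise(seed=43)
```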
\ No newline at end of file diff --git a/mkdocs.yml b/mkdocs.yml index cbcaf52af62..a4bd638d515 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -157,6 +157,7 @@ nav: - Inpainting: 'deprecated/INPAINTING.md' - Outpainting: 'deprecated/OUTPAINTING.md' - Help: + - Getting Started with AI: 'help/gettingStartedWithAI.md' - Sampler Convergence: 'help/SAMPLER_CONVERGENCE.md' - Other: - Contributors: 'other/CONTRIBUTORS.md' From d78c97f8a8e24a65751336a2db925c15e2d28cfe Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Thu, 27 Jul 2023 18:51:48 +1000 Subject: [PATCH 2/7] Updated getting started guide and links --- docs/features/index.md | 3 +++ mkdocs.yml | 3 ++- 2 files changed, 5 insertions(+), 1 deletion(-) diff --git a/docs/features/index.md b/docs/features/index.md index 21683249c27..e55917af5b2 100644 --- a/docs/features/index.md +++ b/docs/features/index.md @@ -4,6 +4,9 @@ title: Overview Here you can find the documentation for InvokeAI's various features. +## The [Getting Started Guide](../help/gettingStartedWithAI) +A getting started guide for those new to AI image generation. + ## The Basics ### * The [Web User Interface](WEB.md) Guide to the Web interface. 
Also see the [WebUI Hotkeys Reference Guide](WEBUIHOTKEYS.md) diff --git a/mkdocs.yml b/mkdocs.yml index a4bd638d515..4857b6b4bbd 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -124,6 +124,7 @@ nav: - Overview: 'nodes/overview.md' - Features: - Overview: 'features/index.md' + - New to InvokeAI?: 'help/gettingStartedWithAI.md' - Concepts: 'features/CONCEPTS.md' - Configuration: 'features/CONFIGURATION.md' - ControlNet: 'features/CONTROLNET.md' @@ -157,7 +158,7 @@ nav: - Inpainting: 'deprecated/INPAINTING.md' - Outpainting: 'deprecated/OUTPAINTING.md' - Help: - - Getting Started with AI: 'help/gettingStartedWithAI.md' + - Getting Started: 'help/gettingStartedWithAI.md' - Sampler Convergence: 'help/SAMPLER_CONVERGENCE.md' - Other: - Contributors: 'other/CONTRIBUTORS.md' From 5300e353d850c79f005eca2b994055b1a64e849f Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Thu, 27 Jul 2023 18:58:44 +1000 Subject: [PATCH 3/7] updated community nodes doc --- docs/nodes/communityNodes.md | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-) diff --git a/docs/nodes/communityNodes.md b/docs/nodes/communityNodes.md index 3e6c9113467..907c5fc4249 100644 --- a/docs/nodes/communityNodes.md +++ b/docs/nodes/communityNodes.md @@ -29,8 +29,14 @@ The nodes linked below have been developed and contributed by members of the Inv ![2fe2150c-fd08-4e26-8c36-f0610bf441bb](https://github.com/ymgenesis/InvokeAI/assets/25252829/b0f7ecfe-f093-4147-a904-b9f131b41dc9) ![831b6b98-4f0f-4360-93c8-69a9c1338cbe](https://github.com/ymgenesis/InvokeAI/assets/25252829/fc7b0622-e361-4155-8a76-082894d084f0) +### Ideal Size + +**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of. 
+ +**Node Link:** https://github.com/JPPhoto/ideal-size-node + -------------------------------- -### Super Cool Node Template +### Example Node Template **Description:** This node allows you to do super cool things with InvokeAI. @@ -40,13 +46,9 @@ The nodes linked below have been developed and contributed by members of the Inv **Output Examples** -![Invoke AI](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png) - -### Ideal Size - -**Description:** This node calculates an ideal image size for a first pass of a multi-pass upscaling. The aim is to avoid duplication that results from choosing a size larger than the model is capable of. - -**Node Link:** https://github.com/JPPhoto/ideal-size-node +![Example Image](https://invoke-ai.github.io/InvokeAI/assets/invoke_ai_banner.png){: style="height:115px;width:240px"} ## Help If you run into any issues with a node, please post in the [InvokeAI Discord](https://discord.gg/ZmtBAhwWhy). + + From 562c937a14e64d55bcc4097cae88f0c781d14d0d Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Thu, 27 Jul 2023 21:46:39 +1000 Subject: [PATCH 4/7] Updated new user flow --- .../contribution_guides/development.md | 2 +- docs/index.md | 89 ++++++++----------- .../{index.md => INSTALLATION.md} | 46 +++++++++- mkdocs.yml | 5 +- 4 files changed, 85 insertions(+), 57 deletions(-) rename docs/installation/{index.md => INSTALLATION.md} (57%) diff --git a/docs/contributing/contribution_guides/development.md b/docs/contributing/contribution_guides/development.md index 59c2b05c0ef..a867c7fa1f7 100644 --- a/docs/contributing/contribution_guides/development.md +++ b/docs/contributing/contribution_guides/development.md @@ -16,7 +16,7 @@ If you don't feel ready to make a code contribution yet, no problem! You can als There are two paths to making a development contribution: 1. Choosing an open issue to address. 
Open issues can be found in the [Issues](https://github.com/invoke-ai/InvokeAI/issues?q=is%3Aissue+is%3Aopen) section of the InvokeAI repository. These are tagged by the issue type (bug, enhancement, etc.) along with the “good first issues” tag denoting if they are suitable for first time contributors. - 1. Additional items can be found on our roadmap <******************************link to roadmap>******************************. The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an inflight item you’d like to help with, reach out to the contributor assigned to the item to see how you can help. + 1. Additional items can be found on our [roadmap](https://github.com/orgs/invoke-ai/projects/7). The roadmap is organized in terms of priority, and contains features of varying size and complexity. If there is an inflight item you’d like to help with, reach out to the contributor assigned to the item to see how you can help. 2. Opening a new issue or feature to add. **Please make sure you have searched through existing issues before creating new ones.** *Regardless of what you choose, please post in the [#dev-chat](https://discord.com/channels/1020123559063990373/1049495067846524939) channel of the Discord before you start development in order to confirm that the issue or feature is aligned with the current direction of the project. We value our contributors time and effort and want to ensure that no one’s time is being misspent.* diff --git a/docs/index.md b/docs/index.md index 319df59bae1..15ced4440b4 100644 --- a/docs/index.md +++ b/docs/index.md @@ -11,6 +11,33 @@ title: Home ``` --> + + + + + +
@@ -70,61 +97,23 @@ image-to-image generator. It provides a streamlined process with various new features and options to aid the image generation process. It runs on Windows, Mac and Linux machines, and runs on GPU cards with as little as 4 GB of RAM. -**Quick links**: [Discord Server] -[Code and Downloads] [Bug Reports] [Discussion, Ideas & -Q&A] -
-!!! note - - This fork is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates. They will help aid diagnose issues faster. - -## :octicons-package-dependencies-24: Installation - -This fork is supported across Linux, Windows and Macintosh. Linux users can use -either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm -driver). +!!! Note -### [Installation Getting Started Guide](installation) -#### **[Automated Installer](installation/010_INSTALL_AUTOMATED.md)** -✅ This is the recommended installation method for first-time users. -#### [Manual Installation](installation/020_INSTALL_MANUAL.md) -This method is recommended for experienced users and developers -#### [Docker Installation](installation/040_INSTALL_DOCKER.md) -This method is recommended for those familiar with running Docker containers -### Other Installation Guides - - [PyPatchMatch](installation/060_INSTALL_PATCHMATCH.md) - - [XFormers](installation/070_INSTALL_XFORMERS.md) - - [CUDA and ROCm Drivers](installation/030_INSTALL_CUDA_AND_ROCM.md) - - [Installing New Models](installation/050_INSTALLING_MODELS.md) + This project is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates as it will help aid response time. -## :fontawesome-solid-computer: Hardware Requirements +## :octicons-link-24: Quick Links -### :octicons-cpu-24: System - -You wil need one of the following: - -- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more VRAM memory. -- :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux - only) -- :fontawesome-brands-apple: An Apple computer with an M1 chip. 
- -We do **not recommend** the following video cards due to issues with their -running in half-precision mode and having insufficient VRAM to render 512x512 -images in full-precision mode: - -- NVIDIA 10xx series cards such as the 1080ti -- GTX 1650 series cards -- GTX 1660 series cards - -### :fontawesome-solid-memory: Memory and Disk - -- At least 12 GB Main Memory RAM. -- At least 18 GB of free disk space for the machine learning model, Python, and - all its dependencies. + ## :octicons-gift-24: InvokeAI Features diff --git a/docs/installation/index.md b/docs/installation/INSTALLATION.md similarity index 57% rename from docs/installation/index.md rename to docs/installation/INSTALLATION.md index e30e5b59e0c..ee37807d897 100644 --- a/docs/installation/index.md +++ b/docs/installation/INSTALLATION.md @@ -1,6 +1,4 @@ ---- -title: Overview ---- +# Overview We offer several ways to install InvokeAI, each one suited to your experience and preferences. We suggest that everyone start by @@ -15,6 +13,48 @@ See the [troubleshooting section](010_INSTALL_AUTOMATED.md#troubleshooting) of the automated install guide for frequently-encountered installation issues. +This fork is supported across Linux, Windows and Macintosh. Linux users can use +either an Nvidia-based card (with CUDA support) or an AMD card (using the ROCm +driver). + +### [Installation Getting Started Guide](installation) +#### **[Automated Installer](010_INSTALL_AUTOMATED.md)** +✅ This is the recommended installation method for first-time users. 
+#### [Manual Installation](020_INSTALL_MANUAL.md)
+This method is recommended for experienced users and developers.
+#### [Docker Installation](040_INSTALL_DOCKER.md)
+This method is recommended for those familiar with running Docker containers.
+### Other Installation Guides
+ - [PyPatchMatch](060_INSTALL_PATCHMATCH.md)
+ - [XFormers](070_INSTALL_XFORMERS.md)
+ - [CUDA and ROCm Drivers](030_INSTALL_CUDA_AND_ROCM.md)
+ - [Installing New Models](050_INSTALLING_MODELS.md)
+
+## :fontawesome-solid-computer: Hardware Requirements
+
+### :octicons-cpu-24: System
+
+You will need one of the following:
+
+- :simple-nvidia: An NVIDIA-based graphics card with 4 GB or more VRAM memory.
+- :simple-amd: An AMD-based graphics card with 4 GB or more VRAM memory (Linux
+ only)
+- :fontawesome-brands-apple: An Apple computer with an M1 chip.
+
+### :fontawesome-solid-memory: Memory and Disk
+
+- At least 12 GB Main Memory RAM.
+- At least 18 GB of free disk space for the machine learning model, Python, and
+ all its dependencies.
+
+We do **not recommend** the following video cards due to issues with their
+running in half-precision mode and having insufficient VRAM to render 512x512
+images in full-precision mode:
+
+- NVIDIA 10xx series cards such as the 1080ti
+- GTX 1650 series cards
+- GTX 1660 series cards
+
+## Installation options
+
1.
[Automated Installer](010_INSTALL_AUTOMATED.md)
diff --git a/mkdocs.yml b/mkdocs.yml
index 4857b6b4bbd..6cbf4109719 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -36,7 +36,6 @@ theme:
 - navigation.instant
 - navigation.tabs
 - navigation.tabs.sticky
- - navigation.top
 - navigation.tracking
 - navigation.indexes
 - navigation.path
@@ -102,9 +101,9 @@ plugins:
 nav:
 - Home: 'index.md'
 - Installation:
- - Overview: 'installation/index.md'
+ - Overview: 'installation/INSTALLATION.md'
 - Installing with the Automated Installer: 'installation/010_INSTALL_AUTOMATED.md'
- - Installing manually: 'installation/020_INSTALL_MANUAL.md'
+ - Installing Manually: 'installation/020_INSTALL_MANUAL.md'
 - NVIDIA Cuda / AMD ROCm: 'installation/030_INSTALL_CUDA_AND_ROCM.md'
 - Installing with Docker: 'installation/040_INSTALL_DOCKER.md'
 - Installing Models: 'installation/050_INSTALLING_MODELS.md'
From 514722d67aac911850122f8dd3d913ea1d852983 Mon Sep 17 00:00:00 2001
From: Millun Atluri
Date: Fri, 28 Jul 2023 18:35:05 +1000
Subject: [PATCH 5/7] Update definitions to be more accurate
---
 docs/help/gettingStartedWithAI.md | 12 +++++++-----
 docs/installation/INSTALLATION.md | 8 ++++++++
 2 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/docs/help/gettingStartedWithAI.md b/docs/help/gettingStartedWithAI.md
index 7fcc139232d..0cb75570c1a 100644
--- a/docs/help/gettingStartedWithAI.md
+++ b/docs/help/gettingStartedWithAI.md
@@ -42,6 +42,8 @@
 ## Terms & Concepts
 
+If you're interested in learning more, check out [this presentation](https://docs.google.com/presentation/d/1IO78i8oEXFTZ5peuHHYkVF-Y3e2M6iM5tCnc-YBfcCM/edit?usp=sharing) from one of our maintainers (@lstein).
+
 ### Stable Diffusion
 
 Stable Diffusion is a deep learning, text-to-image model that is the foundation of the capabilities found in InvokeAI.
Since the release of Stable Diffusion, there have been many subsequent models created based on Stable Diffusion that are designed to generate specific types of images. @@ -78,16 +80,16 @@ Schedulers can be intricate and there's often a balance to strike between how qu ### Low-Rank Adaptations / LoRAs -Low-Rank Adaptations (LoRAs) ****are like a smaller, more focused version of model, intended to focus on training a better understanding of how a specific character, style, or concept looks. +Low-Rank Adaptations (LoRAs) are like a smaller, more focused version of models, intended to focus on training a better understanding of how a specific character, style, or concept looks. -### Embeddings +### Textual Inversion Embeddings -Embeddings, like LoRAs, assist with more easily prompting for certain characters, styles, or concepts. However, embeddings are trained to more update the relationship between a specific word (known as the “trigger”) and the intended output. Embeddings may sometimes also be referred to as Textual Inversions (TIs). +Textual Inversion Embeddings, like LoRAs, assist with more easily prompting for certain characters, styles, or concepts. However, embeddings are trained to update the relationship between a specific word (known as the “trigger”) and the intended output. ### ControlNet -ControlNet is a neural network model that can be used to control output from models. This can take many forms, such as controlling poses of people in generated images or providing edges to based image generation on. The impact of the ControlNet can also be adjusted to increase or decrease the similarity of the generated image to the ControlNet. +ControlNets are neural network models that are able to extract key features from an existing image and use these features to guide the output of the image generation model. ### VAE -Variational auto-encoder (VAE) is a generative AI algorithm that helps to generate finer details such as better faces, hands, colors etc. 
\ No newline at end of file
+Variational auto-encoder (VAE) is an encode/decode model that translates the "latents" image produced during the image generation process to the large pixel images that we see.
\ No newline at end of file
diff --git a/docs/installation/INSTALLATION.md b/docs/installation/INSTALLATION.md
index ee37807d897..b6f251fe48a 100644
--- a/docs/installation/INSTALLATION.md
+++ b/docs/installation/INSTALLATION.md
@@ -41,6 +41,14 @@ You will need one of the following:
 only)
- :fontawesome-brands-apple: An Apple computer with an M1 chip.
+**SDXL 1.0 Requirements**
+To use SDXL, users must have one of the following:
+- :simple-nvidia: An NVIDIA-based graphics card with 8 GB or more VRAM memory.
+- :simple-amd: An AMD-based graphics card with 16 GB or more VRAM memory (Linux
+ only)
+- :fontawesome-brands-apple: An Apple computer with an M1 chip.
+
+
 ### :fontawesome-solid-memory: Memory and Disk
 
- At least 12 GB Main Memory RAM.
From 50e00fecebd2b5a899dfd0a6d4693e2cb17de934 Mon Sep 17 00:00:00 2001
From: Alexandre Macabies
Date: Sun, 30 Jul 2023 16:25:12 +0200
Subject: [PATCH 6/7] Add missing Optional on a few nullable fields.
--- invokeai/app/api/dependencies.py | 3 ++- invokeai/backend/install/model_install_backend.py | 6 +++--- invokeai/backend/model_management/model_manager.py | 2 +- 3 files changed, 6 insertions(+), 5 deletions(-) diff --git a/invokeai/app/api/dependencies.py b/invokeai/app/api/dependencies.py index a186daedf56..d609ce3be2c 100644 --- a/invokeai/app/api/dependencies.py +++ b/invokeai/app/api/dependencies.py @@ -1,5 +1,6 @@ # Copyright (c) 2022 Kyle Schouviller (https://github.com/kyle0654) +from typing import Optional from logging import Logger import os from invokeai.app.services.board_image_record_storage import ( @@ -54,7 +55,7 @@ def check_internet() -> bool: class ApiDependencies: """Contains and initializes all dependencies for the API""" - invoker: Invoker = None + invoker: Optional[Invoker] = None @staticmethod def initialize(config: InvokeAIAppConfig, event_handler_id: int, logger: Logger = logger): diff --git a/invokeai/backend/install/model_install_backend.py b/invokeai/backend/install/model_install_backend.py index e6e400ca70c..0845fea43ae 100644 --- a/invokeai/backend/install/model_install_backend.py +++ b/invokeai/backend/install/model_install_backend.py @@ -7,7 +7,7 @@ from dataclasses import dataclass, field from pathlib import Path from tempfile import TemporaryDirectory -from typing import List, Dict, Callable, Union, Set +from typing import Optional, List, Dict, Callable, Union, Set import requests from diffusers import DiffusionPipeline @@ -86,8 +86,8 @@ class ModelLoadInfo: name: str model_type: ModelType base_type: BaseModelType - path: Path = None - repo_id: str = None + path: Optional[Path] = None + repo_id: Optional[str] = None description: str = "" installed: bool = False recommended: bool = False diff --git a/invokeai/backend/model_management/model_manager.py b/invokeai/backend/model_management/model_manager.py index e381fef567a..832a96e18f0 100644 --- a/invokeai/backend/model_management/model_manager.py +++ 
b/invokeai/backend/model_management/model_manager.py @@ -276,7 +276,7 @@ class ModelInfo: hash: str location: Union[Path, str] precision: torch.dtype - _cache: ModelCache = None + _cache: Optional[ModelCache] = None def __enter__(self): return self.context.__enter__() From 0691e0a12a960762ead493158eeba034805e74c6 Mon Sep 17 00:00:00 2001 From: Millun Atluri Date: Mon, 31 Jul 2023 15:35:20 +1000 Subject: [PATCH 7/7] Few modifications to getting started doc --- docs/help/gettingStartedWithAI.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/help/gettingStartedWithAI.md b/docs/help/gettingStartedWithAI.md index 0cb75570c1a..1f22e1edba5 100644 --- a/docs/help/gettingStartedWithAI.md +++ b/docs/help/gettingStartedWithAI.md @@ -40,6 +40,7 @@ This is a high level walkthrough of some of the concepts and terms you’ll see - Tweak and Iterate - Remember, it’s best to change one thing at a time so you know what is working and what isn't. Sometimes you just need to try a new image, and other times using a new prompt might be the ticket. For testing, consider turning off the “random” Seed - Using the same seed with the same settings will produce the same image, which makes it the perfect way to learn exactly what your changes are doing. - Explore Advanced Settings - InvokeAI has a full suite of tools available to allow you complete control over your image creation process - Check out our [docs if you want to learn more](https://invoke-ai.github.io/InvokeAI/features/). + ## Terms & Concepts If you're interested in learning more, check out [this presentation](https://docs.google.com/presentation/d/1IO78i8oEXFTZ5peuHHYkVF-Y3e2M6iM5tCnc-YBfcCM/edit?usp=sharing) from one of our maintainers (@lstein). 
@@ -60,8 +61,6 @@ Invoke offers a simple way to download several different models upon installatio
 
- *Models that contain “inpainting” in the name are designed for use with the inpainting feature of the Unified Canvas*
 
-### Noise
-
 ### Scheduler
 
 Schedulers guide the process of removing noise (de-noising) from data. They determine:
@@ -92,4 +91,5 @@ ControlNets are neural network models that are able to extract key features from
 
 ### VAE
 
-Variational auto-encoder (VAE) is an encode/decode model that translates the "latents" image produced during the image generation process to the large pixel images that we see.
\ No newline at end of file
+Variational auto-encoder (VAE) is an encode/decode model that translates the "latents" image produced during the image generation process to the large pixel images that we see.
+
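The `Optional` annotations added in PATCH 6/7 deserve a brief illustration: a field annotated `Path` but defaulted to `None` misleads static type checkers, while `Optional[Path]` makes the None case explicit and pushes callers to handle it. The sketch below is a simplified stand-in for `ModelLoadInfo` (the real class has more fields), and `describe` is an invented helper for illustration only:

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Optional

@dataclass
class ModelLoadInfo:
    """Simplified stand-in for the real class in model_install_backend.py."""
    name: str
    # Before the fix these were annotated `Path = None` / `str = None`,
    # which strict type checkers reject: None is not a Path or a str.
    path: Optional[Path] = None
    repo_id: Optional[str] = None

def describe(info: ModelLoadInfo) -> str:
    # With Optional, the checker insists on a None check before use.
    if info.path is None:
        return f"{info.name}: no local path yet"
    return f"{info.name}: {info.path}"

assert describe(ModelLoadInfo(name="toy-model")) == "toy-model: no local path yet"
```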