This release (along with the post1 and post2 follow-on releases) expands support for additional LoRA and LyCORIS models, upgrades diffusers versions, and fixes a few bugs.
A number of LoRA/LyCORIS fine-tune files (those which alter the text encoder as well as the unet model) were not having the desired effect in InvokeAI. This bug has now been fixed. Full documentation of LoRA support is available at InvokeAI LoRA Support.

Previously, InvokeAI did not distinguish between LoRA/LyCORIS models based on Stable Diffusion v1.5 vs those based on v2.0 and 2.1, leading to a crash when an incompatible model was loaded. This has now been fixed. In addition, the web pulldown menus for LoRA and Textual Inversion selection have been enhanced to show only those files that are compatible with the currently-selected Stable Diffusion model.

Support for the newer LoKR LyCORIS files has been added.
The major enhancement in this version is that NVIDIA users no longer need to decide between speed and reproducibility. Previously, if you activated the Xformers library, you would see improvements in speed and memory usage, but multiple images generated with the same seed and other parameters would be slightly different from each other. This is no longer the case. Relative to 2.3.5 you will see improved performance when running without Xformers, and even better performance when Xformers is activated. In both cases, images generated with the same settings will be identical.
Here are the new library versions:

| Library | Version |
| --- | --- |
| Torch | 2.0.0 |
| Diffusers | 0.16.1 |
| Xformers | 0.0.19 |
| Compel | 1.1.5 |
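If you want to confirm what is actually installed in your own environment, a quick sketch like this should do it (the names passed in are the PyPI distribution names):

```sh
# Print the installed versions of the four upgraded libraries.
python -c "from importlib.metadata import version; \
[print(p, version(p)) for p in ('torch', 'diffusers', 'xformers', 'compel')]"
```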
Other Improvements

When a model is loaded for the first time, InvokeAI calculates its checksum for incorporation into the PNG metadata. This process could take up to a minute on network-mounted disks and WSL mounts. This release noticeably speeds up the process.
The "import models from directory" and "import from URL" functionality in the console-based model installer has now been fixed.
When running the WebUI, we have reduced the number of times that InvokeAI reaches out to HuggingFace to fetch the list of embeddable Textual Inversion models. We have also caught and fixed a problem with the updater not correctly detecting when another instance of the updater is running.
What's New in 2.3.4
This feature release adds support for LoRA (Low-Rank Adaptation) and LyCORIS (Lora beYond Conventional) models, as well as some minor bug fixes.

LoRA files contain fine-tuning weights that enable particular styles, subjects or concepts to be applied to generated images. LyCORIS files are an extended variant of LoRA. InvokeAI supports the most common LoRA/LyCORIS format, which ends in the suffix .safetensors. You will find numerous LoRA and LyCORIS models for download at Civitai, and a small but growing number at Hugging Face. Full documentation of LoRA support is available at InvokeAI LoRA Support. (Pre-release note: this page will only be available after release.)
To use LoRA/LyCORIS models in InvokeAI:

Download the .safetensors files of your choice and place them in /path/to/invokeai/loras. This directory was not present in earlier versions of InvokeAI but will be created for you the first time you run the command-line or web client. You can also create the directory manually.
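A minimal sketch, assuming the runtime directory is ~/invokeai and using the sushi file from the examples below (the download path is illustrative):

```sh
# Create the loras directory (a first run of InvokeAI also creates it).
mkdir -p ~/invokeai/loras
# Drop a downloaded LoRA file into it.
cp ~/Downloads/sushi.safetensors ~/invokeai/loras/
```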
Add withLora(lora-file,weight) to your prompts. The weight is optional and will default to 1.0. A few examples, assuming that a LoRA file named loras/sushi.safetensors is present:

```
family sitting at dinner table eating sushi withLora(sushi,0.9)
family sitting at dinner table eating sushi withLora(sushi, 0.75)
family sitting at dinner table eating sushi withLora(sushi)
```
Multiple withLora() prompt fragments are allowed. The weight can be arbitrarily large, but the useful range is roughly 0.5 to 1.0. Higher weights make the LoRA's influence stronger. Negative weights are also allowed, which can lead to some interesting effects.

Generate as you usually would! If you find that the image is too "crisp" try reducing the overall CFG value or reducing individual LoRA weights. As is the case with all fine-tunes, you'll get the best results when running the LoRA on top of a model similar to, or identical with, the one that was used during the LoRA's training. Don't try to load an SD 1.x-trained LoRA into an SD 2.x model, or vice versa. This will trigger a non-fatal error message and generation will not proceed.

You can change the location of the loras directory by passing the --lora_directory option to invokeai.
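If you keep your LoRA files elsewhere, a launch along these lines should work (the path is illustrative):

```sh
# Start InvokeAI with a custom LoRA directory.
invokeai --lora_directory ~/sd-models/loras
```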
This version adds two new web interface buttons for inserting LoRA and Textual Inversion triggers into the prompt as shown in the screenshot below.
Clicking on one or the other of the buttons will bring up a menu of available LoRA/LyCORIS or Textual Inversion trigger terms. Select a menu item to insert the properly-formatted withLora() fragment or Textual Inversion trigger term into the prompt.

Currently terms are inserted into the positive prompt textbox only. However, some textual inversion embeddings are designed to be used with negative prompts. To move a textual inversion trigger into the negative prompt, simply cut and paste it.
By default the Textual Inversion menu only shows locally installed models found at startup time in /path/to/invokeai/embeddings. However, InvokeAI has the ability to dynamically download and install additional Textual Inversion embeddings from the HuggingFace Concepts Library. You may choose to display the most popular of these (with five or more likes) in the Textual Inversion menu by going to Settings and turning on "Show Textual Inversions from HF Concepts Library." When this option is activated, the locally-installed TI embeddings will be shown first, followed by uninstalled terms from Hugging Face. See The Hugging Face Concepts Library and Importing Textual Inversion files for more information.

This release changes model switching behavior so that the command-line and Web UIs save the last model used and restore it the next time they are launched. It also improves the behavior of the installer so that the pip utility is kept up to date.

These are known bugs in the release.
The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.

Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
This is a bugfix and minor feature release.
Since version 2.3.2 the following bugs have been fixed:

Bugs

- When using legacy checkpoints with an external VAE, the VAE file is now scanned for malware prior to loading. Previously only the main model weights file was scanned.
- Textual inversion will select an appropriate batch size based on whether xformers is active, and will default to xformers enabled if the library is detected.
- The batch script log file names have been fixed to be compatible with Windows.
- Occasional corruption of the .next_prefix file (which stores the next output file name in sequence) on Windows systems is now detected and corrected.
- Legacy config files that have no personalization (textual inversion) section can now be loaded.
- An infinite loop when opening the developer's console from within the invoke.sh script has been corrected.
- Documentation fixes, including a recipe for detecting and fixing problems with the AMD GPU ROCm driver.
Enhancements
- It is now possible to load and run several community-contributed SD-2.0 based models, including the often-requested "Illuminati" model.
- The "NegativePrompts" embedding file, and others like it, can now be loaded by placing it in the InvokeAI embeddings directory.
- If no --model is specified at launch time, InvokeAI will remember the last model used and restore it the next time it is launched.
- On Linux systems, the invoke.sh launcher now uses a prettier console-based interface. To take advantage of it, install the dialog package using your package manager (e.g. sudo apt install dialog).
- When loading legacy models (safetensors/ckpt) you can specify a custom config file and/or a VAE by placing like-named files in the same directory as the model, following this example:

```
my-favorite-model.ckpt
my-favorite-model.yaml
my-favorite-model.vae.pt      # or my-favorite-model.vae.safetensors
```
These are known bugs in the release.

The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.

Windows Defender will sometimes raise Trojan or backdoor alerts for the codeformer.pth face restoration model, as well as the CIDAS/clipseg and runwayml/stable-diffusion-v1.5 models. These are false positives and can be safely ignored. InvokeAI performs a malware scan on all models as they are loaded. For additional security, you should use safetensors models whenever they are available.
This is a bugfix and minor feature release.
Since version 2.3.1 the following bugs have been fixed:

- Black images appearing for potential NSFW images when generating with legacy checkpoint models and both --no-nsfw_checker and --ckpt_convert turned on.
- Black images appearing when generating from models fine-tuned on Stable-Diffusion-2-1-base. When importing V2-derived models, you may be asked to select whether the model was derived from a "base" model (512 pixels) or the 768-pixel SD-2.1 model.
- The "Use All" button was not restoring the Hi-Res Fix setting on the WebUI.
- When using the model installer console app, models failed to import correctly when importing from directories with spaces in their names. A similar issue with the output directory was also fixed.
- Crashes that occurred during model merging.
- Restored the previous naming of Stable Diffusion base and 768 models.
- Upgraded to latest versions of diffusers, transformers, safetensors and accelerate libraries upstream. We hope that this will fix the assertion NDArray > 2**32 issue that MacOS users have had when generating images larger than 768x768 pixels. Please report back.
As part of the upgrade to diffusers, the location of the diffusers-based models has changed from models/diffusers to models/hub. When you launch InvokeAI for the first time, it will prompt you to OK a one-time move. This should be quick and harmless, but if you have modified your models/diffusers directory in some way, for example using symlinks, you may wish to cancel the migration and make appropriate adjustments.

New "invokeai-batch" script
2.3.2 introduces a new command-line only script called invokeai-batch that can be used to generate hundreds of images from prompts and settings that vary systematically. This can be used to try the same prompt across multiple combinations of models, steps, CFG settings and so forth. It also allows you to template prompts and generate a combinatorial list like:

```
a shack in the mountains, photograph
a shack in the mountains, watercolor
a shack in the mountains, oil painting
a chalet in the mountains, photograph
a chalet in the mountains, watercolor
a chalet in the mountains, oil painting
a shack in the desert, photograph
...
```
If you have a system with multiple GPUs, or a single GPU with lots of VRAM, you can parallelize generation across the combinatorial set, reducing wait times and using your system's resources efficiently (make sure you have good GPU cooling).

To try invokeai-batch out, launch the "developer's console" using the invoke launcher script, or activate the invokeai virtual environment manually. From the console, give the command invokeai-batch --help in order to learn how the script works and create your first template file for dynamic prompt generation.
These are known bugs in the release.

The Ancestral DPMSolverMultistepScheduler (k_dpmpp_2a) sampler is not yet implemented for diffusers models and will disappear from the WebUI Sampler menu when a diffusers model is selected.

Windows Defender will sometimes raise a Trojan alert for the codeformer.pth face restoration model. As far as we have been able to determine, this is a false positive and can be safely whitelisted.
This is primarily a bugfix release, but it does provide several new features that will improve the user experience.
InvokeAI now makes it convenient to add, remove and modify models. You can individually import models that are stored on your local system, scan an entire folder and its subfolders for models and import them automatically, and even directly import models from the internet by providing their download URLs. You also have the option of designating a local folder to scan for new models each time InvokeAI is restarted.

There are three ways of accessing the model management features:

From the WebUI, click on the cube to the right of the model selection menu. This will bring up a form that allows you to import models individually from your local disk or scan a directory for models to import.
Using the Model Installer App
Choose option (5) download and install models from the invoke launcher script to start a new console-based application for model management. You can use this to select from a curated set of starter models, or import checkpoint, safetensors, and diffusers models from a local disk or the internet. The example below shows importing two checkpoint URLs from popular SD sites and a HuggingFace diffusers model using its Repository ID. It also shows how to designate a folder to be scanned at startup time for new models to import.

Command-line users can start this app using the command invokeai-model-install.

Using the Command Line Client (CLI)
The !install_model and !convert_model commands have been enhanced to allow entering of URLs and local directories to scan and import. The first command installs .ckpt and .safetensors files as-is. The second one converts them into the faster diffusers format before installation.

Internally InvokeAI is able to probe the contents of a .ckpt or .safetensors file to distinguish among v1.x, v2.x and inpainting models. This means that you do not need to include "inpaint" in your model names to use an inpainting model. Note that Stable Diffusion v2.x models will be autoconverted into a diffusers model the first time you use them.

Please see INSTALLING MODELS for more information on model management.
The installer now launches a console-based UI for setting and changing commonly-used startup options:

After selecting the desired options, the installer installs several support models needed by InvokeAI's face reconstruction and upscaling features and then launches the interface for selecting and installing models shown earlier. At any time, you can edit the startup options by launching invoke.sh/invoke.bat and entering option (6) change InvokeAI startup options.

Command-line users can launch the new configure app using invokeai-configure.

This release also comes with a renewed updater. To do an update without going through a whole reinstallation, launch invoke.sh or invoke.bat and choose option (9) update InvokeAI. This will bring you to a screen that prompts you to update to the latest released version, to the most current development version, or any released or unreleased version you choose by selecting the tag or branch of the desired version.

Command-line users can run this interface by typing invokeai-update.
There are now features to generate horizontal and vertical symmetry during generation. These work by waiting until a selected step in the generation process and then turning on a mirror-image effect. In addition to generating some cool images, you can also use this to make side-by-side comparisons of how an image will look with more or fewer steps. Access this option from the WebUI by selecting Symmetry from the image generation settings, or within the CLI by using the options --h_symmetry_time_pct and --v_symmetry_time_pct (these can be abbreviated to --h_sym and --v_sym like all other options).
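As a sketch of the CLI usage (the prompt text and step count here are illustrative, not taken from the release notes):

```sh
# At the invoke> prompt: render 50 steps, switching on horizontal
# mirroring halfway (50%) through generation.
invoke> "art deco palace facade" -s 50 --h_symmetry_time_pct 0.5
```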
This release introduces a beta version of the WebUI Unified Canvas. To try it out, open up the settings dialogue in the WebUI (gear icon) and select Use Canvas Beta Layout:

Refresh the screen and go to the Unified Canvas (left side of screen, third icon from the top). The new layout is designed to provide more space to work in and to keep the image controls close to the image itself:
+Model conversion and merging within the WebUI
The WebUI now has an intuitive interface for model merging, as well as for permanent conversion of models from legacy .ckpt/.safetensors formats into diffusers format. These options are also available directly from the invoke.sh/invoke.bat scripts.

An easier way to contribute translations to the WebUI

We have migrated our translation efforts to Weblate, a FOSS translation product. Maintaining the growing project's translations is now far simpler for the maintainers and community. Please review our brief translation guide for more information on how to contribute.

Numerous internal bugfixes and performance issues

This release quashes multiple bugs that were reported in 2.3.0. Major internal changes include upgrading to diffusers 0.13.0, and using the compel library for prompt parsing. See Detailed Change Log for a detailed list of bugs caught and squished.

Summary of InvokeAI command line scripts (all accessible via the launcher menu)

| Command | Description |
| --- | --- |
| invokeai | Command line interface |
| invokeai --web | Web interface |
| invokeai-model-install | Model installer with console forms-based front end |
| invokeai-ti --gui | Textual inversion, with a console forms-based front end |
| invokeai-merge --gui | Model merging, with a console forms-based front end |
| invokeai-configure | Startup configuration; can also be used to reinstall support models |
| invokeai-update | InvokeAI software updater |
These are known bugs in the release.

MacOS users generating 768x768 pixel images or greater using diffusers models may experience a hard crash with the assertion NDArray > 2**32. This appears to be an issu...
Transition to diffusers
Version 2.3 provides support for both the traditional .ckpt weight checkpoint files as well as the HuggingFace diffusers format. This introduces several changes you should know about.
A configuration stanza for a legacy checkpoint file will look like this, with a format of ckpt and a weights field that points to the absolute or ROOTDIR-relative location of the ckpt file:

```yaml
inpainting-1.5:
  description: RunwayML SD 1.5 model optimized for inpainting (4.27 GB)
  repo_id: runwayml/stable-diffusion-inpainting
  format: ckpt
  width: 512
  height: 512
  weights: models/ldm/stable-diffusion-v1/sd-v1-5-inpainting.ckpt
  config: configs/stable-diffusion/v1-inpainting-inference.yaml
  vae: models/ldm/stable-diffusion-v1/vae-ft-mse-840000-ema-pruned.ckpt
```
A configuration stanza for a diffusers model hosted at HuggingFace will look like this, with a format of diffusers and a repo_id that points to the repository ID of the model on HuggingFace:

```yaml
stable-diffusion-2.1:
  description: Stable Diffusion version 2.1 diffusers model (5.21 GB)
  repo_id: stabilityai/stable-diffusion-2-1
  format: diffusers
```
A configuration stanza for a diffusers model stored locally should look like this, with a format of diffusers, but a path field that points at the directory that contains model_index.json:

```yaml
waifu-diffusion:
  description: Latest waifu diffusion 1.4
  format: diffusers
  path: models/diffusers/hakurei-haifu-diffusion-1.4
```
InvokeAI now uses the directory ROOTDIR/models to store HuggingFace diffusers models. Consequently, the format of the models directory has changed to mimic the HuggingFace cache directory. When HF_HOME and XDG_HOME are not set, diffusers models are now automatically downloaded and retrieved from the directory ROOTDIR/models/diffusers, while other models are stored in the directory ROOTDIR/models/hub. This organization is the same as that used by HuggingFace for its cache management.
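To see how this looks on disk, a quick sketch (assuming the runtime directory is ~/invokeai; the comments describe what you should expect to find):

```sh
# List the reorganized models directory.
ls ~/invokeai/models
# Expected subdirectories:
#   diffusers/  - HuggingFace diffusers models
#   hub/        - other models, laid out HuggingFace-cache style
```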
This allows you to share diffusers and ckpt model files easily with other machine learning applications that use the HuggingFace libraries. To do this, set the environment variable HF_HOME before starting up InvokeAI to tell it what directory to cache models in. To tell InvokeAI to use the standard HuggingFace cache directory, you would set HF_HOME like this (Linux/Mac):

```sh
export HF_HOME=~/.cache/huggingface
```
Both HuggingFace and InvokeAI will fall back to the XDG_CACHE_HOME environment variable if HF_HOME is not set; this path takes precedence over ROOTDIR/models to allow for the same sharing with other machine learning applications that use HuggingFace libraries.
If you upgrade to InvokeAI 2.3.* from an earlier version, there will be a one-time migration from the old models directory format to the new one. You will see a message about this the first time you start invoke.py.
Both the front and back ends of the model manager have been rewritten to accommodate diffusers. You can import models using their local file path, using their URLs, or their HuggingFace repo_ids. On the command line, all these syntaxes work:

```
!import_model stabilityai/stable-diffusion-2-1-base
!import_model /opt/sd-models/sd-1.4.ckpt
!import_model https://huggingface.co/Fictiverse/Stable_Diffusion_PaperCut_Model/blob/main/PaperCut_v1.ckpt
```
KNOWN BUGS (15 January 2023)

On CUDA systems, the 768 pixel stable-diffusion-2.0 and stable-diffusion-2.1 models can only be run as diffusers models when the xformers library is installed and configured. Without xformers, InvokeAI returns black images.

Inpainting and outpainting have regressed in quality.

Both these issues are being actively worked on.
The invokeai directory

Previously there were two directories to worry about: the directory that contained the InvokeAI source code and the launcher scripts, and the invokeai directory that contained the models files, embeddings, configuration and outputs. With the 2.2.4 release, this dual system is done away with, and everything, including the invoke.bat and invoke.sh launcher scripts, now lives in a directory named invokeai. By default this directory is located in your home directory (e.g. \Users\yourname on Windows), but you can select where it goes at install time.
After installation, you can delete the install directory (the one that the zip file creates when it unpacks). Do not delete or move the invokeai directory!
Initialization file invokeai/invokeai.init

You can place frequently-used startup options in this file, such as the default number of steps or your preferred sampler. To keep everything in one place, this file has now been moved into the invokeai directory and is named invokeai.init.
To update from Version 2.2.3

The easiest route is to download and unpack one of the 2.2.4 installer files. When it asks you for the location of the invokeai runtime directory, respond with the path to the directory that contains your 2.2.3 invokeai. That is, if invokeai lives at C:\Users\fred\invokeai, then answer with C:\Users\fred and answer "Y" when asked if you want to reuse the directory.

The update.sh (update.bat) script that came with the 2.2.3 source installer does not know about the new directory layout and won't be fully functional.
To update to 2.2.5 (and beyond) there's now an update path

As they become available, you can update to more recent versions of InvokeAI using an update.sh (update.bat) script located in the invokeai directory. Running it without any arguments will install the most recent version of InvokeAI. Alternatively, you can install a specific release by running the update.sh script with an argument in the command shell. This syntax accepts the path to the desired release's zip file, which you can find by clicking on the green "Code" button on this repository's home page.
Other 2.2.4 Improvements
Note

This point release removes references to the binary installer from the installation guide. The binary installer is not stable at the current time. First time users are encouraged to use the "source" installer as described in Installing InvokeAI with the Source Installer.
With InvokeAI 2.2, this project now provides enthusiasts and professionals a robust workflow solution for creating AI-generated and human facilitated compositions. Additional enhancements have been made as well, improving safety, ease of use, and installation.

Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows/Linux/Mac (M1 & M2).

You can see the release video here, which introduces the main WebUI enhancement for version 2.2 - The Unified Canvas. This new workflow is the biggest enhancement added to the WebUI to date, and unlocks a stunning amount of potential for users to create and iterate on their creations. The following sections describe what's new for InvokeAI.
Note

The binary installer is not ready for prime time. First time users are recommended to install via the "source" installer accessible through the links at the bottom of this page.
Improvements and changes in this version include:

- Frequently-used command-line options can now be stored in a .invokeai file. See Client.
- The dream.py script has been renamed invoke.py. A dream.py script wrapper remains for backward compatibility.
- A completely new WebGUI, launched with python3 scripts/invoke.py --web.
- Post-processing of previously-generated images using the !fix command.
- A new --hires option on the invoke> line allows larger images to be created without duplicating elements, at the cost of some performance.
- New --perlin and --threshold options allow you to add and control variation during image generation (see Thresholding and Perlin Noise Initialization).
- Command-line completion in invoke.py now works on Windows, Linux and Mac platforms.
- New commands added: !history, !search, !clear.
- Deprecated --full_precision / -F. Simply omit it and invoke.py will auto configure. To switch away from auto, use the new flag like --precision=float32.

Example output file names:

```
000011.455191342.01.png        -- Two files produced by the same command using
000011.455191342.02.png        -- a batch size >1 (e.g. -b2). They have the same seed.
000011.4160627868.grid#1-4.png -- a grid of four images (-g); the whole grid
                                  can be regenerated with the indicated key
```
We as members, contributors, and leaders pledge to make participation in our community a harassment-free experience for everyone, regardless of age, body size, visible or invisible disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming, diverse, inclusive, and healthy community.
Examples of behavior that contributes to a positive environment for our community include:

- Demonstrating empathy and kindness toward other people
- Being respectful of differing opinions, viewpoints, and experiences
- Giving and gracefully accepting constructive feedback
- Accepting responsibility and apologizing to those affected by our mistakes, and learning from the experience
- Focusing on what is best not just for us as individuals, but for the overall community

Examples of unacceptable behavior include:

- The use of sexualized language or imagery, and sexual attention or advances of any kind
- Trolling, insulting or derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or email address, without their explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
Community leaders are responsible for clarifying and enforcing our standards of acceptable behavior and will take appropriate and fair corrective action in response to any behavior that they deem inappropriate, threatening, offensive, or harmful.

Community leaders have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, and will communicate reasons for moderation decisions when appropriate.

This Code of Conduct applies within all community spaces, and also applies when an individual is officially representing the community in public spaces. Examples of representing our community include using an official e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported to the community leaders responsible for enforcement at https://github.com/invoke-ai/InvokeAI/issues. All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the reporter of any incident.

Community leaders will follow these Community Impact Guidelines in determining the consequences for any action they deem in violation of this Code of Conduct:
1. Correction

Community Impact: Use of inappropriate language or other behavior deemed unprofessional or unwelcome in the community.

Consequence: A private, written warning from community leaders, providing clarity around the nature of the violation and an explanation of why the behavior was inappropriate. A public apology may be requested.
2. Warning

Community Impact: A violation through a single incident or series of actions.

Consequence: A warning with consequences for continued behavior. No interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, for a specified period of time. This includes avoiding interactions in community spaces as well as external channels like social media. Violating these terms may lead to a temporary or permanent ban.
3. Temporary Ban

Community Impact: A serious violation of community standards, including sustained inappropriate behavior.

Consequence: A temporary ban from any sort of interaction or public communication with the community for a specified period of time. No public or private interaction with the people involved, including unsolicited interaction with those enforcing the Code of Conduct, is allowed during this period. Violating these terms may lead to a permanent ban.
4. Permanent Ban

Community Impact: Demonstrating a pattern of violation of community standards, including sustained inappropriate behavior, harassment of an individual, or aggression toward or disparagement of classes of individuals.

Consequence: A permanent ban from any sort of public interaction within the community.
This Code of Conduct is adapted from the Contributor Covenant, version 2.0, available at https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by Mozilla's code of conduct enforcement ladder.

For answers to common questions about this code of conduct, see the FAQ at https://www.contributor-covenant.org/faq. Translations are available at https://www.contributor-covenant.org/translations.
The app is published twice, in different build formats:

- A PyPI distribution, installable with pip install invokeai. The updater uses this build.
- An installer zip, attached to the GitHub release.
. The updater uses this build.Make a developer call-out for PRs to merge. Merge and test things out.
While the release workflow does not include end-to-end tests, it does pause before publishing so you can download and test the final build.
The release.yml workflow runs a number of jobs to handle code checks, tests, build and publish on PyPI.

It is triggered on tag push, when the tag matches v*. It doesn't matter if you've prepped a release branch like release/v3.5.0 or are releasing from main - it works the same.

> Because commits are reference-counted, it is safe to create a release branch, tag it, let the workflow run, then delete the branch. So long as the tag exists, that commit will exist.
Run make tag-release to tag the current commit and kick off the workflow.
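Roughly speaking, the make target amounts to tagging and pushing; a sketch (the version and remote name below are illustrative):

```sh
# Tag the current commit and push the tag; the push triggers release.yml.
git tag v3.5.0
git push origin v3.5.0
```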
The release may also be dispatched manually.
The workflow consists of a number of concurrently-run jobs, and two final publish jobs.

The publish jobs require manual approval and are only run if the other jobs succeed.
Job#This job checks that the git ref matches the app version. It matches the ref against the __version__
variable in invokeai/version/invokeai_version.py
.
When the workflow is triggered by tag push, the ref is the tag. If the workflow is run manually, the ref is the target selected from the Use workflow from dropdown.
+This job uses samuelcolvin/check-python-version.
+++Any valid version specifier works, so long as the tag matches the version. The release workflow works exactly the same for
+RC
,post
,dev
, etc.
python-tests
: runs pytest
on matrix of platformspython-checks
: runs ruff
(format and lint)frontend-tests
: runs vitest
frontend-checks
: runs prettier
(format), eslint
(lint), dpdm
(circular refs), tsc
(static type check) and knip
(unused imports)++TODO We should add
+mypy
orpyright
to thecheck-python
job.TODO We should add an end-to-end test job that generates an image.
+
build-installer
Job#This sets up both python and frontend dependencies and builds the python package. Internally, this runs installer/create_installer.sh
and uploads two artifacts:
dist
: the python distribution, to be published on PyPIInvokeAI-installer-${VERSION}.zip
: the installer to be included in the GitHub releaseAt this point, the release workflow pauses as the remaining publish jobs require approval. Time to test the installer.
+Because the installer pulls from PyPI, and we haven't published to PyPI yet, you will need to install from the wheel:
+dist.zip
and the installer from the Summary tab of the workflow--wheel
CLI arg, pointing at the wheel:++The same wheel file is bundled in the installer and in the
+dist
artifact, which is uploaded to PyPI. You should end up with the exactly the same installation as if the installer got the wheel from PyPI.
If testing reveals any issues, no worries. Cancel the workflow, which will cancel the pending publish jobs (you didn't approve them prematurely, right?).
+Now you can start from the top:
+main
and pull in the fixesmake tag-release
to move the tag to HEAD
(which has the fixes) and kick off the release workflow againThe publish jobs will run if any of the previous jobs fail.
+They use GitHub environments, which are configured as trusted publishers on PyPI.
+Both jobs require a maintainer to approve them from the workflow's Summary tab.
+testpypi
or pypi
)++If the version already exists on PyPI, the publish jobs will fail. PyPI only allows a given version to be published once - you cannot change it. If version published on PyPI has a problem, you'll need to "fail forward" by bumping the app version and publishing a followup release.
+
Check the python infrastructure status page for incidents.
+If there are no incidents, contact @hipsterusername or @lstein, who have owner access to GH and PyPI, to see if access has expired or something like that.
Job#Publishes the distribution on the Test PyPI index, using the testpypi
GitHub environment.
This job is not required for the production PyPI publish, but included just in case you want to test the PyPI release.
+If approved and successful, you could try out the test release like this:
+# Create a new virtual environment
+python -m venv ~/.test-invokeai-dist --prompt test-invokeai-dist
+# Install the distribution from Test PyPI
+pip install --index-url https://test.pypi.org/simple/ invokeai
+# Run and test the app
+invokeai-web
+# Cleanup
+deactivate
+rm -rf ~/.test-invokeai-dist
+
publish-pypi
Job#Publishes the distribution on the production PyPI index, using the pypi
GitHub environment.
Once the release is published to PyPI, it's time to publish the GitHub release.
+scripts/get_external_contributions.py
to get a list of external contributions to shout out in the release notes.build
job into the Assets section of the release notes.++TODO Workflows can create a GitHub release from a template and upload release assets. One popular action to handle this is ncipollo/release-action. A future enhancement to the release process could set this up.
+
The build installer workflow can be dispatched manually. This is useful to test the installer for a given branch or tag.
+The release
workflow can be dispatched manually. You must dispatch the workflow from the right tag, else it will fail the version check.
This functionality is available as a fallback in case something goes wonky. Typically, releases should be triggered via tag push as described above.
+ + + + + + + + + + + + + + + + + + + + + + + + +