Releases: invoke-ai/InvokeAI

InvokeAI Version 2.0.2 - A Stable Diffusion Toolkit

18 Oct 20:35

The invoke-ai team is excited to share the release of InvokeAI 2.0 - A Stable Diffusion Toolkit, a project that aims to provide both enthusiasts and professionals with a suite of robust image creation tools. Optimized for efficiency, InvokeAI needs only ~3.5GB of VRAM to generate a 512x768 image (and less for smaller images), and is compatible with Windows, Linux, and Mac (M1 & M2).

InvokeAI was one of the earliest forks of the core CompVis repo (formerly lstein/stable-diffusion), and recently evolved into a full-fledged, community-driven, open-source Stable Diffusion toolkit named InvokeAI. Version 2.0.0 of the tool introduces an entirely new WebUI front-end with a Desktop mode, and an optimized back-end server that can be driven via the CLI or extended with your own fork.

Release 2.0.2 updates three Python dependencies with recently reported critical security vulnerabilities, and enhances the documentation. Otherwise, the feature set is identical to 2.0.1.

This version of the app improves in-app workflows leveraging GFPGAN and CodeFormer for face restoration, and RealESRGAN for upscaling. The CLI also supports a wide variety of features:

  • Inpainting
  • Outpainting
  • Negative Prompts (prompt unconditioning)
  • Fast online model switching
  • Textual Inversion
  • Improved Quality for Hi-Resolution Images (Embiggen, Hi-res Fixes, etc.)
  • And more...
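Several of the CLI features above can be sketched as a short interactive session. The command name, flags, and bracket syntax below are recalled from the 2.0-era documentation and should be treated as assumptions; consult the documentation shipped with this release for exact usage.

```
# Illustrative invoke.py session -- flag names and syntax are assumptions,
# not confirmed by these release notes
$ python scripts/invoke.py
invoke> "a cabin in the woods [fog, blur]" -s 50 -W 512 -H 768  # [..] terms act as a negative prompt
invoke> !switch waifu-diffusion                                  # fast online model switching
invoke> "portrait of an explorer" -G 0.8 -U 2                    # face restoration + 2x upscale
```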

Planned future updates include UI-driven outpainting/inpainting, robust Cross Attention support, and an advanced node workflow for automating and sharing your workflows with the community.

What's Changed

New Contributors

Full Changelog: v2.0.1...v2.0.2

InvokeAI Version 2.0.1 - A Stable Diffusion Toolkit

14 Oct 20:29

Release 2.0.1 corrects an error that caused the k* samplers to produce noisy images at high step counts. Otherwise, the feature set is the same as 2.0.0.

What's Changed

New Contributors

Full Changelog: v2.0.0...v2.0.1

InvokeAI Version 2.0.0 - A Stable Diffusion Toolkit

10 Oct 13:45

This version of the app improves in-app workflows leveraging GFPGAN and CodeFormer for face restoration, and RealESRGAN for upscaling. The CLI also supports a wide variety of features:

  • Inpainting
  • Outpainting
  • Negative Prompts (prompt unconditioning)
  • Textual Inversion
  • Improved Quality for Hi-Resolution Images (Embiggen, Hi-res Fixes, etc.)
  • And more...

Planned future updates include UI-driven outpainting/inpainting, robust Cross Attention support, and an advanced node workflow for automating and sharing your workflows with the community.

SD-Dream Version 1.14.1

12 Sep 22:15
9b28c65

This is identical to release 1.14 except that it reverts the name of the conda environment from "sd-ldm" back to the original "ldm".

Features from 1.14:

  • Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
  • Full support for Apple hardware with M1 or M2 chips.
  • Add "seamless mode" for circular tiling of images, which generates beautiful effects (prixt).
  • Inpainting support.
  • Improved web server GUI.
  • Lots of code and documentation cleanups.
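The seamless-tiling feature above might be invoked as follows; the flag name and session syntax are assumptions based on this release line, not confirmed by these notes.

```
# Illustrative dream.py session (flag names are assumptions)
(ldm) $ python scripts/dream.py
dream> "an endless field of wildflowers" --seamless -W 512 -H 512  # image tiles edge-to-edge
```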

SD-Dream Version 1.14

12 Sep 18:38
  • Memory optimizations for small-RAM cards. 512x512 now possible on 4 GB GPUs.
  • Full support for Apple hardware with M1 or M2 chips.
  • Add "seamless mode" for circular tiling of images, which generates beautiful effects (prixt).
  • Inpainting support.
  • Improved web server GUI.
  • Lots of code and documentation cleanups.

SD-Dream Version 1.13

03 Sep 16:29

New features and bug fixes:

  • Supports image variations (see VARIATIONS.md) (Kevin Gibbons and many contributors and reviewers)
  • Supports a Google Colab notebook for a standalone server running on Google hardware (Arturo Mendivil)
  • WebUI supports GFPGAN/ESRGAN facial reconstruction and upscaling (Kevin Gibbons)
  • WebUI supports incremental display of in-progress images during generation (Kevin Gibbons)
  • A new configuration file scheme that allows new models (including the upcoming stable-diffusion-v1.5) to be added without altering the code (David Wager)
  • Can specify --grid on the dream.py command line as the default.
  • Miscellaneous internal bug and stability fixes.
  • Works on M1 Apple hardware (several contributors, with particular thanks to James Reynolds)
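The --grid default noted above might be used as follows; apart from the --grid flag itself, the launch command and per-prompt flags are assumptions, not confirmed by these release notes.

```
# Illustrative: launch with grid output as the default (per the note above)
$ python scripts/dream.py --grid
dream> "a castle on a hill" -n 4   # multiple samples rendered as a single grid image
```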