Load single-file checkpoints directly without conversion (#6510)
* use model_class.load_singlefile() instead of converting; works, but performance is poor
* adjust the convert api - not right just yet
* working, needs sql migrator update
* rename migration_11 before conflict merge with main
* Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py
Co-authored-by: Ryan Dick <[email protected]>
* Update invokeai/backend/model_manager/load/model_loaders/stable_diffusion.py
Co-authored-by: Ryan Dick <[email protected]>
* implement lightweight version-by-version config migration
* simplified config schema migration code
* associate sdxl config with sdxl VAEs
* remove use of original_config_file in load_single_file()
---------
Co-authored-by: Lincoln Stein <[email protected]>
Co-authored-by: Ryan Dick <[email protected]>
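The "lightweight version-by-version config migration" mentioned in the commit bullets can be sketched as a chain of single-step upgrades: each registered step converts a config dict from one schema version to the next, and steps are applied in order until the target version is reached. This is an illustrative sketch only — the names (`MIGRATIONS`, `migration`, `migrate`, `schema_version`) are hypothetical and not InvokeAI's actual API.

```python
from typing import Callable

Config = dict

# Registry mapping a source schema version to (target version, upgrade function).
MIGRATIONS: dict[str, tuple[str, Callable[[Config], Config]]] = {}


def migration(from_version: str, to_version: str):
    """Register a one-step migration from one schema version to the next."""
    def decorator(fn: Callable[[Config], Config]):
        MIGRATIONS[from_version] = (to_version, fn)
        return fn
    return decorator


@migration("4.0.0", "4.0.1")
def _drop_convert_cache(config: Config) -> Config:
    # Example step: the convert_cache setting went away when single-file
    # loading landed, so an upgraded config simply drops it.
    config.pop("convert_cache", None)
    return config


def migrate(config: Config, target: str) -> Config:
    """Apply registered one-step migrations until the target version is reached."""
    version = config.get("schema_version", "4.0.0")
    while version != target:
        if version not in MIGRATIONS:
            raise ValueError(f"no migration path from {version} to {target}")
        version, step = MIGRATIONS[version]
        config = step(config)
        config["schema_version"] = version
    return config
```

Because each step only knows about its two adjacent versions, new schema changes are added as one small function each, and older configs are upgraded incrementally rather than with one monolithic converter.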
@@ -85,7 +85,7 @@ class InvokeAIAppConfig(BaseSettings):
         log_tokenization: Enable logging of parsed prompt tokens.
         patchmatch: Enable patchmatch inpaint code.
         models_dir: Path to the models directory.
-        convert_cache_dir: Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and store on disk at this location.
+        convert_cache_dir: Path to the converted models cache directory (DEPRECATED, but do not delete because it is needed for migration from previous versions).
         download_cache_dir: Path to the directory that contains dynamically downloaded models.
         legacy_conf_dir: Path to directory of legacy checkpoint config files.
         db_dir: Path to InvokeAI databases directory.
@@ -102,7 +102,6 @@ class InvokeAIAppConfig(BaseSettings):
         profiles_dir: Path to profiles output directory.
         ram: Maximum memory amount used by memory model cache for rapid switching (GB).
         vram: Amount of VRAM reserved for model storage (GB).
-        convert_cache: Maximum size of on-disk converted models cache (GB).
         lazy_offload: Keep models in VRAM until their space is needed.
         log_memory_usage: If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.
         device: Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.<br>Valid values: `auto`, `cpu`, `cuda`, `cuda:1`, `mps`
@@ -148,7 +147,7 @@ class InvokeAIAppConfig(BaseSettings):
 
     # PATHS
     models_dir: Path = Field(default=Path("models"), description="Path to the models directory.")
-    convert_cache_dir: Path = Field(default=Path("models/.convert_cache"), description="Path to the converted models cache directory. When loading a non-diffusers model, it will be converted and store on disk at this location.")
+    convert_cache_dir: Path = Field(default=Path("models/.convert_cache"), description="Path to the converted models cache directory (DEPRECATED, but do not delete because it is needed for migration from previous versions).")
    download_cache_dir: Path = Field(default=Path("models/.download_cache"), description="Path to the directory that contains dynamically downloaded models.")
     legacy_conf_dir: Path = Field(default=Path("configs"), description="Path to directory of legacy checkpoint config files.")
     db_dir: Path = Field(default=Path("databases"), description="Path to InvokeAI databases directory.")
@@ -170,9 +169,8 @@ class InvokeAIAppConfig(BaseSettings):
     profiles_dir: Path = Field(default=Path("profiles"), description="Path to profiles output directory.")
 
     # CACHE
-    ram: float = Field(default_factory=get_default_ram_cache_size, gt=0, description="Maximum memory amount used by memory model cache for rapid switching (GB).")
-    vram: float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (GB).")
+    ram: float = Field(default_factory=get_default_ram_cache_size, gt=0, description="Maximum memory amount used by memory model cache for rapid switching (GB).")
+    vram: float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (GB).")
     lazy_offload: bool = Field(default=True, description="Keep models in VRAM until their space is needed.")
     log_memory_usage: bool = Field(default=False, description="If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.")
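With the on-disk conversion cache gone, the loader conceptually branches on the checkpoint's on-disk format instead of converting everything up front: a single `.safetensors`/`.ckpt` file is loaded directly, while a diffusers-style model is a directory identified by its `model_index.json`. The helper below is a hypothetical sketch of that dispatch — the function names and the returned strategy strings are illustrative, not InvokeAI's actual code.

```python
from pathlib import Path


def is_single_file_checkpoint(path: Path) -> bool:
    # Single-file checkpoints ship as one weights file; diffusers models
    # are directories with a model_index.json at the top level.
    return path.is_file() and path.suffix in {".safetensors", ".ckpt", ".pt"}


def pick_load_strategy(path: Path) -> str:
    """Decide how a model on disk should be loaded (illustrative only)."""
    if is_single_file_checkpoint(path):
        return "load_single_file"  # direct load, no conversion cache needed
    if (path / "model_index.json").exists():
        return "load_diffusers"
    raise ValueError(f"unrecognized model format: {path}")
```

This mirrors the commit's first bullet: rather than converting a checkpoint and storing the result under `convert_cache_dir`, the single-file path is handed straight to the model class's single-file loader.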