
Configuration and model installer for new model layout #3547

Merged Jun 28, 2023 · 46 commits

Commits
ada7399 · rewrite of widget display - marshalling needs rewrite (Jun 16, 2023)
f28d500 · configure/install basically working; needs edge case testing (Jun 17, 2023)
15f8132 · add direct-call script for model installer (Jun 17, 2023)
ddb3f4b · make configure script work properly on empty rootdir (Jun 17, 2023)
e1d53b8 · Merge branch 'main' into lstein/installer-for-new-model-layout (lstein, Jun 17, 2023)
294b1e8 · test and fix edge cases (Jun 20, 2023)
678bb4f · Merge branch 'lstein/installer-for-new-model-layout' of github.com:in… (Jun 20, 2023)
ac6403f · address some of ebr issues (Jun 20, 2023)
2fc19d9 · suppress description in "other models" tab for space reasons (Jun 20, 2023)
90df316 · Merge branch 'main' into lstein/installer-for-new-model-layout (lstein, Jun 20, 2023)
b727442 · better window size behavior under alacritty & terminator (Jun 21, 2023)
1c31efa · punctuation fix in user message (Jun 21, 2023)
33b04f6 · migration script working well (Jun 22, 2023)
d65c833 · migration now integrated into invokeai-configure (Jun 22, 2023)
c7b7e08 · Merge branch 'main' into lstein/installer-for-new-model-layout (lstein, Jun 23, 2023)
a910403 · correctly migrate models that have relative paths (Jun 23, 2023)
65d0e80 · Merge branch 'main' into lstein/installer-for-new-model-layout (lstein, Jun 23, 2023)
56bd873 · make relative model paths work in model manager (Jun 23, 2023)
afd19ab · merge (Jun 23, 2023)
3043af4 · implement vae passthru (Jun 23, 2023)
58d1857 · merge with main (Jun 23, 2023)
54b7442 · adjust for change in list_models() API (Jun 23, 2023)
466ec3a · add router API support for model manager heuristic_import() (Jun 23, 2023)
539d1f3 · remove redundant prediction_type and attention_upscaling flags (Jun 23, 2023)
ba1371a · rename ModelType.Pipeline to ModelType.Main (Jun 24, 2023)
d5f7426 · Merge branch 'main' into lstein/installer-for-new-model-layout (Jun 24, 2023)
c3c4a71 · implemented Stalker's suggested improvements (Jun 24, 2023)
a3c22b5 · Remove upcast_attention and prediction_type from stable diffusion mod… (StAlKeR7779, Jun 25, 2023)
60b37b7 · fix model manager documentation (lstein, Jun 25, 2023)
c91d1ea · Merge branch 'lstein/installer-for-new-model-layout' of github.com:in… (lstein, Jun 25, 2023)
160b5d7 · add support for an autoimport models directory scanned at startup time (Jun 25, 2023)
23c22ac · Refactor logic/small fixes (StAlKeR7779, Jun 26, 2023)
1ba94a9 · Fixes (StAlKeR7779, Jun 26, 2023)
7b97639 · Merge branch 'main' into lstein/installer-for-new-model-layout (ebr, Jun 26, 2023)
47e6512 · query for 'main' model type when populating UI lists (ebr, Jun 26, 2023)
a2ddb38 · fix add_model() logic (Jun 26, 2023)
011adfc · merge with main (Jun 26, 2023)
f67dec7 · Merge branch 'main' into lstein/installer-for-new-model-layout (lstein, Jun 26, 2023)
b7e9d09 · Merge branch 'main' into lstein/installer-for-new-model-layout (ebr, Jun 26, 2023)
823e098 · prompt user for prediction type when autoimporting a v2 model without… (Jun 26, 2023)
8c74f49 · Merge branch 'lstein/installer-for-new-model-layout' of github.com:in… (Jun 26, 2023)
044fe6b · remove dangling debug statement (Jun 26, 2023)
f15d28d · improved wording of v2 selection prompt (Jun 27, 2023)
e8ed0fa · autoimport from embedding/controlnet/lora folders designated in start… (Jun 27, 2023)
72209d0 · Merge branch 'main' into lstein/installer-for-new-model-layout (lstein, Jun 28, 2023)
79fc708 · warn but do not crash when model scan finds random cruft in `models` … (Jun 28, 2023)
31 changes: 28 additions & 3 deletions invokeai/app/api/routers/models.py
@@ -1,13 +1,13 @@
 # Copyright (c) 2023 Kyle Schouviller (https://github.com/kyle0654) and 2023 Kent Keirsey (https://github.com/hipsterusername)

-from typing import Annotated, Literal, Optional, Union, Dict
+from typing import Literal, Optional, Union

 from fastapi import Query
 from fastapi.routing import APIRouter, HTTPException
 from pydantic import BaseModel, Field, parse_obj_as
 from ..dependencies import ApiDependencies
 from invokeai.backend import BaseModelType, ModelType
-from invokeai.backend.model_management.models import OPENAPI_MODEL_CONFIGS
+from invokeai.backend.model_management.models import OPENAPI_MODEL_CONFIGS, SchedulerPredictionType
 MODEL_CONFIGS = Union[tuple(OPENAPI_MODEL_CONFIGS)]

 models_router = APIRouter(prefix="/v1/models", tags=["models"])
@@ -51,11 +51,14 @@ class CreateModelResponse(BaseModel):
     info: Union[CkptModelInfo, DiffusersModelInfo] = Field(discriminator="format", description="The model info")
     status: str = Field(description="The status of the API response")

+class ImportModelRequest(BaseModel):
+    name: str = Field(description="A model path, repo_id or URL to import")
+    prediction_type: Optional[Literal['epsilon','v_prediction','sample']] = Field(description='Prediction type for SDv2 checkpoint files')
+
 class ConversionRequest(BaseModel):
     name: str = Field(description="The name of the new model")
     info: CkptModelInfo = Field(description="The converted model info")
     save_location: str = Field(description="The path to save the converted model weights")
-

 class ConvertedModelResponse(BaseModel):
     name: str = Field(description="The name of the new model")
@@ -105,6 +108,28 @@ async def update_model(

     return model_response

+@models_router.post(
+    "/",
+    operation_id="import_model",
+    responses={200: {"status": "success"}},
+)
+async def import_model(
+    model_request: ImportModelRequest
+) -> None:
+    """ Add Model """
+    items_to_import = set([model_request.name])
+    prediction_types = { x.value: x for x in SchedulerPredictionType }
+    logger = ApiDependencies.invoker.services.logger
+
+    installed_models = ApiDependencies.invoker.services.model_manager.heuristic_import(
+        items_to_import = items_to_import,
+        prediction_type_helper = lambda x: prediction_types.get(model_request.prediction_type)
+    )
+    if len(installed_models) > 0:
+        logger.info(f'Successfully imported {model_request.name}')
+    else:
+        logger.error(f'Model {model_request.name} not imported')
+        raise HTTPException(status_code=500, detail=f'Model {model_request.name} not imported')
+
 @models_router.delete(
     "/{model_name}",
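For reference, a client-side sketch of the request body the new `import_model` endpoint accepts. The route (`POST /v1/models/`) and field names come from the diff above; the model name and the port in the curl comment are hypothetical.

```python
import json

# Request body for POST /v1/models/ (operation_id "import_model").
# "name" may be a local path, a HuggingFace repo_id, or a URL; the server's
# heuristic_import() works out which. "prediction_type" only matters for
# SD-2.x checkpoints and may be omitted otherwise.
payload = {
    "name": "stabilityai/stable-diffusion-2-1",  # hypothetical model
    "prediction_type": "v_prediction",
}
body = json.dumps(payload)
print(body)

# Sending it (not executed here; host and port are assumptions):
#   curl -X POST http://localhost:9090/v1/models/ \
#        -H "Content-Type: application/json" -d "$body"
```

Note that the endpoint returns HTTP 500 when nothing was installed, so a client should treat a non-200 status as an import failure.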
2 changes: 1 addition & 1 deletion invokeai/app/invocations/model.py
@@ -73,7 +73,7 @@ def invoke(self, context: InvocationContext) -> ModelLoaderOutput:

         base_model = self.model.base_model
         model_name = self.model.model_name
-        model_type = ModelType.Pipeline
+        model_type = ModelType.Main

         # TODO: not found exceptions
         if not context.services.model_manager.model_exists(
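The ba1371a rename shows up here: invocation code that used to ask the model manager for `ModelType.Pipeline` now asks for `ModelType.Main`. A minimal stand-in sketch of the enum; only `Main` is taken from this diff, the other members and all string values are illustrative assumptions, not copied from the repo.

```python
from enum import Enum

# Stand-in for invokeai.backend's ModelType after the rename in this PR.
# Member names/values other than Main are assumptions for illustration.
class ModelType(str, Enum):
    Main = "main"  # formerly ModelType.Pipeline
    Vae = "vae"
    Lora = "lora"
    ControlNet = "controlnet"

# Callers such as the model-loader invocation now request the 'main' type:
model_type = ModelType.Main
print(model_type.value)  # main
```

Because the enum mixes in `str`, the member can be compared with or constructed from its string value, which keeps API query parameters like `model_type=main` simple.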
11 changes: 7 additions & 4 deletions invokeai/app/services/config.py
@@ -15,7 +15,7 @@
   conf_path: configs/models.yaml
   legacy_conf_dir: configs/stable-diffusion
   outdir: outputs
-  autoconvert_dir: null
+  autoimport_dir: null
 Models:
   model: stable-diffusion-1.5
   embeddings: true
@@ -367,16 +367,19 @@ class InvokeAIAppConfig(InvokeAISettings):

     always_use_cpu : bool = Field(default=False, description="If true, use the CPU for rendering even if a GPU is available.", category='Memory/Performance')
     free_gpu_mem : bool = Field(default=False, description="If true, purge model from GPU after each generation.", category='Memory/Performance')
-    max_loaded_models : int = Field(default=2, gt=0, description="Maximum number of models to keep in memory for rapid switching", category='Memory/Performance')
+    max_loaded_models : int = Field(default=3, gt=0, description="Maximum number of models to keep in memory for rapid switching", category='Memory/Performance')
     precision : Literal[tuple(['auto','float16','float32','autocast'])] = Field(default='float16',description='Floating point precision', category='Memory/Performance')
     sequential_guidance : bool = Field(default=False, description="Whether to calculate guidance in serial instead of in parallel, lowering memory requirements", category='Memory/Performance')
     xformers_enabled : bool = Field(default=True, description="Enable/disable memory-efficient attention", category='Memory/Performance')
     tiled_decode : bool = Field(default=False, description="Whether to enable tiled VAE decode (reduces memory consumption with some performance penalty)", category='Memory/Performance')

     root : Path = Field(default=_find_root(), description='InvokeAI runtime root directory', category='Paths')
-    autoconvert_dir : Path = Field(default=None, description='Path to a directory of ckpt files to be converted into diffusers and imported on startup.', category='Paths')
+    autoimport_dir : Path = Field(default='autoimport/main', description='Path to a directory of models files to be imported on startup.', category='Paths')
+    lora_dir : Path = Field(default='autoimport/lora', description='Path to a directory of LoRA/LyCORIS models to be imported on startup.', category='Paths')
+    embedding_dir : Path = Field(default='autoimport/embedding', description='Path to a directory of Textual Inversion embeddings to be imported on startup.', category='Paths')
+    controlnet_dir : Path = Field(default='autoimport/controlnet', description='Path to a directory of ControlNet embeddings to be imported on startup.', category='Paths')
     conf_path : Path = Field(default='configs/models.yaml', description='Path to models definition file', category='Paths')
-    models_dir : Path = Field(default='./models', description='Path to the models directory', category='Paths')
+    models_dir : Path = Field(default='models', description='Path to the models directory', category='Paths')
     legacy_conf_dir : Path = Field(default='configs/stable-diffusion', description='Path to directory of legacy checkpoint config files', category='Paths')
     db_dir : Path = Field(default='databases', description='Path to InvokeAI databases directory', category='Paths')
     outdir : Path = Field(default='outputs', description='Default folder for output images', category='Paths')
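The renamed Paths fields above correspond to settings in `invokeai.yaml`. A sketch of how the relevant section might look with the new defaults; the exact nesting and key order are assumptions, so consult the file generated by `invokeai-configure` for the authoritative layout.

```yaml
InvokeAI:
  Paths:
    # Startup-scan directories: anything dropped here is auto-imported.
    autoimport_dir: autoimport/main
    lora_dir: autoimport/lora
    embedding_dir: autoimport/embedding
    controlnet_dir: autoimport/controlnet
    # Model registry and storage (relative to the runtime root).
    conf_path: configs/models.yaml
    models_dir: models
    legacy_conf_dir: configs/stable-diffusion
```

Relative paths here are resolved against the InvokeAI runtime root, which is why the diff drops the `./` prefix from the `models_dir` default.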