Add simplified model manager install API to InvocationContext #6132
Conversation
I have added a migration script that tidies up the `models/core` directory.
I'm not sure what I was expecting the implementation to be, but it definitely wasn't as simple as this - great work.
I've requested a few changes and there's one discussion item that I'd like to marinate on before we change the public invocation API.
Review thread on `invokeai/app/services/shared/sqlite_migrator/migrations/migration_10.py` (resolved).
Sorry for the delay in reviewing. I've tidied a few things and tested everything, working great!
Two minor issues noted.
@psychedelicious I've addressed the remaining issues you raised. Thanks for the thorough review.
Yes, mypy is having trouble tracking the return type of several methods. I haven't figured out what causes the problem and don't want to add a `# type: ignore`. But maybe I should, 'cause I'm not ready to turn to pyright.
We shouldn't add `# type: ignore`.
@RyanJDick Would you mind doing one last review of this PR?
You've convinced me. I've switched to pyright!
Looks like 43/44 files have changed since I last looked 😅. I'll plan to spend a chunk of time on this tomorrow.
@RyanJDick You can narrow that down to reviewing `invocation_context.py`.
I just reviewed the `invocation_context.py` API.
Review thread on `invokeai/app/services/shared/sqlite_migrator/migrations/migration_11.py` (resolved).
@RyanJDick I've fixed the issues you identified.
`invocation_context.py` looks good to me ✅
I'll defer to @psychedelicious for final approval, since he has a more complete understanding of this PR than me at this point.
Summary
This PR adds three model manager-related methods to the uniform `InvocationContext` API. They are accessible via `context.models.*`:

`load_local_model(model_path: Path, loader: Optional[Callable[[Path], AnyModel]] = None) -> LoadedModelWithoutConfig`
Load the model located at the indicated path.

This will load a local model (`.safetensors`, `.ckpt` or diffusers directory) into the model manager RAM cache and return its `LoadedModelWithoutConfig`. If the optional `loader` argument is provided, the loader will be invoked to load the model into memory. Otherwise the method will call `safetensors.torch.load_file()`, `torch.load()` (with a pickle scan), or `from_pretrained()` as appropriate to the path type.

Be aware that the `LoadedModelWithoutConfig` object differs from `LoadedModel` by having no `config` attribute.

Here is an example of usage:
`load_remote_model(source: str | AnyHttpUrl, loader: Optional[Callable[[Path], AnyModel]] = None) -> LoadedModelWithoutConfig`

Load the model located at the indicated URL or repo_id.

This is similar to `load_local_model()` but it accepts either a HuggingFace repo_id (as a string) or a URL. The model's file(s) will be downloaded to `models/.download_cache` and then loaded, returning a `LoadedModelWithoutConfig`.

`download_and_cache_model(source: str | AnyHttpUrl, access_token: Optional[str] = None, timeout: Optional[int] = 0) -> Path`
Download the model file located at source to the models cache and return its Path. This will check `models/.download_cache` for the desired model file and download it from the indicated source if not already present. The local Path to the downloaded file is then returned.

Other Changes
This PR performs a migration which renames `models/.cache` to `models/.convert_cache` and migrates previously-downloaded ESRGAN, openpose, DepthAnything, and LaMa inpaint models from the `models/core` directory into `models/.download_cache`.

There are a number of legacy model files in `models/core`, such as GFPGAN, which are no longer used. This PR deletes them and tidies up the `models/core` directory.

Related Issues / Discussions
I have systematically replaced all the calls to `download_with_progress_bar()`. This function is no longer used elsewhere and has been removed.

QA Instructions
I have added unit tests for the three new calls. You may test that the `load_and_cache_model()` call is working by running the upscaler within the web app. On the first try, you will see the model file being downloaded into the models `.cache` directory. On subsequent tries, the model will either load from RAM (if it hasn't been displaced) or will be loaded from the filesystem.

Merge Plan
Squash merge when approved.
Checklist