Configuration and model installer for new model layout #3547
I was able to get the model directory configured from an entirely clean slate by running `invokeai-model-install`, but encountered a few snags along the way:
- Installing the `tile` ControlNet model always throws the following stack trace (others install without issue):
[2023-06-18 01:40:32,068]::[InvokeAI]::INFO --> Installing lllyasviel/control_v11f1e_sd15_tile [3/18]
An exception has occurred: local variable 'location' referenced before assignment Details:
Traceback (most recent call last):
File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 795, in main
select_and_download_models(opt)
File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 720, in select_and_download_models
process_and_execute(opt, installApp.install_selections)
File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 641, in process_and_execute
installer.install(selections)
File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 176, in install
self.heuristic_install(path)
File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 216, in heuristic_install
self._install_repo(str(path))
File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 290, in _install_repo
info = ModelProbe().heuristic_probe(location, self.prediction_helper)
UnboundLocalError: local variable 'location' referenced before assignment
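For reference, this class of `UnboundLocalError` generally means `location` is assigned only inside some conditional branches, so any input that matches none of them reaches the probe call with the variable undefined. A minimal sketch of the failure mode and a guarded fix (a hypothetical simplification, not the actual `_install_repo` code):

```python
# Hypothetical sketch of the UnboundLocalError seen above: `location`
# is only bound inside a branch, so an input matching no branch leaves
# it undefined when it is later referenced.

def install_repo_buggy(repo_id: str) -> str:
    if repo_id.endswith(".safetensors"):
        location = f"/models/{repo_id}"
    # a repo id that matches no branch falls through without binding
    return location  # UnboundLocalError: referenced before assignment

def install_repo_fixed(repo_id: str) -> str:
    location = None  # initialize up front so the name always exists
    if repo_id.endswith(".safetensors"):
        location = f"/models/{repo_id}"
    if location is None:
        # fail with an actionable message instead of a crash
        raise ValueError(f"could not determine download location for {repo_id!r}")
    return location
```

The fix is to initialize `location` before the branches and raise a descriptive error when no branch handled the input.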
- a `Max Retries Exceeded` error often occurred for me during model install; my internet connection can't be ruled out, but it's unlikely to be the cause:
ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Max retries exceeded with url:
/repos/dd/02/dd029ef86933ecf6b4217f44c7134244bdc25d40e90fa28ae5b35baef963a6fe/2e62de6305c5ac4ebdf61c2252944239c4d583dee9dd2745fa1d82b6f6590b0c?response-content-disposition=attachment%3B+fil
ename*%3DUTF-8%27%27pytorch_lora_weights.bin%3B+filename%3D%22pytorch_lora_weights.bin%22%3B&response-content-type=application%2Foctet-stream&Expires=1687327969&Policy=eyJTdGF0ZW1lbnQiOlt7I
- consistently failed to install the `sd-model-finetuned-lora-t4` LoRA:
[2023-06-18 02:30:55,181]::[InvokeAI]::INFO --> Installing sayakpaul/sd-model-finetuned-lora-t4 [2/5]
[2023-06-18 02:30:56,039]::[InvokeAI]::INFO --> pytorch_lora_weights.bin: Downloading...
pytorch_lora_weights.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.29M/3.29M [00:00<00:00, 8.93MiB/s]
An exception has occurred: 'NoneType' object has no attribute 'value' Details:
Traceback (most recent call last):
File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 795, in main
select_and_download_models(opt)
File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 720, in select_and_download_models
process_and_execute(opt, installApp.install_selections)
File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 641, in process_and_execute
installer.install(selections)
File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 176, in install
self.heuristic_install(path)
File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 216, in heuristic_install
self._install_repo(str(path))
File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 291, in _install_repo
dest = self.config.models_path / info.base_type.value / info.model_type.value / self._get_model_name(repo_id,location)
AttributeError: 'NoneType' object has no attribute 'value'
- after installing LoRAs or ControlNets (possibly only after a failure, but I haven't gone back to re-test yet), the `.safetensors` files appear on the list:
- it seems that Hugging Face downloads additionally end up in the HF cache, resulting in double consumption of disk space:
- `invokeai-model-install` correctly hands off to `invokeai-configure` when the root directory does not exist. But terminating the configure script and re-running `invokeai-model-install` does not run the configure script anymore, presumably because the directory structure was already created before we agreed to execute the config script. This might be unexpected to the user.
- after almost every download, I am getting the following warnings:
/usr/lib/python3.10/tempfile.py:999: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmpr6g8tuq3wandb'>
_warnings.warn(warn_message, ResourceWarning)
/usr/lib/python3.10/tempfile.py:999: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmptzdrgzq6wandb-artifacts'>
_warnings.warn(warn_message, ResourceWarning)
/usr/lib/python3.10/tempfile.py:999: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmp7v5sc6wfwandb-media'>
_warnings.warn(warn_message, ResourceWarning)
/usr/lib/python3.10/tempfile.py:999: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmpxtf4ryxbwandb-media'>
_warnings.warn(warn_message, ResourceWarning)
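These `ResourceWarning`s come from `tempfile.TemporaryDirectory` objects (created by wandb, judging by the suffixes) being garbage-collected without an explicit cleanup. A quick sketch of the leaky pattern versus the deterministic one:

```python
# The warning fires when a TemporaryDirectory is finalized by the GC
# instead of being cleaned up explicitly. The context-manager form
# removes the directory deterministically and never warns.
import tempfile
from pathlib import Path

def leaky() -> None:
    td = tempfile.TemporaryDirectory(suffix="wandb")
    # ... use td.name, then drop the reference ...
    # finalization via GC triggers "Implicitly cleaning up" ResourceWarning

def tidy() -> str:
    with tempfile.TemporaryDirectory(suffix="wandb") as name:
        (Path(name) / "artifact.txt").write_text("ok")
        return name  # cleanup happens on context exit

leftover = tidy()
print(Path(leftover).exists())  # False: the directory is gone after the block
```

So the warnings are harmless but indicate the library (not InvokeAI's own code, by the look of the paths) never calls `cleanup()` or uses the context manager.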
I am still testing generation and model switching - will add that feedback when ready
very minor and may be unrelated to this PR:
- "I accept the terms of the CreativeML license" is checked by default; it should be left unchecked so the user agrees explicitly
- the TUI of both `invokeai-model-install` and `invokeai-configure` is broken (lines wrapped, unable to navigate between tabs or buttons) if the terminal is smaller than 66x189 characters in either dimension. Changing the terminal size fixes the line wrap (the lines still overflow beyond the edge of the terminal, but it becomes usable)
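A pre-flight check of the sort the TUI could perform is straightforward with the standard library. This is a hedged sketch, not InvokeAI's actual detection code, and the 66-line/189-column minimums are taken from the observation above:

```python
# Sketch of a terminal-size pre-flight check: warn and bail early when
# the emulator is smaller than the minimum usable dimensions, instead
# of rendering a broken TUI. MIN_* values come from the review comment.
import shutil
import sys

MIN_COLS, MIN_LINES = 189, 66  # assumed minimum usable TUI dimensions

def check_terminal(min_cols: int = MIN_COLS, min_lines: int = MIN_LINES) -> bool:
    size = shutil.get_terminal_size(fallback=(80, 24))
    if size.columns < min_cols or size.lines < min_lines:
        print(
            f"Terminal is {size.columns}x{size.lines}; at least "
            f"{min_cols}x{min_lines} is required. "
            "Please enlarge or maximize the window.",
            file=sys.stderr,
        )
        return False
    return True
```

Note that `shutil.get_terminal_size` consults the `COLUMNS`/`LINES` environment variables first, which can mask the real window size in some shells.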
@ebr Thank you!! One new issue that just came up:
I'm gnashing my teeth! There's code in there that is supposed to detect the rows and columns of the current terminal emulator and either (1) resize the window to the proper dimensions if too small and the emulator supports it; or (2) tell the user to maximize the window if resizing fails. Are you seeing neither of these behaviors with
That's disturbing. The same root-finding code should be used for the web client and you should see the same error there (unless it's bypassing the code in some way). The logic is in
I tried to test a migration from the prenodes tag using `invokeai-configure --root="/home/invokeuser/userfiles/" --yes` and got this error:
@ebr It turns out that both … If this works for you, please re-review and remove the requested changes.
Thank you. The ability to migrate from earlier versions is not yet wrapped into the configure script. I'll add it today.
Sorry, I didn't realise it hadn't been added yet. FYI, in case it helps: a fresh install with `--yes` works as it should for me, but `--default_only` asks for user input (not sure which args are intended to be automated).
I'm getting the … error as well. I tried running the migrate_models_to_3.0.py script, but it didn't help. Trying a fresh install (venv and all) now, but on a slow connection...
to support renaming of 'pipeline' models to 'main'
@lstein I re-tested this and all works. I also pushed a commit fixing UI model selection that broke due to the change in naming from …
"Fixes" the test suite generally so it doesn't fail CI, but some tests needed to be skipped/xfailed due to recent refactor. - ignore three test suites that broke following the model manager refactor - move `InvocationServices` fixture to `conftest.py` - add `boards` items to the `InvocationServices` fixture This PR makes the unit tests work, but end-to-end tests are temporarily commented out due to `invokeai-configure` being broken in `main` - pending #3547 Looks like a lot of the tests need to be rewritten as they reference `TextToImageInvocation` / `ImageToImageInvocation`
… .yaml file: don't ask the user for the prediction type if a config.yaml is provided
Rewrite LoRA to be applied by model patching, as it gives us benefits:
1) On model execution, the result is calculated only on the model weights, while with hooks we need to calculate on the model and each LoRA.
2) As the LoRA is now patched into the model weights, there is no need to store the LoRA in VRAM.

Results:

Speed:
| loras count | hook | patch |
| --- | --- | --- |
| 0 | ~4.92 it/s | ~4.92 it/s |
| 1 | ~3.51 it/s | ~4.89 it/s |
| 2 | ~2.76 it/s | ~4.92 it/s |

VRAM:
| loras count | hook | patch |
| --- | --- | --- |
| 0 | ~3.6 gb | ~3.6 gb |
| 1 | ~4.0 gb | ~3.6 gb |
| 2 | ~4.4 gb | ~3.7 gb |

As this is based on #3547, wait to merge.
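The equivalence behind the patching approach can be shown in a few lines: a hook computes the base matmul plus a separate low-rank pass per LoRA on every forward call, while patching folds `scale * (up @ down)` into the weight once, so the forward pass costs the same as having no LoRA at all. An illustrative NumPy sketch (not the actual torch implementation):

```python
# Hook vs. patch for a single LoRA on one linear layer:
# hook:  y = x @ W.T + scale * ((x @ down.T) @ up.T)   per forward call
# patch: W' = W + scale * (up @ down), then y = x @ W'.T
# Both produce identical outputs; patch pays the cost once.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 64, 64, 4
W = rng.standard_normal((d_out, d_in))      # base layer weight
up = rng.standard_normal((d_out, rank))     # LoRA up-projection
down = rng.standard_normal((rank, d_in))    # LoRA down-projection
scale = 0.8
x = rng.standard_normal((1, d_in))          # a single input row

# hook style: extra low-rank matmuls on every forward pass
y_hook = x @ W.T + scale * ((x @ down.T) @ up.T)

# patch style: fold the LoRA into the weight once, then a plain forward
W_patched = W + scale * (up @ down)
y_patch = x @ W_patched.T

assert np.allclose(y_hook, y_patch)
```

This also explains the VRAM numbers in the table: once patched, the `up`/`down` matrices no longer need to stay resident during execution.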
Restore invokeai-configure and invokeai-model-install
This PR updates `invokeai-model-install` and `invokeai-configure` to work with the new model manager file layout. It addresses a naming issue for `ModelType.Main` (was `ModelType.Pipeline`) requested by @blessedcoolant, and adds back the feature that allows users to dump models into an `autoimport` directory for discovery at startup time.