
Configuration and model installer for new model layout #3547

Merged · 46 commits · Jun 28, 2023

Conversation

lstein
Collaborator

@lstein lstein commented Jun 17, 2023

Restore invokeai-configure and invokeai-model-install

This PR updates invokeai-configure and invokeai-model-install to work with the new model manager file layout. It addresses a naming issue for ModelType.Main (formerly ModelType.Pipeline) requested by @blessedcoolant, and restores the feature that lets users drop models into an autoimport directory for discovery at startup time.
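For readers new to the autoimport flow, here is a minimal sketch of what startup discovery could look like (the suffix set and the register_model callback are illustrative assumptions, not this PR's actual API):

from pathlib import Path

MODEL_SUFFIXES = {".ckpt", ".safetensors"}  # hypothetical set of recognized extensions

def scan_autoimport(autoimport_dir: Path, register_model) -> None:
    # Walk the autoimport directory and hand each candidate file to the
    # installer; in the real code this would probe the file and add it
    # to the model registry.
    if not autoimport_dir.exists():
        return
    for path in sorted(autoimport_dir.rglob("*")):
        if path.is_file() and path.suffix in MODEL_SUFFIXES:
            register_model(path)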

@lstein lstein marked this pull request as ready for review June 17, 2023 23:26
Member

@ebr ebr left a comment


I was able to get the model directory configured from an entirely clean slate by running invokeai-model-install, but encountered a few snags along the way:

  • Installing the tile ControlNet model always throws the following stack trace (others install without issue):

[2023-06-18 01:40:32,068]::[InvokeAI]::INFO --> Installing lllyasviel/control_v11f1e_sd15_tile [3/18]
An exception has occurred: local variable 'location' referenced before assignment Details:
Traceback (most recent call last):
  File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 795, in main
    select_and_download_models(opt)
  File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 720, in select_and_download_models
    process_and_execute(opt, installApp.install_selections)
  File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 641, in process_and_execute
    installer.install(selections)
  File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 176, in install
    self.heuristic_install(path)
  File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 216, in heuristic_install
    self._install_repo(str(path))
  File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 290, in _install_repo
    info = ModelProbe().heuristic_probe(location, self.prediction_helper)
UnboundLocalError: local variable 'location' referenced before assignment
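
This is the classic conditional-assignment bug: location is bound only inside a branch that never ran for this repo. A minimal sketch of the failure pattern with a defensive fix (the helper names are hypothetical, not the actual backend code):

def _install_repo(repo_id: str):
    location = None
    for f in list_repo_files(repo_id):   # hypothetical helper
        if is_model_file(f):             # hypothetical predicate; may match nothing
            location = download_file(f)  # hypothetical helper
    if location is None:
        # Fail with a clear message instead of falling through to
        # UnboundLocalError when no file matched.
        raise RuntimeError(f"no installable model files found in {repo_id}")
    return location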

  • A "Max retries exceeded" error occurred frequently during model install. My internet connection can't be ruled out, but it seems unlikely to be the culprit:

ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Max retries exceeded with url:
/repos/dd/02/dd029ef86933ecf6b4217f44c7134244bdc25d40e90fa28ae5b35baef963a6fe/2e62de6305c5ac4ebdf61c2252944239c4d583dee9dd2745fa1d82b6f6590b0c?response-content-disposition=attachment%3B+fil
ename*%3DUTF-8%27%27pytorch_lora_weights.bin%3B+filename%3D%22pytorch_lora_weights.bin%22%3B&response-content-type=application%2Foctet-stream&Expires=1687327969&Policy=eyJTdGF0ZW1lbnQiOlt7I
  • Consistently failed to install the sd-model-finetuned-lora-t4 LoRA:
[2023-06-18 02:30:55,181]::[InvokeAI]::INFO --> Installing sayakpaul/sd-model-finetuned-lora-t4 [2/5]
[2023-06-18 02:30:56,039]::[InvokeAI]::INFO --> pytorch_lora_weights.bin: Downloading...
pytorch_lora_weights.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3.29M/3.29M [00:00<00:00, 8.93MiB/s]
An exception has occurred: 'NoneType' object has no attribute 'value' Details:
Traceback (most recent call last):
  File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 795, in main
    select_and_download_models(opt)
  File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 720, in select_and_download_models
    process_and_execute(opt, installApp.install_selections)
  File "/home/clipclop/Code/invokeai/invokeai/frontend/install/model_install.py", line 641, in process_and_execute
    installer.install(selections)
  File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 176, in install
    self.heuristic_install(path)
  File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 216, in heuristic_install
    self._install_repo(str(path))
  File "/home/clipclop/Code/invokeai/invokeai/backend/install/model_install_backend.py", line 291, in _install_repo
    dest = self.config.models_path / info.base_type.value / info.model_type.value / self._get_model_name(repo_id,location)
AttributeError: 'NoneType' object has no attribute 'value'
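
In other words, heuristic_probe returned None for this file and the caller dereferences the result unguarded. A minimal guard sketch inside _install_repo (variable names taken from the traceback; the error message is illustrative):

info = ModelProbe().heuristic_probe(location, self.prediction_helper)
if info is None:
    # Probe failed; stop here rather than raising AttributeError below.
    raise RuntimeError(f"unable to determine base/model type for {location}")
dest = self.config.models_path / info.base_type.value / info.model_type.value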
  • After installing LoRAs or ControlNets (possibly only after a failure; I haven't gone back to re-test yet), the .safetensors filenames appear in the list:

[screenshot: model list showing raw .safetensors filenames]

  • It seems that HuggingFace downloads additionally end up in the HF cache, resulting in double consumption of disk space:

[screenshots: the same model present in both the InvokeAI models directory and the HuggingFace cache]

  • invokeai-model-install correctly hands off to invokeai-configure when the root directory does not exist. However, terminating the configure script and re-running invokeai-model-install no longer launches the configure script, presumably because the directory structure was already created before the user agreed to run it. This may surprise users.

  • After almost every download, I am getting the following warnings:

/usr/lib/python3.10/tempfile.py:999: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmpr6g8tuq3wandb'>
  _warnings.warn(warn_message, ResourceWarning)
/usr/lib/python3.10/tempfile.py:999: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmptzdrgzq6wandb-artifacts'>
  _warnings.warn(warn_message, ResourceWarning)
/usr/lib/python3.10/tempfile.py:999: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmp7v5sc6wfwandb-media'>
  _warnings.warn(warn_message, ResourceWarning)
/usr/lib/python3.10/tempfile.py:999: ResourceWarning: Implicitly cleaning up <TemporaryDirectory '/tmp/tmpxtf4ryxbwandb-media'>
  _warnings.warn(warn_message, ResourceWarning)

I am still testing generation and model switching; I'll add that feedback when ready.


Very minor, and possibly unrelated to this PR:

  • "I accept the terms of CreativeML license" is checked by default - should be left unchecked for the user to agree

  • The TUIs of both invokeai-model-install and invokeai-configure break (lines wrap; navigation between tabs and buttons stops working) if the terminal is smaller than 66x189 characters in either dimension. Enlarging the terminal fixes the wrapping (lines still overflow past the edge, but the form becomes usable).
    [screenshots: the wrapped/broken TUI at small terminal sizes]

@lstein
Collaborator Author

lstein commented Jun 20, 2023

@ebr
I have addressed the following issues:

  1. "tile" controlnet not loading (fixed load)
  2. "sd-model-finetuned-lora-t4" not loading (removed from list of starter LoRAs; torch.load() crashing on this file for some reason)
  3. ".safetensors" files appearing in model listing rather than just the base name. This also fixes duplicated entries appearing.
  4. Yes, HuggingFace pipeline models now appear in both the models directory and the main HuggingFace cache, consuming double the disk space. I'm simply downloading the models to the local directory, and HuggingFace is doing its caching thing in the background. I could remove the cached copy, but I don't know what other programs on the user's computer might be using it.
  5. I'm also seeing the warning about implicitly cleaning up the temporary directory, but I'm not sure where it's coming from. Will investigate.
  6. I have adjusted the TUI size requirements. Could you try again on your system?
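
One possible mitigation for item 4 (an assumption about huggingface_hub's behavior, not what this PR does) is to download straight into the models tree via local_dir, so large files are not duplicated in the shared cache:

from huggingface_hub import snapshot_download

# local_dir and local_dir_use_symlinks exist in huggingface_hub >= 0.13.
# With use_symlinks=False the files are materialized in local_dir rather
# than symlinked out of the HF cache. The destination path is hypothetical.
snapshot_download(
    repo_id="lllyasviel/control_v11f1e_sd15_tile",
    local_dir="/path/to/invokeai/models/sd-1/controlnet/control_v11f1e_sd15_tile",
    local_dir_use_symlinks=False,
)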

@ebr
Member

ebr commented Jun 20, 2023

Thank you!!

  1. Confirmed fixed ✔️
  2. Confirmed workaround ✔️
  3. Confirmed fixed ✔️
  4. Sounds good now that I know to expect it; in fact this may be a very good thing, since it avoids re-downloads when users reinstall or install in another location - HF will just use its cache. The downside is that users who install on a non-system drive may unexpectedly run out of disk space. But I agree that deleting anything in the user's cache directory would arguably be even worse. Not sure how to solve this right now aside from documenting it, but maybe this isn't worth holding up 3.0. ✅
  5. I think it may just be telling us to use a context manager (see the sketch after this list), but I'm not sure whether these warnings come from our code or from HuggingFace. ❔
  6. The TUI was still line-wrapping for me, but then I tested in several emulators and realized it isn't happening universally. It's fine in gnome-terminal (no line wrapping; the overflow is hidden until resize), but both terminator and alacritty have the issue. Given the multitude of system configurations and terminal emulators out there, this is probably an area we will never get perfectly right, and we can call it "good enough". ✔️
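
If the warning does originate in our code, the fix is simply to scope the temporary directory; a minimal sketch of the two patterns:

import tempfile

# Warning-prone form: the TemporaryDirectory is cleaned up implicitly at
# garbage collection, which is exactly what triggers the ResourceWarning.
tmp = tempfile.TemporaryDirectory()
tmp.cleanup()  # calling cleanup() explicitly also silences the warning

# Preferred form: the context manager cleans up deterministically on exit.
with tempfile.TemporaryDirectory() as tmpdir:
    pass  # download into tmpdir here, then move files into place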

One new issue that just came up:

  • FileNotFoundError: [Errno 2] No such file or directory: '/home/clipclop/invokeai/configs/models.yaml' is raised unless INVOKEAI_ROOT is set to an existing runtime directory. To be clear: my .venv is in the repo root and my runtime dir is on another drive. This is certainly an edge case, but it will affect developers with a similar setup.

@lstein
Collaborator Author

lstein commented Jun 20, 2023

Thank you!!

  [...]
  6. The TUI was still line-wrapping for me, but then I tested in several emulators and realized it isn't happening universally. It's fine in gnome-terminal, but both terminator and alacritty have the issue. [...] ✔️

I'm gnashing my teeth! There's code in there that is supposed to detect the rows and columns of the current terminal emulator and either (1) resize the window to the proper dimensions if it is too small and the emulator supports resizing; or (2) tell the user to maximize the window if resizing fails. Are you seeing neither of these behaviors with terminator and alacritty? I will try installing them on my Linux system and see what can be done.
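
For context, a minimal sketch of that detect-and-resize dance (the minimum dimensions and messages are illustrative, not the real values):

import shutil
import sys

MIN_LINES, MIN_COLS = 40, 120  # illustrative minimums

def ensure_window_size() -> bool:
    size = shutil.get_terminal_size()
    if size.lines >= MIN_LINES and size.columns >= MIN_COLS:
        return True
    # CSI 8 ; rows ; cols t is the xterm window-resize escape. Some
    # emulators honor it; others report a new size without actually
    # resizing the window, which is the failure mode described above.
    sys.stdout.write(f"\x1b[8;{MIN_LINES};{MIN_COLS}t")
    sys.stdout.flush()
    size = shutil.get_terminal_size()
    if size.lines < MIN_LINES or size.columns < MIN_COLS:
        print("This program needs a larger window; please maximize your terminal and try again.")
        return False
    return True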

One new issue that just came up:

  • FileNotFoundError: [Errno 2] No such file or directory: '/home/clipclop/invokeai/configs/models.yaml' is raised unless INVOKEAI_ROOT is set to an existing runtime directory. [...]

That's disturbing. The same root-finding code should be used by the web client, so you should see the same error there (unless it's bypassing the code in some way). The logic is in invokeai.app.services.config, and it goes like this:

  1. If --root is passed on the command line, use that.
  2. If INVOKEAI_ROOT is set, use that.
  3. Look at the parent of .venv, and if there is an invokeai.yaml configuration file there, use that directory.
  4. Otherwise use ~/invokeai.
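
A minimal sketch of that fallback order (illustrative only, not the actual invokeai.app.services.config implementation):

import os
import sys
from pathlib import Path

def find_root(cli_root: str | None = None) -> Path:
    if cli_root:                                    # 1. --root on the command line
        return Path(cli_root)
    if "INVOKEAI_ROOT" in os.environ:               # 2. environment variable
        return Path(os.environ["INVOKEAI_ROOT"])
    venv_parent = Path(sys.prefix).parent           # 3. parent of the .venv
    if (venv_parent / "invokeai.yaml").exists():
        return venv_parent
    return Path.home() / "invokeai"                 # 4. default location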

Is invokeai-web finding the correct models.yaml file? If so, I wonder how it is doing this?

@mickr777
Contributor

mickr777 commented Jun 21, 2023

I tried to test a migration from the prenodes tag using invokeai-configure --root="/home/invokeuser/userfiles/" --yes and got this error:

[2023-06-21 13:44:39,495]::[InvokeAI]::INFO --> ** Migrating invokeai.init to invokeai.yaml
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/invokeuser/venv/bin/invokeai-configure:8 in <module>                                       │
│                                                                                                  │
│   5 from invokeai.frontend.install import invokeai_configure                                     │
│   6 if __name__ == '__main__':                                                                   │
│   7 │   sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])                         │
│ ❱ 8 │   sys.exit(invokeai_configure())                                                           │
│   9                                                                                              │
│                                                                                                  │
│ /home/invokeuser/InvokeAI/invokeai/backend/install/invokeai_configure.py:844 in main             │
│                                                                                                  │
│   841 │   │   if not config.model_conf_path.exists():                                            │
│   842 │   │   │   initialize_rootdir(config.root_path, opt.yes_to_all)                           │
│   843 │   │                                                                                      │
│ ❱ 844 │   │   models_to_download = default_user_selections(opt)                                  │
│   845 │   │   if opt.yes_to_all:                                                                 │
│   846 │   │   │   write_default_options(opt, new_init_file)                                      │
│   847 │   │   │   init_options = Namespace(                                                      │
│                                                                                                  │
│ /home/invokeuser/InvokeAI/invokeai/backend/install/invokeai_configure.py:631 in                  │
│ default_user_selections                                                                          │
│                                                                                                  │
│   628 │   return opts                                                                            │
│   629                                                                                            │
│   630 def default_user_selections(program_opts: Namespace) -> InstallSelections:                 │
│ ❱ 631 │   installer = ModelInstall(config)                                                       │
│   632 │   models = installer.all_models()                                                        │
│   633 │   return InstallSelections(                                                              │
│   634 │   │   install_models=[models[installer.default_model()].path or models[installer.defau   │
│                                                                                                  │
│ /home/invokeuser/InvokeAI/invokeai/backend/install/model_install_backend.py:98 in __init__       │
│                                                                                                  │
│    95 │   │   │   │    prediction_type_helper: Callable[[Path],SchedulerPredictionType]=None,    │
│    96 │   │   │   │    access_token:str = None):                                                 │
│    97 │   │   self.config = config                                                               │
│ ❱  98 │   │   self.mgr = ModelManager(config.model_conf_path)                                    │
│    99 │   │   self.datasets = OmegaConf.load(Dataset_path)                                       │
│   100 │   │   self.prediction_helper = prediction_type_helper                                    │
│   101 │   │   self.access_token = access_token or HfFolder.get_token()                           │
│                                                                                                  │
│ /home/invokeuser/InvokeAI/invokeai/backend/model_management/model_manager.py:261 in __init__     │
│                                                                                                  │
│   258 │   │   elif not isinstance(config, DictConfig):                                           │
│   259 │   │   │   raise ValueError('config argument must be an OmegaConf object, a Path or a s   │
│   260 │   │                                                                                      │
│ ❱ 261 │   │   self.config_meta = ConfigMeta(**config.pop("__metadata__"))                        │
│   262 │   │   # TODO: metadata not found                                                         │
│   263 │   │   # TODO: version check                                                              │
│   264                                                                                            │
│                                                                                                  │
│ /home/invokeuser/venv/lib/python3.10/site-packages/omegaconf/dictconfig.py:517 in pop            │
│                                                                                                  │
│   514 │   │   │   │   │   else:                                                                  │
│   515 │   │   │   │   │   │   raise ConfigKeyError(f"Key not found: '{key!s}'")                  │
│   516 │   │   except Exception as e:                                                             │
│ ❱ 517 │   │   │   self._format_and_raise(key=key, value=None, cause=e)                           │
│   518 │                                                                                          │
│   519 │   def keys(self) -> KeysView[DictKeyType]:                                               │
│   520 │   │   if self._is_missing() or self._is_interpolation() or self._is_none():              │
│                                                                                                  │
│ /home/invokeuser/venv/lib/python3.10/site-packages/omegaconf/base.py:231 in _format_and_raise    │
│                                                                                                  │
│   228 │   │   msg: Optional[str] = None,                                                         │
│   229 │   │   type_override: Any = None,                                                         │
│   230 │   ) -> None:                                                                             │
│ ❱ 231 │   │   format_and_raise(                                                                  │
│   232 │   │   │   node=self,                                                                     │
│   233 │   │   │   key=key,                                                                       │
│   234 │   │   │   value=value,                                                                   │
│                                                                                                  │
│ /home/invokeuser/venv/lib/python3.10/site-packages/omegaconf/_utils.py:899 in format_and_raise   │
│                                                                                                  │
│    896 │   │   ex.ref_type = ref_type                                                            │
│    897 │   │   ex.ref_type_str = ref_type_str                                                    │
│    898 │                                                                                         │
│ ❱  899 │   _raise(ex, cause)                                                                     │
│    900                                                                                           │
│    901                                                                                           │
│    902 def type_str(t: Any, include_module_name: bool = False) -> str:                           │
│                                                                                                  │
│ /home/invokeuser/venv/lib/python3.10/site-packages/omegaconf/_utils.py:797 in _raise             │
│                                                                                                  │
│    794 │   │   ex.__cause__ = cause                                                              │
│    795 │   else:                                                                                 │
│    796 │   │   ex.__cause__ = None                                                               │
│ ❱  797 │   raise ex.with_traceback(sys.exc_info()[2])  # set env var OC_CAUSE=1 for full trace   │
│    798                                                                                           │
│    799                                                                                           │
│    800 def format_and_raise(                                                                     │
│                                                                                                  │
│ /home/invokeuser/venv/lib/python3.10/site-packages/omegaconf/dictconfig.py:515 in pop            │
│                                                                                                  │
│   512 │   │   │   │   │   │   │   f"Key not found: '{key!s}' (path: '{full}')"                   │
│   513 │   │   │   │   │   │   )                                                                  │
│   514 │   │   │   │   │   else:                                                                  │
│ ❱ 515 │   │   │   │   │   │   raise ConfigKeyError(f"Key not found: '{key!s}'")                  │
│   516 │   │   except Exception as e:                                                             │
│   517 │   │   │   self._format_and_raise(key=key, value=None, cause=e)                           │
│   518                                                                                            │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ConfigKeyError: Key not found: '__metadata__'
    full_key: __metadata__
    object_type=dict
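
The immediate failure is ModelManager unconditionally popping "__metadata__" from a pre-3.0 models.yaml that never contained that key. A minimal defensive sketch (the exception type and message are illustrative; the real remedy is migrating the file):

metadata = config.pop("__metadata__", None)  # OmegaConf's pop accepts a default
if metadata is None:
    raise ValueError(
        "models.yaml appears to be in the pre-3.0 format; "
        "run the migration / configure script to update it"
    )
self.config_meta = ConfigMeta(**metadata)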

@lstein
Collaborator Author

lstein commented Jun 21, 2023

@ebr It turns out that both terminator and alacritty accept the xterm window-resizing command and report the new size of the window, but don't actually change the window size. So I put in a check for these emulators and fall back to asking the user to resize the window manually. I also did some more work on the forms to reduce their vertical height requirements.

If this works for you, please re-review and remove the requested changes.

@lstein
Collaborator Author

lstein commented Jun 21, 2023

I tried to test a migration from the prenodes tag using invokeai-configure --root="/home/invokeuser/userfiles/" --yes and got this error:

[...]

ConfigKeyError: Key not found: '__metadata__'

Thank you. The ability to migrate from earlier versions is not yet wrapped into the configure script. I'll add it today.

@mickr777
Contributor

mickr777 commented Jun 21, 2023

Thank you. The ability to migrate from earlier versions is not yet wrapped into the configure script. I'll add it today.

Sorry, I didn't realize it hadn't been added yet. FYI, in case it helps: a fresh install with --yes works as it should for me, but --default_only asks for user input (not sure which args are meant to be automated).

@GreggHelt2
Contributor

GreggHelt2 commented Jun 22, 2023

I tried to test a migration from the prenodes tag using invokeai-configure --root="/home/invokeuser/userfiles/" --yes and got this error:
....
ConfigKeyError: Key not found: '__metadata__'
    full_key: __metadata__
    object_type=dict

Thank you. The ability to migrate from earlier versions is not yet wrapped into the configure script. I'll add it today.

I'm getting the

ConfigKeyError: Key not found: '__metadata__'

error as well.

I tried running the migrate_models_to_3.0.py script, but it didn't help. Trying a fresh install (venv and all) now, but on a slow connection...

@ebr
Member

ebr commented Jun 26, 2023

@lstein I re-tested this and everything works. I also pushed a commit fixing UI model selection, which broke due to the naming change from pipeline to main.

lstein added a commit that referenced this pull request Jun 26, 2023
"Fixes" the test suite generally so it doesn't fail CI, but some tests
needed to be skipped/xfailed due to recent refactor.

- ignore three test suites that broke following the model manager
refactor
- move `InvocationServices` fixture to `conftest.py`
- add `boards` items to the `InvocationServices`  fixture

This PR makes the unit tests work, but end-to-end tests are temporarily
commented out due to `invokeai-configure` being broken in `main` -
pending #3547

Looks like a lot of the tests need to be rewritten as they reference
`TextToImageInvocation` / `ImageToImageInvocation`
@lstein lstein enabled auto-merge June 28, 2023 19:27
@lstein lstein merged commit 2d85f9a into main Jun 28, 2023
6 of 7 checks passed
@lstein lstein deleted the lstein/installer-for-new-model-layout branch June 28, 2023 19:31
lstein added a commit that referenced this pull request Jun 28, 2023
Rewrite LoRA to be applied by model patching, which gives us two benefits:
1) During model execution, the result is computed only against the model weights, whereas with hooks we must compute against the model and each LoRA separately.
2) Since the LoRA is now patched into the model weights, there is no need to keep the LoRA in VRAM.

Results:
Speed:
| loras count | hook | patch |
| --- | --- | --- |
| 0 | ~4.92 it/s | ~4.92 it/s |
| 1 | ~3.51 it/s | ~4.89 it/s |
| 2 | ~2.76 it/s | ~4.92 it/s |

VRAM:
| loras count | hook | patch |
| --- | --- | --- |
| 0 | ~3.6 gb | ~3.6 gb |
| 1 | ~4.0 gb | ~3.6 gb |
| 2 | ~4.4 gb | ~3.7 gb |

Based on #3547; waiting for that to merge.
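
To illustrate why patching wins on both counts, a toy sketch of folding a LoRA delta into a Linear layer's weights for the duration of a run (illustrative only, not the commit's implementation):

import torch
from contextlib import contextmanager

@contextmanager
def lora_patched(layer: torch.nn.Linear, lora_up: torch.Tensor, lora_down: torch.Tensor, scale: float = 1.0):
    # lora_up: (out_features, rank); lora_down: (rank, in_features)
    delta = (lora_up @ lora_down) * scale
    layer.weight.data += delta.to(layer.weight.dtype)
    try:
        yield  # every forward pass now runs at full speed on the patched weights
    finally:
        layer.weight.data -= delta.to(layer.weight.dtype)  # restore (tiny fp drift possible)

A hook-based approach would instead compute x @ lora_down.T @ lora_up.T on every forward call and keep every LoRA resident in VRAM, which matches the slowdown and memory growth in the tables above.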