
ONNX Support #3562

Merged: 91 commits into main, Jul 31, 2023
Conversation

@StAlKeR7779 (Contributor) commented Jun 20, 2023

Note: this branch is based on #3548, not on main

While figuring out what needed to be done to implement ONNX, I found that I could put together a draft of it pretty quickly, so... here it is :)
Supports LoRA and TI.
As an example, a cat with the sadcatmeme LoRA:
[two example images attached]

blessedcoolant and others added 30 commits June 17, 2023 21:28
Update the text-to-image and image-to-image graphs to work with the new model loader. Currently only supports 1.x models. Will update this soon to make it work with all models.
Basically updated all slices to be more descriptive in their names. Did so in order to make sure there's a good naming scheme available for secondary models.
So the long names do not get cut off.
…ms to model schema (as they are not referenced in the case of a Literal definition)
@RyanJDick (Collaborator)
Was that Yoda image done on an Olive model, by chance? Olive models don't support applying LoRAs, I believe. If it's just a standard ONNX model, could you send me the model and LoRA you used?

Correct. It was an Olive-optimized model.

@brandonrising brandonrising requested a review from ebr as a code owner July 29, 2023 01:02
@lstein (Collaborator) commented Jul 30, 2023

Comments:

  1. I had to pip install onnxruntime.
  2. Ran invokeai-model-install --add axodoxian/stable_diffusion_onnx from the command line -- worked
  3. Confirmed that model appears in the Web Model Manager. It doesn't appear in the model manager TUI however; I can help with this.
  4. Tried the text2image linear workflow and got this:
[2023-07-29 20:51:55,415]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "/home/lstein/Projects/InvokeAI/invokeai/app/services/processor.py", line 86, in __process
    outputs = invocation.invoke(
  File "/home/lstein/Projects/InvokeAI/invokeai/app/invocations/onnx.py", line 261, in invoke
    unet.create_session(h, w)
  File "/home/lstein/Projects/InvokeAI/invokeai/backend/model_management/models/base.py", line 625, in create_session
    raise e
  File "/home/lstein/Projects/InvokeAI/invokeai/backend/model_management/models/base.py", line 623, in create_session
    self.session = InferenceSession(self.proto.SerializeToString(), providers=providers, sess_options=sess)
  File "/home/lstein/invokeai-main/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 383, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/lstein/invokeai-main/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 435, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Failed to find kernel for GroupNorm(1) (node GroupNorm_0). Kernel not found

[2023-07-29 20:51:55,419]::[InvokeAI]::ERROR --> Error while invoking:
[ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Failed to find kernel for GroupNorm(1) (node GroupNorm_0). Kernel not found
  5. Searched the web and found onnxruntime-gpu. Installed that (see the provider-check sketch below).
  6. Text2Image works, getting ~9 it/s for a 512x512 image. Stock SD-1.5 gives ~8 it/s at the same resolution.
  7. Img2Img is not working for me. The model doesn't appear in the model selection menu.
  8. Tried the node editor. Graphs made with ONNXSD1ModelLoader did not work (validation errors), but a graph using the Onnx Model Loader node loaded and rendered an image.

Overall a very positive experience. Just needs a better step-by-step guide for people like me who are coming to the ONNX architecture from basically zero.
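
For anyone hitting the same kernel error, a quick way to check which execution providers the installed onnxruntime actually offers (a minimal diagnostic sketch, assuming onnxruntime is importable):

import onnxruntime as ort

# The CPU-only "onnxruntime" package typically reports just
# ['CPUExecutionProvider']; "onnxruntime-gpu" should also list
# 'CUDAExecutionProvider'.
print(ort.get_available_providers())

If 'CUDAExecutionProvider' is missing, sessions run on the CPU provider, which is consistent with the GroupNorm NOT_IMPLEMENTED failure above.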

@lstein (Collaborator) left a review:
Needs img2img and canvas support and a better step-by-step guide. Overall, though, looks great.

@brandonrising (Collaborator)
> [quoting @lstein's test report above]

Thanks for checking it out! I updated the installer to install onnxruntime/onnxruntime-gpu/onnxruntime-directml through optional dependencies, based on the user's selections for different combinations of GPUs and driver installations. Do you think updating the README to tell people they need to install the correct onnxruntime for their environment would suffice?

Also, yeah, I currently have it set up to only allow onnxruntime for linear text-to-image and the node editor. It's disabled on all other screens. I was thinking it would be good to get it out with minimal functionality and slowly roll out more features in separate PRs as we move forward, rather than maintaining this PR as it gets bigger. Definitely open to a discussion around this, though.

I'll definitely work on better documentation for how to use ONNX in workflows; that's a weak spot in this PR!
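
If those optional dependencies are exposed as pip extras, installation could look roughly like this (the extras names here are assumptions for illustration, not confirmed in this thread):

pip install "invokeai[onnx]"           # hypothetical extra: CPU-only onnxruntime
pip install "invokeai[onnx-cuda]"      # hypothetical extra: onnxruntime-gpu for NVIDIA GPUs
pip install "invokeai[onnx-directml]"  # hypothetical extra: onnxruntime-directml for DirectML on Windows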

@brandonrising brandonrising changed the title [WIP] ONNX Support ONNX Support Jul 31, 2023
@hipsterusername hipsterusername merged commit 81654da into main Jul 31, 2023
7 of 8 checks passed
@hipsterusername hipsterusername deleted the feat/onnx branch July 31, 2023 21:34
@hipsterusername (Member)
Merged after discussion with @lstein and Brandon on Discord.

@psychedelicious (Collaborator) left a review:

There are a number of issues that need to be fixed.

Also, the PR was merged with 9 errors on the frontend check; there are a number of typing problems. I've added comments for each problem.

@@ -61,6 +61,7 @@ dependencies = [
"numpy",
"npyscreen",
"omegaconf",
"onnx",
Review comment:
  • I had to install this after updating to main in order for the app to run.
  • Then I also had to install onnxruntime manually (I assume because onnx was now installed); otherwise the app did not start up.

I did not ask for onnx, so I'm not sure the optional dependencies are working as expected.

Comment on lines 229 to +243
export const zMainModel = z.object({
model_name: z.string().min(1),
base_model: zBaseModel,
model_type: zModelType,
});

/**
* Type alias for model parameter, inferred from its zod schema
*/
export type MainModelParam = z.infer<typeof zMainModel>;

/**
* Type alias for model parameter, inferred from its zod schema
*/
export type OnnxModelParam = z.infer<typeof zMainModel>;
Review comment:

Schemas with a single valid value for a property should just use the literal there.

If the main model schema needs model_type now, it should have model_type: z.literal("main"), and the onnx model schema should likewise use z.literal("onnx"). Otherwise the client will accept an onnx or SD main model in either situation.
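
A minimal sketch of that suggestion, assuming a separate zOnnxModel schema sits alongside zMainModel (zOnnxModel and the zBaseModel stand-in here are illustrative, not the actual source):

import { z } from "zod";

// Stand-in; the real zBaseModel schema lives elsewhere in the module.
const zBaseModel = z.enum(["sd-1", "sd-2"]);

// Discriminating on model_type means each schema accepts only its own kind.
export const zMainModel = z.object({
  model_name: z.string().min(1),
  base_model: zBaseModel,
  model_type: z.literal("main"), // only 'main' models validate here
});

export const zOnnxModel = z.object({
  model_name: z.string().min(1),
  base_model: zBaseModel,
  model_type: z.literal("onnx"), // only 'onnx' models validate here
});

export type MainModelParam = z.infer<typeof zMainModel>;
export type OnnxModelParam = z.infer<typeof zOnnxModel>;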

@hipsterusername hipsterusername restored the feat/onnx branch August 1, 2023 02:57
@psychedelicious psychedelicious mentioned this pull request Aug 1, 2023
@lalith-mcw
> [quoting the traceback from @lstein's comment above]

This error shouldn't be there if I am trying to run the CPU version, yet it still appears even when I install the CPU-specific toolsets.
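
For context, the GroupNorm node in that trace is most likely the com.microsoft contrib op that Olive optimization inserts, and at the time onnxruntime's CPU execution provider had no kernel for it, so an Olive-optimized model fails with NOT_IMPLEMENTED on CPU no matter which onnxruntime package is installed. A minimal sketch to confirm what a session actually runs on (the model path is a placeholder):

import onnxruntime as ort

# Request the CPU provider explicitly; a model containing GPU-only contrib
# ops (e.g. GroupNorm) fails at session creation rather than falling back.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
print(sess.get_providers())  # providers actually attached to this session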
