Onnx Pipeline: Inference for text to image conversion #3380
Conversation
@saikrishna2893 - Thanks for the contribution! To confirm, is this integrated into the new
I really appreciate the work that went into this.
I'm sad to say that this will have to be modified in order to work with nodes. In particular, CLI.py is going to disappear from the repository soon. Please take a look at the invokeai/app tree, in particular invokeai/app/invocation/latents.py, to understand how the new text-to-image inference system works.
Does the ONNX pipeline take advantage of CUDA, and if so, how does it perform?
Also note the CI failures.
The ONNX pipeline currently uses the CPU as its device. The pipeline makes use of the OpenVINO execution provider for optimized inference.
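To make the provider discussion above concrete, here is a minimal sketch of how an ONNX Runtime session could prefer OpenVINO (or CUDA, answering the question above) and fall back to CPU. The `pick_providers` helper and its preference order are illustrative assumptions, not code from this PR; `get_available_providers()` and the `providers=` argument to `InferenceSession` are standard onnxruntime API.

```python
# Hypothetical helper (not from this PR): choose execution providers in a
# preferred order, always keeping CPUExecutionProvider as a fallback.
PREFERENCE = [
    "OpenVINOExecutionProvider",  # used by this PR's pipeline on CPU
    "CUDAExecutionProvider",      # would apply if a CUDA build is installed
    "CPUExecutionProvider",
]

def pick_providers(available):
    """Return the preferred providers present on this machine, best first."""
    chosen = [p for p in PREFERENCE if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen

# Usage with onnxruntime (not executed here; "unet.onnx" is a placeholder path):
#   import onnxruntime as ort
#   session = ort.InferenceSession(
#       "unet.onnx",
#       providers=pick_providers(ort.get_available_providers()),
#   )
```

ONNX Runtime tries the providers in list order at session creation, so putting OpenVINO first reproduces the behavior described above while still working on machines without it.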
@lstein can you point out any documentation related to the use of the app and node structure, along with example commands to run and test? We have done a code walkthrough of invokeai/app, but some of its workings are still unclear. We have checked PR #3180, the description from the discussion page, and other PRs, and have seen some commands using pipes to create multiple inference sessions. Any further information on this would be helpful. Thanks. We faced errors when running the following commands:
Hello! Following up on the request from discussions with @lalith-mcw here - CCing @lstein and @StAlKeR7779 for visibility. To confirm, is there a reason you're looking for CLI documentation? I ask because 3.0 supports a graph-based API that can be accessed via OpenAPI documentation. It may be easier to get direct implementation support if you join Discord; that is where we offer live dev feedback and Q&A, and where a number of folks find implementation guidance. In any case, I believe you'll need the following guidance, which @lstein and/or @StAlKeR7779 can provide more details on:
If you reach out to me on Discord, I can create a channel for us to discuss this project. Thanks again for your and the team's support.
Currently in this PR we do provide an option for the user to select their own model type between
Hi @saikrishna2893, we have implemented ONNX support in #3562. It is integrated into the nodes backend, but support is limited to text-to-image only for now.
Initial version of an ONNX-based inference pipeline for text-to-image conversion based on Stable Diffusion models.
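As a rough illustration of what an ONNX text-to-image run looks like, here is a sketch using Hugging Face diffusers' `OnnxStableDiffusionPipeline`. This is not the PR's CLI integration; the model id, prompt, and step count are assumptions chosen for the example, and the CPU provider matches the behavior described in this PR.

```python
# Illustrative settings (assumptions, not values from the PR):
PROMPT = "a photo of an astronaut riding a horse"
STEPS = 25
PROVIDER = "CPUExecutionProvider"  # the PR's pipeline runs on CPU

def run():
    # Import inside the function so the sketch can be read without
    # diffusers/onnxruntime installed.
    from diffusers import OnnxStableDiffusionPipeline

    # Load an ONNX export of Stable Diffusion and run text-to-image inference.
    pipe = OnnxStableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # assumed model id with an ONNX revision
        revision="onnx",
        provider=PROVIDER,
    )
    return pipe(PROMPT, num_inference_steps=STEPS).images[0]

if __name__ == "__main__":
    run().save("astronaut.png")
```

Swapping `PROVIDER` for `"OpenVINOExecutionProvider"` (with the OpenVINO build of onnxruntime installed) would mirror the optimized CPU path this PR describes.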
Performance-related information: tested on Raphael (AMD 7600X) and Raptor Lake (Intel i5-13000K).
ONNX-based inference performance: InvokeAI-pipelines.pdf
Sample output from the ONNX pipeline tested on Raptor Lake (i5-13000K) on CPU:
Sample output from the PyTorch pipeline tested on Raptor Lake (i5-13000K) on CPU: