Inference with a list of prompts without re-loading the model each time #122
Comments
Does it work if you just make prompt a list, e.g. ["Prompt List", "List of Prompts"]? Inference.py seems to have this built in, and your code would be doing the same thing except dumping the CUDA cache and reloading the pipeline on every loop iteration.
Passing a list of prompts as input is not currently supported, whether on a single GPU or on multiple GPUs with xDiT.
Hello, I have encountered the same problem as @JosephPai. Do you have a solution to fix it?
In issue Tencent#122, every iteration of the for loop calls parallelize_transformer, which resets the pipeline and causes the problem. Moving the parallelize_transformer call into __init__ solves the issue without affecting other functions.
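The fix described above can be sketched with hypothetical stand-ins for the real pipeline and the xDiT helper (DummyPipeline, Sampler, and the stub parallelize_transformer below are illustrative names, not the actual HunyuanVideo code):

```python
class DummyPipeline:
    """Stand-in for the real video generation pipeline."""
    def __init__(self):
        self.parallelized = False

def parallelize_transformer(pipe):
    # Stand-in for the xDiT helper: wrapping an already-wrapped
    # pipeline a second time is what breaks the second prompt.
    if pipe.parallelized:
        raise RuntimeError("pipeline parallelized twice")
    pipe.parallelized = True

class Sampler:
    def __init__(self):
        self.pipe = DummyPipeline()
        # Fix: parallelize once, at construction time ...
        parallelize_transformer(self.pipe)

    def predict(self, prompt):
        # ... instead of calling parallelize_transformer here,
        # which would re-wrap the pipeline on every prompt.
        return f"video for {prompt!r}"

sampler = Sampler()
results = [sampler.predict(p) for p in ["a cat", "a dog"]]
```

With the call in `predict`, the second prompt would raise; with it in `__init__`, the loop runs over any number of prompts against the same wrapped pipeline.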
I found that this problem is caused by re-running the parallelize_transformer function on each iteration. I have solved it in #130.
Solves issue #122 by updating inference.py.
Hi authors, I would like to run the model on a list of prompts in multi-GPU mode. To save the time spent loading the pre-trained model each time, I modified the sample_video.py file with a for loop over a list of prompts. However, the code works well for the first prompt but always fails at the second one. Could you help look into this issue? Thanks.
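The modification described above can be sketched as follows; `load_pipeline` and the callable pipeline are hypothetical stand-ins for the actual model-loading and sampling code in sample_video.py:

```python
def load_pipeline():
    # Stand-in for the expensive step of loading the pre-trained weights.
    return lambda prompt: f"video:{prompt}"

# Load the model once, outside the loop.
pipe = load_pipeline()

prompts = ["a cat surfing", "a dog skiing"]

videos = []
for prompt in prompts:
    # Reuse the already-loaded pipeline for every prompt instead of
    # reloading weights (or re-parallelizing) per iteration.
    videos.append(pipe(prompt))
```

The intent is that only the per-prompt sampling happens inside the loop; as the later comments note, the actual failure came from pipeline-parallelization code that was also being re-run inside the loop.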
Error message: