It seems that `get_T2I_Flash_pipeline` requires more than 40 GB of memory on a single A100 GPU during inference. However, when I tried to load the model across 2×40 GB A100 GPUs, it appears your code does not support this yet.
Could you provide a way to load this model on multiple GPUs?
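For context, the kind of multi-GPU loading I had in mind is sketched below, using Hugging Face Accelerate's `init_empty_weights` / `load_checkpoint_and_dispatch` to shard one large model across both cards. This is only an illustration of the general technique, not your API: `YourTransformer`, its config, the block class name, and the checkpoint path are all placeholders, since I don't know how `get_T2I_Flash_pipeline` constructs its model internally.

```python
# Hypothetical sketch: shard a single large torch model across 2x40GB
# A100s with Hugging Face Accelerate. `YourTransformer`, the checkpoint
# path, and the no-split class name are placeholders -- adapt them to
# however get_T2I_Flash_pipeline actually builds its model.
import torch
from accelerate import init_empty_weights, load_checkpoint_and_dispatch

from your_repo.models import YourTransformer  # placeholder import

# Build the model skeleton on the "meta" device, allocating no weight
# memory, so a >40 GB model can be instantiated before dispatch.
with init_empty_weights():
    model = YourTransformer()  # placeholder: same config the pipeline uses

# Load the checkpoint and let Accelerate split layers across all visible
# GPUs, capping usage below 40 GB per card so activations still fit.
model = load_checkpoint_and_dispatch(
    model,
    checkpoint="path/to/checkpoint",  # placeholder path
    device_map="auto",
    max_memory={0: "38GiB", 1: "38GiB"},
    no_split_module_classes=["TransformerBlock"],  # placeholder class name
    dtype=torch.float16,
)
```

Would something along these lines work with your pipeline, or does the code assume the whole model lives on one device?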