support FLUX? #2421
@algorithmconquer Thanks for your interest in TrtLLM. I cannot open the link. Could you send the right link?
@hello-11 The right link is https://github.com/NVIDIA/TensorRT.
@hello-11 Can you provide a convert_checkpoint.py to convert the DiT (transformer) of FLUX?
@algorithmconquer Can this DiT example help you?
@hello-11 I have two questions. Question 1: In https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/dit, why is only pp_size = 1 supported? Question 2: The DiT model's layer names differ from those of the FLUX DiT (transformer), so how can I write a convert_checkpoint.py based on https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/dit?
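The layer-name mismatch in question 2 is typically handled by renaming the checkpoint's state-dict keys before handing them to the conversion script. Below is a minimal, hypothetical sketch of such a remapping pass; every layer name in the mapping table is a placeholder for illustration, not the actual FLUX or TensorRT-LLM DiT naming, which must be checked against the real checkpoints.

```python
# Hypothetical sketch: rename FLUX-style transformer weight keys into
# DiT-style keys before running a convert_checkpoint.py-like script.
# All names below are illustrative assumptions, not the real layer names.
import re

# Illustrative (pattern -> replacement) rename rules, applied in order.
NAME_MAP = [
    (r"^transformer_blocks\.(\d+)\.attn\.to_q\.", r"blocks.\1.attention.query."),
    (r"^transformer_blocks\.(\d+)\.attn\.to_k\.", r"blocks.\1.attention.key."),
    (r"^transformer_blocks\.(\d+)\.attn\.to_v\.", r"blocks.\1.attention.value."),
    (r"^transformer_blocks\.(\d+)\.ff\.", r"blocks.\1.mlp."),
]

def remap_key(key: str) -> str:
    """Apply the first matching rename rule; leave unmatched keys unchanged."""
    for pattern, repl in NAME_MAP:
        new_key, n_subs = re.subn(pattern, repl, key)
        if n_subs:
            return new_key
    return key

def remap_state_dict(state_dict: dict) -> dict:
    """Return a new state dict with every key renamed via remap_key."""
    return {remap_key(k): v for k, v in state_dict.items()}

if __name__ == "__main__":
    # Tensors replaced by floats so the sketch runs without torch installed.
    sd = {
        "transformer_blocks.0.attn.to_q.weight": 0.0,
        "transformer_blocks.0.ff.net.0.weight": 1.0,
    }
    print(sorted(remap_state_dict(sd)))
    # -> ['blocks.0.attention.query.weight', 'blocks.0.mlp.net.0.weight']
```

In practice the rename table has to be derived by diffing the two models' actual state-dict keys side by side, and any weights that also need transposing or splitting (e.g. fused QKV projections) require more than a rename.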
@algorithmconquer We do have a plan to support FLUX in the near future.
@ChunhuanLin Thank you for your response! |
Is there a planned release date for flux.1-dev? |
I would love to see FLUX support, as well as Stable Diffusion 3.5.
Is there any plan to support FLUX in the future? https://github.com/NVIDIA/TensorRT currently does not support multi-GPU parallelism.