
support FLUX? #2421

Open
algorithmconquer opened this issue Nov 7, 2024 · 10 comments
Labels
question Further information is requested triaged Issue has been triaged by maintainers

Comments

@algorithmconquer

Is there any plan to support FLUX in the future? Currently, https://github.com/NVIDIA/TensorRT does not support multi-GPU parallelism.

@hello-11
Collaborator

hello-11 commented Nov 7, 2024

@algorithmconquer thanks for your interest in TrtLLM. I cannot open the link. Could you send the right link?

@hello-11 hello-11 added triaged Issue has been triaged by maintainers waiting for feedback feature request New feature or request labels Nov 8, 2024
@algorithmconquer
Author

@hello-11 The right link is https://github.com/NVIDIA/TensorRT

@algorithmconquer
Author

@hello-11 Could you provide a convert_checkpoint.py to convert the DiT (transformer) of Flux?

@hello-11
Collaborator

hello-11 commented Nov 8, 2024

@algorithmconquer Can this DiT example help you?

@algorithmconquer
Author

@hello-11 I have two questions. Question 1: in https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/dit, why is only pp_size = 1 supported? Question 2: the DiT model layer names differ from those of the Flux DiT (transformer); how can convert_checkpoint.py be adapted based on https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/dit?

@ChunhuanLin

@algorithmconquer We do plan to support Flux in the near future.

@hello-11 hello-11 added question Further information is requested and removed feature request New feature or request labels Nov 11, 2024
@ChunhuanLin

> @hello-11 I have two questions; Question1: In https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/dit, why only support pp_size = 1? Question2: dit model layer name is different from the dit(transformer) of flux, how to achieve convert_checkpoint.py according to https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/dit?

  1. Please see why Dit does not support pp_size > 1 #2427 for the discussion.
  2. We need to modify convert_checkpoint.py to follow the Flux checkpoint layout. Besides this script, we also need to add support for the Flux model here.
    Some work is still needed to support Flux, and we plan to support it in the near future.
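To illustrate point 2, here is a minimal sketch of what a checkpoint-key remapping step in a modified convert_checkpoint.py might look like. The specific key names and rename rules below are hypothetical placeholders for illustration only; the actual Flux and TensorRT-LLM naming conventions must be checked against the real checkpoints.

```python
def remap_flux_keys(state_dict):
    """Return a new state dict with Flux-style layer names rewritten.

    Hedged sketch: assumes (hypothetically) that the source checkpoint
    uses keys like 'double_blocks.0.img_attn.qkv.weight' while the
    target convention expects 'blocks.0.attn.qkv.weight'. Replace the
    rules below with the real mapping for your checkpoints.
    """
    rules = [
        ("double_blocks.", "blocks."),  # hypothetical block-prefix rename
        (".img_attn.", ".attn."),       # hypothetical submodule rename
    ]
    out = {}
    for key, tensor in state_dict.items():
        new_key = key
        for old, new in rules:
            new_key = new_key.replace(old, new)
        out[new_key] = tensor  # tensors are passed through unchanged
    return out
```

A table-driven rename like this keeps the mapping auditable in one place, which matters when two model families share a block structure but differ only in naming, as the thread suggests for DiT vs. Flux.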

@algorithmconquer
Author

@ChunhuanLin Thank you for your response!

@MagicRUBICK

Is there a planned release date for flux.1-dev?

@pashanitw

I would love to see support for Flux and Stable Diffusion 3.5.
