
Error when running Flux1.0 dev and HunyuanDiT-v1.2-Diffusers with multiple prompts #398

Open
henryhe4004 opened this issue Dec 18, 2024 · 3 comments


@henryhe4004

Docker image: https://hub.docker.com/r/thufeifeibear/xdit-dev
GPUs: 8 × A100
CUDA version: 12.5
Driver version: 535.216.01

Error: terminate called after throwing an instance of 'c10::DistBackendError'

Script:
torchrun --nproc_per_node=8 ./examples/flux_example.py --model black-forest-labs/FLUX.1-dev --pipefusion_parallel_degree 2 --ulysses_degree 2 --ring_degree 2 --height 1024 --width 1024 --no_use_resolution_binning --num_inference_steps 28 --warmup_steps 5 --prompt 'brown dog laying on the ground with a metal bowl in front of him.' 'A serene sunset over a tranquil ocean.' 'A dense forest covered in morning mist.' 'A majestic mountain range under a starry sky.' 'A peaceful village surrounded by rolling green hills.' 'A desert with golden sand dunes under a clear blue sky.' 'A tropical beach with palm trees and turquoise waters.' 'A futuristic cyborg warrior in a neon-lit city.' 'A mysterious hooded figure standing in the rain.' 'A regal queen seated on a golden throne.' --use_parallel_vae

[rank4]: Traceback (most recent call last):
[rank4]: File "/workspace/xDiT/./examples/flux_example.py", line 92, in <module>
[rank4]: main()
[rank4]: File "/workspace/xDiT/./examples/flux_example.py", line 51, in main
[rank4]: output = pipe(
[rank4]: File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank4]: return func(*args, **kwargs)
[rank4]: File "/workspace/xDiT/xfuser/model_executor/pipelines/base_pipeline.py", line 218, in wrapper
[rank4]: return func(*args, **kwargs)
[rank4]: File "/workspace/xDiT/xfuser/model_executor/pipelines/base_pipeline.py", line 166, in data_parallel_fn
[rank4]: return func(self, *args, **kwargs)
[rank4]: File "/workspace/xDiT/xfuser/model_executor/pipelines/base_pipeline.py", line 186, in check_naive_forward_fn
[rank4]: return func(self, *args, **kwargs)
[rank4]: File "/workspace/xDiT/xfuser/model_executor/pipelines/pipeline_flux.py", line 242, in __call__
[rank4]: ) = self.encode_prompt(
[rank4]: File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/flux/pipeline_flux.py", line 351, in encode_prompt
[rank4]: pooled_prompt_embeds = self._get_clip_prompt_embeds(
[rank4]: File "/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/flux/pipeline_flux.py", line 287, in _get_clip_prompt_embeds
[rank4]: prompt_embeds = self.text_encoder(text_input_ids.to(device), output_hidden_states=False)
[rank4]: RuntimeError: CUDA error: an illegal memory access was encountered
[rank4]: CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
[rank4]: For debugging consider passing CUDA_LAUNCH_BLOCKING=1
[rank4]: Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
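
Note that the illegal memory access is reported asynchronously, so the frame above (the CLIP text encoder call) may not be the actual fault site. Rerunning under CUDA_LAUNCH_BLOCKING=1, as the error message itself suggests, should localize the failing kernel:

CUDA_LAUNCH_BLOCKING=1 torchrun --nproc_per_node=8 ./examples/flux_example.py <same arguments as above>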

@feifeibear
Collaborator

Could you keep only one prompt in the args and see if that works?

@henryhe4004
Author

It works with only one prompt.

@feifeibear
Collaborator

You would like to feed multiple prompts to the model in one call. We never use it this way, because you can feed the prompts one by one and the results are the same; see the sketch below.
If you need this feature, we will try to fix the bug.
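
As an interim workaround, a per-prompt loop along these lines should sidestep the batched-prompt path (a minimal sketch, assuming `pipe` is the pipeline object built in examples/flux_example.py and the standard diffusers call signature; result saving is elided):

# Sketch: feed prompts one at a time instead of as a batched list.
# Assumes `pipe` is already constructed and initialized across ranks,
# as in examples/flux_example.py.
prompts = [
    "brown dog laying on the ground with a metal bowl in front of him.",
    "A serene sunset over a tranquil ocean.",
    # ... remaining prompts from the command above
]

for prompt in prompts:
    output = pipe(
        prompt=prompt,            # a single prompt per call works, per above
        height=1024,
        width=1024,
        num_inference_steps=28,
        output_type="pil",
    )
    # save output.images[0] here, on whichever rank holds the final image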

@feifeibear feifeibear changed the title Running Flux1.0 dev and HunyuanDiT-v1.2-Diffusers with VAE parallism and meet cuda error Error when running Flux1.0 dev and HunyuanDiT-v1.2-Diffusers with multiple prompts Dec 24, 2024