[Bug] Potential risk of getting stuck in PipeFusion #310
Comments
@feifeibear Can you help me double-check this logic? I am not very familiar with this project.
The code snippet in your issue is very helpful, but could you also give us a run script to reproduce the error in xDiT? Also, what kind of GPU cluster are you using?
@feifeibear Sorry, I have been busy recently. It's hard to reproduce the error on GPU, because the only way I can make the patch latent large enough is to increase the output image size, and the image would have to be so large that it runs out of memory before the error appears.
num_pipeline_patch cannot be set too large; for example, I sometimes hit a hang when it is set to 16.
The problem has been fixed with the appropriate NCCL env settings!
@feifeibear Hi, can you tell me which NCCL env variables should be set to solve the problem?
You can try `export NCCL_DEBUG='INFO'` to get more information and check whether there are messages like 'via SHM/direct/direct'. If so, try `export NCCL_SHM_DISABLE='1'` before running the scripts. @HOOLoLo tell me if you still get stuck.
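A minimal sketch of how these variables could be applied from Python before the NCCL backend is initialized (the init call and print are placeholders for illustration, not part of the xDiT scripts):

```python
import os

# Assumption: these must be set before torch.distributed initializes NCCL.
os.environ["NCCL_DEBUG"] = "INFO"       # log NCCL transport selection, e.g. 'via SHM/direct/direct'
os.environ["NCCL_SHM_DISABLE"] = "1"    # disable the shared-memory transport if SHM is the culprit

import torch.distributed as dist

# Placeholder init; real xDiT run scripts handle process-group setup themselves.
dist.init_process_group(backend="nccl")
print(f"rank {dist.get_rank()} started with NCCL_SHM_DISABLE={os.environ['NCCL_SHM_DISABLE']}")
```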
I have submitted an issue to PyTorch: pytorch/pytorch#138074, which describes the problem, hoping they will add a new interface for setting a custom stream for communication.
The problem hasn't shown up so far because the NCCL send kernel completes without waiting for the matching recv kernel when the data size is less than 64MB.
Do you guys know of any other solutions?
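To make the failure mode concrete, here is a minimal sketch (not from this thread; the ordering, tensor sizes, and file name are illustrative assumptions) of two ranks that both issue a blocking send before their recv. Small messages complete via NCCL's eager path, but once the payload grows past roughly the 64MB threshold mentioned above, both sends wait for the peer's recv and the ranks hang:

```python
import torch
import torch.distributed as dist

def exchange(rank: int, numel: int) -> torch.Tensor:
    """Both ranks send first, then receive. Small tensors complete eagerly;
    large tensors make dist.send block until the peer posts its recv, so the
    two sends can end up waiting on each other."""
    peer = 1 - rank
    payload = torch.ones(numel, device=f"cuda:{rank}")
    buf = torch.empty(numel, device=f"cuda:{rank}")

    dist.send(payload, dst=peer)   # may block for large tensors
    dist.recv(buf, src=peer)       # never reached if both sends block
    return buf

if __name__ == "__main__":
    # Hypothetical launch: torchrun --nproc_per_node=2 p2p_hang_demo.py
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    exchange(rank, numel=1024)              # ~4KB payload: completes via eager send
    exchange(rank, numel=32 * 1024 * 1024)  # ~128MB payload: risks hanging
    dist.destroy_process_group()
```

A commonly suggested workaround (not verified against PipeFusion's stream usage) is to post non-blocking isend/irecv pairs, or use batch_isend_irecv, so both operations are enqueued before either side waits.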