[BUG] training crash when set --tp-comm-overlap #1274
Comments
Hello, you can pull the latest code and use …
Hi, I pulled the latest repo and got the same error.
How about trying the newest TE (Transformer Engine)?
Sorry, I have set …
Maybe the local implementation does not support tp-overlap. |
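Regarding the TE suggestion above, a quick way to check which Transformer Engine build the container ships (this assumes TE exposes a __version__ attribute, as recent releases do):

# Print the Transformer Engine version inside the NGC container used in the repro
docker run --rm --gpus=all nvcr.io/nvidia/pytorch:24.04-py3 \
    python -c "import transformer_engine as te; print(te.__version__)"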
Describe the bug
Training crashes when --tp-comm-overlap is set.
To Reproduce
MODEL_PARALLEL_ARGS=(
    --tensor-model-parallel-size 8    # 8-way tensor model parallelism
    --pipeline-model-parallel-size 1  # no pipeline parallelism
    --use-flash-attn                  # FlashAttention kernels
    --sequence-parallel               # sequence parallelism across the TP group
    --tp-comm-overlap                 # overlap TP communication with GEMMs; this flag triggers the crash
)
docker run --rm --gpus=all --shm-size=10g --ulimit memlock=-1 --ulimit stack=67108864 --ipc=host -v /mnt/data01/fake_data:/home nvcr.io/nvidia/pytorch:24.04-py3 bash -c "cd /home/Megatron-LM && bash examples/gpt3/single.sh"
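For reference, a minimal sketch of the launch environment that --tp-comm-overlap generally expects. CUDA_DEVICE_MAX_CONNECTIONS=1 is documented by Megatron-LM as required for sequence parallelism; NVTE_UB_WITH_MPI is a Transformer Engine userbuffers setting whose applicability depends on how TE was built, so treat it as an assumption to verify:

# Sketch only: verify these against your Megatron-LM and TE versions.
export CUDA_DEVICE_MAX_CONNECTIONS=1   # required by Megatron-LM when --sequence-parallel is used
export NVTE_UB_WITH_MPI=1              # assumed: bootstrap TE userbuffers over MPI, if TE was built with MPI support
bash examples/gpt3/single.sh           # then launch training as in the repro above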
Expected behavior
Training runs successfully.
Stack trace/logs
(not provided)
Environment
NGC container nvcr.io/nvidia/pytorch:24.04-py3 (see the docker command above).
Proposed fix
(none proposed)
Additional context
(none)