
custom allreduce + torch.compile #10121

Merged
15 commits merged into vllm-project:main on Nov 26, 2024

Conversation

@SageMoore SageMoore (Contributor) commented Nov 7, 2024

This PR changes the pynccl all-reduce to be out-of-place and removes support for torch.distributed's all-reduce.
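For context, a minimal sketch of the in-place vs. out-of-place distinction (the `comm.all_reduce` call here is a hypothetical stand-in, not the exact vLLM pynccl API):

```python
import torch
import torch.distributed as dist


def all_reduce_in_place(input_: torch.Tensor, group=None) -> torch.Tensor:
    """In-place: torch.distributed.all_reduce mutates the input buffer."""
    dist.all_reduce(input_, group=group)
    return input_


def all_reduce_out_of_place(input_: torch.Tensor, comm) -> torch.Tensor:
    """Out-of-place: the result comes back as a fresh tensor and the input
    is left untouched, which is friendlier to torch.compile and cuda graph
    capture. A `comm.all_reduce` call that returns a new tensor is assumed
    here for illustration."""
    out = comm.all_reduce(input_)
    return out
```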


github-actions bot commented Nov 7, 2024

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of CI tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀


mergify bot commented Nov 8, 2024

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @SageMoore.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Nov 8, 2024
@mergify mergify bot removed the needs-rebase label Nov 8, 2024
Review comment on the following diff lines:

else:
    torch.distributed.all_reduce(input_, group=self.device_group)
assert pynccl_comm is not None
with pynccl_comm.change_state(enable=True,
Member

we can change pynccl to be always enabled.
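A rough sketch of what "always enabled" could look like from the caller's side (hypothetical wrapper; the real `change_state` context manager and signatures may differ):

```python
import torch


def tp_all_reduce_sketch(input_: torch.Tensor, pynccl_comm) -> torch.Tensor:
    """Hypothetical simplification: if pynccl is always enabled, the
    per-call state toggle becomes unnecessary.

    Previously (sketch):
        with pynccl_comm.change_state(enable=True,
                                      stream=torch.cuda.current_stream()):
            out = pynccl_comm.all_reduce(input_)
    """
    # With pynccl always enabled, the wrapper can call it directly.
    return pynccl_comm.all_reduce(input_)
```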

@youkaichao youkaichao changed the title [V1] Allow piecewise cuda graphs to run with custom allreduce custom allreduce + torch.compile Nov 26, 2024
Signed-off-by: youkaichao <[email protected]>
@mergify mergify bot added the documentation Improvements or additions to documentation label Nov 26, 2024
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: youkaichao <[email protected]>
@youkaichao (Member)

Should work now with `vllm serve meta-llama/Llama-3.1-8B-Instruct -O3 -tp 2`:

input_.shape=torch.Size([2048, 4096]), using pynccl allreduce
(VllmWorkerProcess pid=3908615) input_.shape=torch.Size([2048, 4096]), using pynccl allreduce


(VllmWorkerProcess pid=3908615) input_.shape=torch.Size([256, 4096]), using custom allreduce
input_.shape=torch.Size([248, 4096]), using custom allreduce
...
(VllmWorkerProcess pid=3908615) input_.shape=torch.Size([8, 4096]), using custom allreduce
input_.shape=torch.Size([2, 4096]), using custom allreduce
(VllmWorkerProcess pid=3908615) input_.shape=torch.Size([4, 4096]), using custom allreduce
input_.shape=torch.Size([1, 4096]), using custom allreduce
(VllmWorkerProcess pid=3908615) input_.shape=torch.Size([2, 4096]), using custom allreduce
INFO 11-25 16:46:25 custom_all_reduce.py:224] Registering 2275 cuda graph addresses
(VllmWorkerProcess pid=3908615) input_.shape=torch.Size([1, 4096]), using custom allreduce
(VllmWorkerProcess pid=3908615) INFO 11-25 16:46:26 custom_all_reduce.py:224] Registering 2275 cuda graph addresses

For the profiling size [2048, 4096], it is using pynccl.

For the decode size [256, 4096], it is using custom allreduce.
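The dispatch seen in the log can be sketched roughly as follows (the threshold and method names are illustrative, not the exact vLLM logic):

```python
import torch

# Illustrative cutoff; the real limit depends on the custom allreduce
# buffer size and the tensor-parallel world size.
CUSTOM_AR_MAX_BYTES = 8 * 1024 * 1024


def dispatch_all_reduce(input_: torch.Tensor, custom_ar, pynccl_comm) -> torch.Tensor:
    """Sketch of shape-based dispatch: small decode-sized tensors go through
    the custom allreduce kernel, while large profiling-sized tensors fall
    back to the pynccl allreduce."""
    nbytes = input_.numel() * input_.element_size()
    if custom_ar is not None and nbytes <= CUSTOM_AR_MAX_BYTES:
        return custom_ar.all_reduce(input_)  # hypothetical method name
    return pynccl_comm.all_reduce(input_)    # out-of-place pynccl call
```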

@youkaichao (Member)

@SageMoore thanks for your pioneering investigation!

Comment on lines +362 to +365
# TODO: pynccl should not use `stream=`
# it can just always use the current stream.
out = pynccl_comm.all_reduce(input_,
                             stream=torch.cuda.current_stream())
Collaborator

I was a little confused about what this TODO meant, so I had to dig a bit.

Looks like PyNcclCommunicator creates a new stream in its __init__ method and uses it by default, so we always have to pass in the current stream. Do you know why it behaves this way?
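To illustrate the behavior described above, a self-contained sketch (hypothetical class, not the actual PyNcclCommunicator implementation):

```python
import torch


class CommWithPrivateStream:
    """Sketch of a communicator that creates its own CUDA stream in
    __init__ and uses it by default, so callers must pass
    torch.cuda.current_stream() explicitly to keep the collective ordered
    with the rest of their work."""

    def __init__(self) -> None:
        self.stream = torch.cuda.Stream()  # private default stream

    def all_reduce(self, input_: torch.Tensor, stream=None) -> torch.Tensor:
        stream = self.stream if stream is None else stream
        out = torch.empty_like(input_)
        with torch.cuda.stream(stream):
            out.copy_(input_)  # placeholder for the real NCCL allreduce
        return out


# Callers therefore opt back into the current stream explicitly:
# out = comm.all_reduce(input_, stream=torch.cuda.current_stream())
```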

Member

Mostly historical. We can remove it, but I don't want to do it in this PR.

Contributor Author

I completely agree.

@tlrmchlsmth tlrmchlsmth (Collaborator) left a comment

This looks good to me!

@youkaichao youkaichao enabled auto-merge (squash) November 26, 2024 04:43
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Nov 26, 2024
@SageMoore (Contributor Author)

> @SageMoore thanks for your pioneering investigation!

Thanks for the help getting this over the line!

@youkaichao youkaichao disabled auto-merge November 26, 2024 06:00
@youkaichao youkaichao merged commit 9a88f89 into vllm-project:main Nov 26, 2024
63 of 67 checks passed
@youkaichao youkaichao deleted the custom-ar-stuff branch November 26, 2024 06:00
afeldman-nm pushed a commit to neuralmagic/vllm that referenced this pull request Dec 2, 2024
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Signed-off-by: Andrew Feldman <[email protected]>
sleepwalker2017 pushed a commit to sleepwalker2017/vllm that referenced this pull request Dec 13, 2024
Labels: documentation (Improvements or additions to documentation), ready (ONLY add when PR is ready to merge/full CI is needed)