
[V1] Fix torch profiling for offline inference #11125

Merged
merged 3 commits into vllm-project:main on Dec 12, 2024

Conversation

@ywang96 ywang96 (Member) commented Dec 12, 2024

Tested with VLLM_USE_V1=1 python3 examples/offline_inference_with_profiler.py. Without this fix, running the same script will error out with

RuntimeWarning: coroutine 'InprocClient.profile' was never awaited
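
For context, this warning comes from calling an async client method from synchronous code without awaiting it. The snippet below is a generic illustration of that failure mode and the usual fix (a plain synchronous method, or explicitly running the coroutine to completion); it is not vLLM's actual InprocClient code.

```python
# Generic illustration of "coroutine ... was never awaited"
# (not vLLM's actual client code).
import asyncio


class AsyncProfiler:
    async def profile(self, is_start: bool) -> None:
        print(f"async profile(is_start={is_start})")


class SyncProfiler:
    def profile(self, is_start: bool) -> None:
        print(f"sync profile(is_start={is_start})")


def main() -> None:
    # Bug: the coroutine object is created but never awaited, so its body
    # never runs and Python emits the RuntimeWarning when it is collected.
    AsyncProfiler().profile(True)

    # Fix option 1: expose a plain synchronous method for synchronous callers.
    SyncProfiler().profile(True)

    # Fix option 2: explicitly drive the coroutine to completion.
    asyncio.run(AsyncProfiler().profile(True))


if __name__ == "__main__":
    main()
```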

Edit:
Also fixed for VLLM_USE_V1=1 VLLM_ENABLE_V1_MULTIPROCESSING=1 python3 examples/offline_inference_with_profiler.py. Two notable changes are:

  • We need to wrap the existing code in if __name__ == "__main__": to make it work with MP/SyncMPClient.
  • A sleep buffer was added so that the main process doesn't terminate before the background process finishes writing the profiling output. (This is somewhat hacky, but I don't know of a better way to do it; see the sketch after this list.)
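
For reference, here is a minimal sketch of the post-fix shape of examples/offline_inference_with_profiler.py (illustrative, not the exact diff). It assumes the example's usual LLM.start_profile()/stop_profile() flow with VLLM_TORCH_PROFILER_DIR set; the prompts and the sleep duration are placeholders.

```python
# Sketch of the example after the two changes above: the script body sits
# under a __main__ guard, and a sleep buffer runs before exit.
# Assumes VLLM_TORCH_PROFILER_DIR is set so torch profiling is enabled.
import time

from vllm import LLM, SamplingParams

if __name__ == "__main__":
    # Everything lives under the __main__ guard so the script also works
    # when V1 multiprocessing (SyncMPClient) spawns a background process.
    prompts = ["Hello, my name is", "The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

    llm = LLM(model="facebook/opt-125m", tensor_parallel_size=1)

    llm.start_profile()
    outputs = llm.generate(prompts, sampling_params)
    llm.stop_profile()

    for output in outputs:
        print(output.prompt, output.outputs[0].text)

    # Buffer so the main process doesn't exit before the background process
    # finishes writing the profiler trace (the hack mentioned above;
    # the duration is illustrative).
    time.sleep(10)
```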

Signed-off-by: Roger Wang <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of CI tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@ywang96 ywang96 (Member, Author) commented Dec 12, 2024

Hello @Abatom! It looks like you added support for profiling with V1 in #10564. Did you test it with all three clients (InprocClient, SyncMPClient, AsyncMPClient)?

@ywang96 ywang96 marked this pull request as ready for review December 12, 2024 06:59
@ywang96 ywang96 changed the title [V1] Fix profiling with InprocClient [V1] Fix profiling for offline inference Dec 12, 2024
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
@ywang96 ywang96 changed the title [V1] Fix profiling for offline inference [V1] Fix torch profiling for offline inference Dec 12, 2024
@ywang96 ywang96 requested a review from tlrmchlsmth December 12, 2024 08:20
@Abatom Abatom (Contributor) commented Dec 12, 2024

> Hello @Abatom! It looks like you added support for profiling with V1 in #10564. Did you test it with all three clients (InprocClient, SyncMPClient, AsyncMPClient)?

Sorry, I didn't test everything.

@tlrmchlsmth tlrmchlsmth (Collaborator) left a comment


Thanks for fixing this!

@@ -15,19 +16,25 @@
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# Create an LLM.
llm = LLM(model="facebook/opt-125m", tensor_parallel_size=1)
if __name__ == "__main__":
Collaborator comment on the diff hunk above:

Note that you shouldn't need the if __name__ == "__main__": once #11074 lands

@tlrmchlsmth tlrmchlsmth added the ready ONLY add when PR is ready to merge/full CI is needed label Dec 12, 2024
@tlrmchlsmth tlrmchlsmth enabled auto-merge (squash) December 12, 2024 14:13
@tlrmchlsmth tlrmchlsmth merged commit 4816d20 into vllm-project:main Dec 12, 2024
67 checks passed
sleepwalker2017 pushed a commit to sleepwalker2017/vllm that referenced this pull request Dec 13, 2024
BKitor pushed a commit to BKitor/vllm that referenced this pull request Dec 30, 2024
Labels: ready (ONLY add when PR is ready to merge/full CI is needed)
4 participants