[ModelRunner] Support embedding inputs #916
Conversation
@wangxiyuan this is ready for review
it's good to add a test as well
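A rough sketch of what such a test could look like (the model name, the enable_prompt_embeds flag, and the assertions are illustrative assumptions, not the test actually added in this PR):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from vllm import LLM, SamplingParams

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed small model for a fast CI run


def test_generate_with_prompt_embeds():
    # Derive embeddings from the HF embedding table so they match the model.
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    hf_model = AutoModelForCausalLM.from_pretrained(MODEL)
    token_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids
    with torch.no_grad():
        prompt_embeds = hf_model.get_input_embeddings()(token_ids).squeeze(0)

    # enable_prompt_embeds may or may not be needed depending on the version.
    llm = LLM(model=MODEL, enable_prompt_embeds=True)
    params = SamplingParams(temperature=0.0, max_tokens=8)
    outputs = llm.generate({"prompt_embeds": prompt_embeds}, params)

    # The embedding path should produce a non-empty completion; a stricter
    # test could compare against the text-prompt output under greedy decoding.
    assert len(outputs) == 1
    assert len(outputs[0].outputs[0].token_ids) > 0
```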
@@ -0,0 +1,83 @@
import torch
Nice example
This pull request has conflicts, please resolve those before we can evaluate the pull request.
Need to support V1
Signed-off-by: wangli <[email protected]>
What this PR does / why we need it?
Support passing prompt embeddings as model inputs, e.g.
llm.generate({"prompt_embeds": input_embeds}, sampling_params)
and add a prompt_embeds example to examples (a minimal usage sketch follows below).
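A minimal usage sketch of the call above, assuming the embeddings are produced with a Hugging Face model matching the served model (the model name, the enable_prompt_embeds flag, and the way input_embeds is built are illustrative assumptions, not the exact example script in this PR):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from vllm import LLM, SamplingParams

# Assumed model name; any causal LM whose hidden size matches the served model works.
MODEL = "Qwen/Qwen2.5-0.5B-Instruct"

# Build prompt embeddings from the HF embedding table (one illustrative way).
tokenizer = AutoTokenizer.from_pretrained(MODEL)
hf_model = AutoModelForCausalLM.from_pretrained(MODEL)
token_ids = tokenizer("Hello, my name is", return_tensors="pt").input_ids
with torch.no_grad():
    # Shape: (seq_len, hidden_size) after dropping the batch dimension.
    input_embeds = hf_model.get_input_embeddings()(token_ids).squeeze(0)

# Pass the embeddings instead of a text prompt, as described above.
# enable_prompt_embeds may or may not be required depending on the version.
llm = LLM(model=MODEL, enable_prompt_embeds=True)
sampling_params = SamplingParams(temperature=0.0, max_tokens=32)
outputs = llm.generate({"prompt_embeds": input_embeds}, sampling_params)
print(outputs[0].outputs[0].text)
```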
Does this PR introduce any user-facing change?
How was this patch tested?
CI passed with newly added and existing tests.
I have also tested with the example script in this PR, and the output looks good.