Support Bailing LLM from ALIPAY for #3487 #3543
base: main
Conversation
Please take a look at this pull request. If you have any questions, please feel free to contact us. Thank you very much.
LGTM
Could you help review this PR for Bailing LLM from ALIPAY? If you have any comments, please let us know. Thank you.
In this PR I only changed the file "fastchat/serve/gradio_web_server.py", using the new Context class in place of a List. This change lets the command 'python -m fastchat.serve.gradio_web_server --controller "" --share --register xxxx' work with the code synced in PR #3546 (see the sketch below). Could you help review this PR? If you have any comments, please let me know.
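For context, here is a minimal sketch of the pattern described above: passing a single structured Context object into a handler instead of a bare List. The field names and the handler are illustrative assumptions only; the actual Context class comes from PR #3546 and `fastchat/serve/gradio_web_server.py`.

```python
# Illustrative only: the real Context class lives in fastchat/serve/gradio_web_server.py
# (synced from PR #3546); these fields and the handler are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Context:
    text: str = ""                                   # user input
    models: List[str] = field(default_factory=list)  # models registered via --register

def handle_input(context: Context) -> str:
    # A single structured argument replaces the previous positional List,
    # so new fields can be added without changing every callback signature.
    return f"{len(context.models)} model(s) available, prompt: {context.text!r}"

# Usage:
ctx = Context(text="Hello", models=["bailing-lite"])
print(handle_input(ctx))
```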
(force-pushed from 39a9d30 to c8dda13)
LGTM
(force-pushed from c8dda13 to ff56f18, then to ba8a246, then to e5da119)
change default url
Define the conversation template for Bailing LLM and provide the streaming iterator, including the parameters temperature, top_p, top_k, and max_tokens. The new code defines the Context class and uses it as an input parameter.
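As a rough illustration of the commit message above, the sketch below shows what a conversation template plus the default generation parameters (temperature, top_p, top_k, max_tokens) could look like. Role tags, separators, and default values are assumptions, not the template actually added in this PR.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class BailingConversation:
    # Hypothetical role tags and separator; the real template is defined in the PR.
    system: str = "You are a helpful assistant."
    roles: Tuple[str, str] = ("HUMAN", "ASSISTANT")
    messages: List[Tuple[str, str]] = field(default_factory=list)
    sep: str = "\n"

    def append_message(self, role: str, text: str) -> None:
        self.messages.append((role, text))

    def get_prompt(self) -> str:
        # Flatten the system message and turns into a single prompt string.
        parts = [self.system]
        parts.extend(f"{role}: {text}" for role, text in self.messages)
        parts.append(f"{self.roles[1]}:")
        return self.sep.join(parts)

# Default sampling parameters forwarded to the backend (placeholder values).
GEN_PARAMS = {"temperature": 0.7, "top_p": 1.0, "top_k": -1, "max_tokens": 1024}
```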
(force-pushed from c9a9a46 to 660ea0a)
Why are these changes needed?
This PR adds support for the Bailing LLM. Bailing provides an HTTP endpoint for inference that can be accessed via HTTP POST (a rough sketch of such a call is shown below).
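The snippet below sketches what "accessed via HTTP POST" could look like in practice: it posts a prompt plus sampling parameters to a placeholder endpoint and yields the streamed text. The URL, headers, payload layout, and response schema are assumptions for illustration; the real endpoint and schema are defined in the PR.

```python
import json
import requests

def bailing_api_stream_iter(prompt, temperature=0.7, top_p=1.0, top_k=-1, max_tokens=1024):
    # Placeholder payload layout; the actual Bailing request schema may differ.
    payload = {
        "prompt": prompt,
        "temperature": temperature,
        "top_p": top_p,
        "top_k": top_k,
        "max_tokens": max_tokens,
        "stream": True,
    }
    resp = requests.post(
        "https://example.com/bailing/v1/generate",      # placeholder URL
        headers={"Authorization": "Bearer <API_KEY>"},   # placeholder auth
        json=payload,
        stream=True,
        timeout=60,
    )
    resp.raise_for_status()
    text = ""
    for line in resp.iter_lines(decode_unicode=True):
        if not line:
            continue
        chunk = json.loads(line)            # assumes one JSON object per line
        text += chunk.get("delta", "")
        yield {"text": text, "error_code": 0}
```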
Related issue number (if applicable)
Close #3487
Checks
I've run `format.sh` to lint the changes in this PR.