How to use ChatCompletion? #2

Comments
Hi, @hcffffff! I didn't expect someone to find this repository, and unit tests for this project are still in progress.
Hi @hcffffff! I have a hotfix for ChatCompletion here: 58db2ee. This is a minimal demo:

```python
import openai_manager

print(openai_manager.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        [{"role": "user", "content": "Hello!"}],
        [{"role": "user", "content": "Hello!"},
         {"role": "assistant", "content": "Hello there!"},
         {"role": "user", "content": "Who are you?"}],
    ],
))
```

A reasonable output would be a list of responses, one per conversation. Hope this helps!
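For reference, here is a minimal sketch of consuming that batched result. It assumes each returned item mirrors the official OpenAI chat response dict (i.e. the reply text lives at choices[0]["message"]["content"]); that format is an assumption, not something confirmed in this thread:

```python
import openai_manager

# Hypothetical consumption sketch: assumes create() returns one response per
# input conversation, each shaped like the official OpenAI chat response dict.
# Adjust the indexing if openai_manager wraps responses differently.
responses = openai_manager.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        [{"role": "user", "content": "Hello!"}],
        [{"role": "user", "content": "Who are you?"}],
    ],
)

for i, resp in enumerate(responses):
    print(f"conversation {i}: {resp['choices'][0]['message']['content']}")
```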
Also, please pull from this repo to get the update, as I plan to refactor the codebase this weekend. No update will be pushed to PyPI until the refactoring is finished.
Thanks again, that really helps a lot! Another question: how do I set arguments like temperature?
Wow, that's a problem I have considered before. I assume you want to attach different model_kwargs to different requests, like temp=1 for the first request and temp=0 for the others. I would like to add this to my TODO list and design a dedicated interface for it.
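Until such an interface exists, one possible workaround (a sketch only, assuming create() simply forwards extra keyword arguments such as temperature to the API, which this thread does not confirm) is to group requests that share the same sampling settings into separate calls:

```python
import openai_manager

# Workaround sketch: batch requests by their shared sampling settings.
# Assumes create() forwards kwargs like temperature to the underlying API.
hot = openai_manager.ChatCompletion.create(
    model='gpt-3.5-turbo',
    temperature=1.0,
    messages=[[{"role": "user", "content": "Hello!"}]],
)
cold = openai_manager.ChatCompletion.create(
    model='gpt-3.5-turbo',
    temperature=0.0,
    messages=[
        [{"role": "user", "content": "Who are you?"}],
        [{"role": "user", "content": "Summarize our chat so far."}],
    ],
)
```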
Hello, you can check my Betsy repository; maybe it will help you.
Thanks, I really appreciate your work!
Are there any examples of using the openai_manager.ChatCompletion.create() function? There seems to be a problem when using the create function. In detail, ChatCompletion uses messages as the prompt, and that conflicts with the code here. So could you please show me an example of openai_manager.ChatCompletion usage? Thanks a lot.