Chore: Support openai o1 model #937
base: main
Conversation
pkg/openai/client.go
Outdated
```go
@@ -259,8 +259,12 @@ func toMessages(request types.CompletionRequest, compat bool) (result []openai.C
	}

	if len(systemPrompts) > 0 {
		role := types.CompletionMessageRoleTypeSystem
		if useO1Model {
```
According to the docs, it is better to use a developer message for the o1 model.
https://platform.openai.com/docs/guides/reasoning#advice-on-prompting
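The suggestion above could be sketched as follows. The role constants here are illustrative stand-ins, not gptscript's actual `types` values; the point is only that system prompts are sent under the `developer` role when an o1 model is in use.

```go
package main

import "fmt"

// Hypothetical role names; the real constants live in gptscript's types
// package and the OpenAI client library.
const (
	roleSystem    = "system"
	roleDeveloper = "developer"
)

// systemRole picks the role for system prompts: o1 models reject the
// "system" role and expect "developer" instead, per OpenAI's reasoning guide.
func systemRole(useO1Model bool) string {
	if useO1Model {
		return roleDeveloper
	}
	return roleSystem
}

func main() {
	fmt.Println(systemRole(true))  // developer
	fmt.Println(systemRole(false)) // system
}
```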
pkg/openai/client.go
Outdated
```go
@@ -446,6 +455,22 @@ func (c *Client) Call(ctx context.Context, messageRequest types.CompletionReques
	return &result, nil
}

func isO1Model(model string, envs []string) bool {
```
There are two ways to check whether the o1 model is being used:
- Check if the model name is o1. This is the case when used with standalone gptscript.
- Check if `OPENAI_MODEL_NAME` is set. This will be set by Obot to determine the name.
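The two checks above could look roughly like this. This is a sketch, not the PR's actual `isO1Model` body; the `envs []string` shape (entries of the form `KEY=value`) matches the signature shown in the diff, and the `o1` prefix match is an assumption.

```go
package main

import (
	"fmt"
	"strings"
)

// isO1Model reports whether an o1 model is in use, via either of the two
// checks described above: the model name itself (standalone gptscript), or
// the OPENAI_MODEL_NAME entry that Obot sets in the env list.
func isO1Model(model string, envs []string) bool {
	// Case 1: standalone gptscript passes the model name directly.
	if strings.HasPrefix(model, "o1") {
		return true
	}
	// Case 2: Obot sets OPENAI_MODEL_NAME in the env list instead.
	for _, env := range envs {
		if name, ok := strings.CutPrefix(env, "OPENAI_MODEL_NAME="); ok {
			return strings.HasPrefix(name, "o1")
		}
	}
	return false
}

func main() {
	fmt.Println(isO1Model("o1-preview", nil))                          // true
	fmt.Println(isO1Model("gpt-4o", []string{"OPENAI_MODEL_NAME=o1"})) // true
	fmt.Println(isO1Model("gpt-4o", nil))                              // false
}
```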
Signed-off-by: Daishan Peng <[email protected]>
Force-pushed from 062d703 to 77cb909
I changed the approach and moved the logic to openai-model-provider. However, we still need a way to dynamically turn off streaming, since o1 doesn't support it. The way to do that is to check `OPENAI_MODEL_NAME`. In plain gptscript, the user would be expected to set that when using the o1 model, as there is no smart way to do this automatically.
The OpenAI o1 model doesn't support streaming, and doesn't support setting temperature on chat completion. We have to tweak both in order to support the o1 model.
obot-platform/obot#1131
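The tweaks described above could be sketched like this. The `chatRequest` struct and field names are illustrative stand-ins for the OpenAI client's chat completion request, not the library's actual types: before calling the API with an o1 model, streaming is turned off and the temperature setting is dropped.

```go
package main

import "fmt"

// chatRequest stands in for the OpenAI chat completion request; the field
// names here are illustrative, not the client library's.
type chatRequest struct {
	Model       string
	Stream      bool
	Temperature *float64
}

// adjustForO1 applies the two tweaks the PR describes: o1 rejects streaming
// and an explicit temperature, so both are cleared before the call.
func adjustForO1(req *chatRequest) {
	req.Stream = false
	req.Temperature = nil
}

func main() {
	temp := 0.7
	req := chatRequest{Model: "o1", Stream: true, Temperature: &temp}
	adjustForO1(&req)
	fmt.Println(req.Stream, req.Temperature == nil) // false true
}
```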