
Chore: Support openai o1 model #937

Open · wants to merge 1 commit into base: main

Conversation

@StrongMonkey (Contributor) commented Jan 22, 2025:

The OpenAI o1 model doesn't support streaming, and doesn't support setting the temperature on chat completions. We have to tweak both in order to support the o1 model.

obot-platform/obot#1131
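As a rough illustration of that tweak, here is a minimal sketch; the request struct below is a hypothetical stand-in, not the actual client type used by gptscript:

```go
package main

// chatRequest is a hypothetical stand-in for the real chat completion
// request type; only the fields relevant to the o1 tweak are shown.
type chatRequest struct {
	Model       string
	Stream      bool
	Temperature *float32 // pointer so "unset" can be omitted from JSON
}

// newChatRequest builds a request, skipping the options o1 rejects.
func newChatRequest(model string, temperature float32, o1 bool) chatRequest {
	req := chatRequest{Model: model}
	if !o1 {
		// Non-o1 models keep the existing behavior: stream the
		// response and forward the configured temperature.
		req.Stream = true
		req.Temperature = &temperature
	}
	// For o1, Stream stays false and Temperature stays nil.
	return req
}
```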

@@ -259,8 +259,12 @@ func toMessages(request types.CompletionRequest, compat bool) (result []openai.C
	}

	if len(systemPrompts) > 0 {
		role := types.CompletionMessageRoleTypeSystem
		if useO1Model {
@StrongMonkey (Contributor, Author) commented on this line, Jan 22, 2025:

According to the docs, it is better to use a developer message for the o1 model:

https://platform.openai.com/docs/guides/reasoning#advice-on-prompting
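A minimal sketch of that role switch, with hypothetical string constants standing in for the gptscript types package:

```go
package main

import "fmt"

// Hypothetical role constants standing in for the gptscript types package.
const (
	roleSystem    = "system"
	roleDeveloper = "developer"
)

// systemPromptRole picks the role for system prompts: per the OpenAI
// reasoning docs, o1 models should receive "developer" messages
// instead of "system" messages.
func systemPromptRole(useO1Model bool) string {
	if useO1Model {
		return roleDeveloper
	}
	return roleSystem
}

func main() {
	fmt.Println(systemPromptRole(true))  // developer
	fmt.Println(systemPromptRole(false)) // system
}
```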

@@ -446,6 +455,22 @@ func (c *Client) Call(ctx context.Context, messageRequest types.CompletionReques
	return &result, nil
}

func isO1Model(model string, envs []string) bool {
@StrongMonkey (Contributor, Author) commented on this line:

There are two ways to check whether the o1 model is in use (see the sketch after this list):

  1. Check if the model name is o1. This is the case when used with standalone gptscript.
  2. Check if OPENAI_MODEL_NAME is set. Obot sets this to pass the model name through.
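A hedged sketch of isO1Model following those two checks; the exact matching rules in the actual change may differ:

```go
package main

import "strings"

// isO1Model reports whether the o1 model is in use, per the two checks
// described above. Prefix matching is an assumption here, chosen to
// catch variants like o1-mini; the real check may be exact.
func isO1Model(model string, envs []string) bool {
	// 1. Standalone gptscript: the configured model name itself is o1.
	if strings.HasPrefix(model, "o1") {
		return true
	}
	// 2. Obot: the model name arrives via the OPENAI_MODEL_NAME env var.
	for _, env := range envs {
		if name, ok := strings.CutPrefix(env, "OPENAI_MODEL_NAME="); ok {
			return strings.HasPrefix(name, "o1")
		}
	}
	return false
}
```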

@StrongMonkey (Contributor, Author) commented Jan 23, 2025:

I changed the approach and moved the logic to openai-model-provider. However, we still need a way to dynamically turn off streaming, since o1 doesn't support it. The way to do that is to check whether GPTSCRIPT_INTERNAL_OPENAI_STREAMING exists in envs, so that Obot can set it dynamically.

In plain gptscript, users would be expected to set that variable themselves when using an o1 model, as there is no reliable way to detect this automatically.
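A minimal sketch of that env check, under the assumption that envs holds KEY=value pairs; the parsing details are guesses:

```go
package main

import (
	"strconv"
	"strings"
)

// streamingEnabled decides whether to stream, based on the
// GPTSCRIPT_INTERNAL_OPENAI_STREAMING env var described above.
func streamingEnabled(envs []string) bool {
	for _, env := range envs {
		if v, ok := strings.CutPrefix(env, "GPTSCRIPT_INTERNAL_OPENAI_STREAMING="); ok {
			enabled, err := strconv.ParseBool(v)
			if err != nil {
				// Unparseable values fall back to the default.
				return true
			}
			// Obot would set this to "false" for o1.
			return enabled
		}
	}
	// Unset: keep the default streaming behavior.
	return true
}
```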
