
'messages[0].role' does not support 'system' with this model. for OpenAI o1-mini / o1-preview #762

Open
yurhasko opened this issue Dec 13, 2024 · 4 comments


@yurhasko

None of the o1-* models support the 'system' role: https://stackoverflow.com/questions/78981559/why-the-new-openai-model-o1-preview-doesnt-support-system-role

So any attempt to work with o1-mini / o1-preview fails with the following error:

[screenshot: the error response from OpenAI]
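
For context, a minimal sketch of a request that triggers this error, assuming the official `openai` Node SDK (the model name and prompt are illustrative):

```ts
// Minimal reproduction, assuming the official `openai` Node SDK.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// o1-mini / o1-preview reject the "system" role, so this call fails with:
// 400: 'messages[0].role' does not support 'system' with this model.
await client.chat.completions.create({
  model: "o1-mini",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Hello!" },
  ],
});
```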
@Amnish04
Collaborator

Amnish04 commented Dec 13, 2024

@yurhasko I can't think of an ideal solution for this in code, as this is a model issue.

You could try configuring the chat to use only supported parameters.

The following worked for me:

  1. Delete the system message in the chat.
  2. Set the temperature to `1` in settings; that is the `o1` default.

Result:
[screenshot: a successful o1 response]
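
For reference, a sketch of what the adjusted request looks like after those two steps, again assuming the `openai` Node SDK:

```ts
import OpenAI from "openai";

const client = new OpenAI();

// No system message, and temperature pinned to 1 (the o1 default):
await client.chat.completions.create({
  model: "o1-mini",
  temperature: 1,
  messages: [{ role: "user", content: "Hello!" }],
});
```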

@humphd @tarasglek Please correct me if there's a better way.

@yurhasko
Author

yurhasko commented Dec 13, 2024

Hmm, ok, thanks, it works now. I believe it's worth fixing in one of the following ways:

  1. adding a condition that always sets the temperature to 1 under the hood when an o1-* model is used, overriding the user setting (see the sketch after this list)
  2. allowing different temperature values to be assigned to different models, so that the user can still have 0 by default and 1 for o1-* (the same applies to the system prompt)
  3. at least showing some sort of hint when the user switches to o1-* to inform them about the current o1-* limitations, like:
    o1 models currently have some limitations. Switch the temperature to 1 and delete the system prompt to use o1.
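
For option 1, a rough sketch of what that condition could look like; `ChatOptions` and `normalizeChatOptions` are hypothetical names, not anything from the codebase:

```ts
interface ChatOptions {
  model: string;
  temperature: number;
  systemPrompt?: string;
}

// Hypothetical helper: force o1-compatible parameters under the hood,
// overriding the user's settings only when an o1-* model is selected.
function normalizeChatOptions(opts: ChatOptions): ChatOptions {
  if (opts.model.startsWith("o1-")) {
    return { ...opts, temperature: 1, systemPrompt: undefined };
  }
  return opts;
}
```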

@humphd
Collaborator

humphd commented Dec 14, 2024

Doing provider-level (and also model-level) param overrides is a good idea we could explore.

However, I'm not sure about the chat completions change, since system messages are part of the spec. What if you use o1 via OpenRouter.ai instead of OpenAI? Do they solve this for you?

Also, out of interest, what are your costs like with o1 for this?

Finally, I notice we aren't using the right icon for o1.

@tarasglek
Owner

tarasglek commented Dec 17, 2024

I think we need a way to toggle the type of a message while editing it. I've seen this with llama image models too. I'd prefer not to do that automagically... except maybe as a suggestion while handling the error: "Would you like to change your system message to a user message?"
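
A sketch of that suggestion flow; the `Message` type and `handleRoleError` helper are hypothetical:

```ts
type Message = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical handler: when the provider rejects the "system" role,
// offer to demote the system message to a user message and retry.
async function handleRoleError(messages: Message[], err: Error): Promise<Message[]> {
  if (/does not support 'system'/.test(err.message)) {
    const ok = window.confirm(
      "Would you like to change your system message to a user message?"
    );
    if (ok) {
      return messages.map((m) =>
        m.role === "system" ? { ...m, role: "user" as const } : m
      );
    }
  }
  throw err;
}
```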

Regarding temp, that we can probably fix automatically. There are models that are picky about temp settings... we can easily pattern match on temperature errors and set it to a higher value (see the sketch below).
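
A sketch of that pattern match; `ChatRequest` and `sendCompletion` stand in for whatever the app actually uses to call the provider, and the regex is a guess at the shape of the error message:

```ts
type ChatRequest = { model: string; temperature: number; messages: unknown[] };

// Stand-in for the app's real completion call.
declare function sendCompletion(req: ChatRequest): Promise<unknown>;

// If the provider complains about the temperature value, retry once with
// the default of 1 instead of surfacing the error to the user.
async function completeWithTempRetry(req: ChatRequest): Promise<unknown> {
  try {
    return await sendCompletion(req);
  } catch (err) {
    if (err instanceof Error && /temperature/i.test(err.message)) {
      return sendCompletion({ ...req, temperature: 1 });
    }
    throw err;
  }
}
```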

I think per-provider/per-model temp settings make sense in the longer term. In general, it would be cool if we had per-provider memory for the last model used; we could store the last temp used there too.
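
And a sketch of that per-provider memory, assuming `localStorage` persistence (all names hypothetical):

```ts
interface ProviderMemory {
  lastModel?: string;
  lastTemperature?: number;
}

// Remember the last model and temperature used with each provider.
const memory: Record<string, ProviderMemory> = JSON.parse(
  localStorage.getItem("provider-memory") ?? "{}"
);

function rememberUsage(provider: string, model: string, temperature: number): void {
  memory[provider] = { lastModel: model, lastTemperature: temperature };
  localStorage.setItem("provider-memory", JSON.stringify(memory));
}
```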
