.Net: Improve docs for OpenAI response format property (#9801)
### Motivation and Context

Closes #7171 

### Description

Adds remarks to the OpenAI `ResponseFormat` execution setting documentation, covering Structured Outputs (`json_schema`), JSON mode (`json_object`), and the caveats around using JSON mode.

### Contribution Checklist

<!-- Before submitting this PR, please make sure: -->

- [ ] The code builds clean without any errors or warnings
- [ ] The PR follows the [SK Contribution
Guidelines](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md)
and the [pre-submission formatting
script](https://github.com/microsoft/semantic-kernel/blob/main/CONTRIBUTING.md#development-scripts)
raises no violations
- [ ] All unit tests pass, and I have added new tests where possible
- [ ] I didn't break anyone 😄

---------

Co-authored-by: Dmytro Struk <[email protected]>
markwallace-microsoft and dmytrostruk authored Nov 26, 2024
1 parent bead9ef commit 370c89a
Showing 1 changed file with 4 additions and 0 deletions.
@@ -143,6 +143,10 @@ public long? Seed
/// Gets or sets the response format to use for the completion.
/// </summary>
/// <remarks>
/// An object specifying the format that the model must output.
/// Setting to <c>{ "type": "json_schema", "json_schema": { ...} }</c> enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
/// Setting to <c>{ "type": "json_object" }</c> enables JSON mode, which ensures the message the model generates is valid JSON.
/// Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
/// Possible values are:
/// <para>- <see cref="string"/> values: <c>"json_object"</c>, <c>"text"</c>;</para>
/// <para>- <see cref="ChatResponseFormat"/> object;</para>
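
For context, here is a minimal sketch (not part of the commit) of how the documented string value might be used from Semantic Kernel. The model id, API key, and prompt are assumptions for illustration.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Assumed setup: the model id and API key below are placeholders, not from the commit.
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "<your-api-key>")
    .Build();

// "json_object" is one of the string values listed in the remarks; it enables JSON mode.
var settings = new OpenAIPromptExecutionSettings
{
    ResponseFormat = "json_object"
};

// Per the remarks, JSON mode also requires asking for JSON in the prompt itself;
// otherwise the model may stream whitespace until it hits the token limit.
var result = await kernel.InvokePromptAsync(
    "Return three primary colors as a JSON object with a \"colors\" array.",
    new KernelArguments(settings));

Console.WriteLine(result);
```

As the remarks note, `ResponseFormat` also accepts a `ChatResponseFormat` instance, which is the form the `json_schema` Structured Outputs setting described above would typically take instead of a plain string value.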
