Update documents
mkht committed Sep 13, 2024
1 parent 2986ee1 commit 073f207
Showing 7 changed files with 81 additions and 49 deletions.
7 changes: 7 additions & 0 deletions CHANGELOG.ja.md
@@ -1,4 +1,11 @@
# Changelog
### 4.4.0
- Adds the `o1-preview` and `o1-mini` models to tab completion.
- Adds the `-MaxCompletionTokens` parameter to `Request-ChatCompletion`.
  The `-MaxTokens` parameter is now deprecated but can still be used.
- `gpt-3.5-turbo-0613` and `gpt-3.5-turbo-16k-0613` were retired on September 13, 2024.
  These models can still be called, but they have been removed from model-name completion.

### 4.3.0
- Adds the `-Include` parameter to `Get-ThreadRunStep`.
- Adds the `-RankerForFileSearch` and `-ScoreThresholdForFileSearch` parameters to `New-Assistant`.
3 changes: 2 additions & 1 deletion CHANGELOG.md
@@ -1,5 +1,6 @@
# Changelog
### Unreleased
### 4.4.0
- Add `o1-preview` and `o1-mini` models to tab completions.
- Add `-MaxCompletionTokens` parameter for `Request-ChatCompletion`.
The `-MaxTokens` parameter is now deprecated.
- `gpt-3.5-turbo-0613` and `gpt-3.5-turbo-16k-0613` are deprecated as of 2024-09-13.
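For illustration, the renamed parameter could be used as in this hedged sketch (assumes an API key is already set, e.g. in `$env:OPENAI_API_KEY`; the model name and limit are arbitrary examples, not part of this commit):

```powershell
# Preferred: -MaxCompletionTokens is an upper bound on the whole completion,
# including visible output tokens and (for o1 models) reasoning tokens.
Request-ChatCompletion -Message 'Hello.' -Model 'gpt-4o-mini' -MaxCompletionTokens 15

# Deprecated but still accepted for backward compatibility:
Request-ChatCompletion -Message 'Hello.' -Model 'gpt-4o-mini' -MaxTokens 15
```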
6 changes: 3 additions & 3 deletions Docs/Enter-ChatGPT.md
@@ -19,7 +19,7 @@ Enter-ChatGPT
[-Temperature <Double>]
[-TopP <Double>]
[-StopSequence <String[]>]
[-MaxTokens <Int32>]
[-MaxCompletionTokens <Int32>]
[-PresencePenalty <Double>]
[-FrequencyPenalty <Double>]
[-TimeoutSec <Int32>]
@@ -99,13 +99,13 @@ Required: False
Position: Named
```

### -MaxTokens
### -MaxCompletionTokens
The maximum number of tokens allowed for the generated answer.
Maximum value depends on model. (`4096` for `gpt-3.5-turbo` or `8192` for `gpt-4`)

```yaml
Type: Int32
Aliases: max_tokens
Aliases: max_completion_tokens
Required: False
Position: Named
```
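As a hedged usage sketch of the renamed parameter (assumes a valid API key is configured, e.g. in `$env:OPENAI_API_KEY`; the token limit is an arbitrary example):

```powershell
# Start an interactive chat session with one of the newly added models,
# capping each generated answer at 256 completion tokens.
Enter-ChatGPT -Model 'o1-mini' -MaxCompletionTokens 256
```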
70 changes: 46 additions & 24 deletions PSOpenAI-Help.xml
@@ -1666,8 +1666,8 @@ So `0.1` means only the tokens comprising the top `10%` probability mass are con
</dev:type>
<dev:defaultValue>None</dev:defaultValue>
</command:parameter>
<command:parameter required="false" variableLength="true" globbing="false" pipelineInput="false" position="named" aliases="max_tokens">
<maml:name>MaxTokens</maml:name>
<command:parameter required="false" variableLength="true" globbing="false" pipelineInput="false" position="named" aliases="max_completion_tokens">
<maml:name>MaxCompletionTokens</maml:name>
<maml:description>
<maml:para>The maximum number of tokens allowed for the generated answer.
Maximum value depends on model. (`4096` for `gpt-3.5-turbo` or `8192` for `gpt-4`)</maml:para>
@@ -1850,8 +1850,8 @@ So `0.1` means only the tokens comprising the top `10%` probability mass are con
</dev:type>
<dev:defaultValue>None</dev:defaultValue>
</command:parameter>
<command:parameter required="false" variableLength="true" globbing="false" pipelineInput="false" position="named" aliases="max_tokens">
<maml:name>MaxTokens</maml:name>
<command:parameter required="false" variableLength="true" globbing="false" pipelineInput="false" position="named" aliases="max_completion_tokens">
<maml:name>MaxCompletionTokens</maml:name>
<maml:description>
<maml:para>The maximum number of tokens allowed for the generated answer.
Maximum value depends on model. (`4096` for `gpt-3.5-turbo` or `8192` for `gpt-4`)</maml:para>
@@ -7357,8 +7357,8 @@ If not specified, it will try to use `$global:OPENAI_ORGANIZATION` or `$env:OPEN
<maml:description>
<maml:para>Specifies the format that the model must output.
- `auto` is default.
- `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which guarantees the model will match your supplied JSON schema.</maml:para>
- `json_object` enables JSON mode, which ensures the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which ensures the model will match your supplied JSON schema.</maml:para>
<maml:para>- `raw_response` returns raw response content from API.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Object</command:parameterValue>
@@ -7651,8 +7651,8 @@ If not specified, it will try to use `$global:OPENAI_ORGANIZATION` or `$env:OPEN
<maml:description>
<maml:para>Specifies the format that the model must output.
- `auto` is default.
- `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which guarantees the model will match your supplied JSON schema.</maml:para>
- `json_object` enables JSON mode, which ensures the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which ensures the model will match your supplied JSON schema.</maml:para>
<maml:para>- `raw_response` returns raw response content from API.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Object</command:parameterValue>
@@ -11585,8 +11585,19 @@ The default value is `1`.</maml:para>
<command:parameter required="false" variableLength="true" globbing="false" pipelineInput="false" position="named" aliases="max_tokens">
<maml:name>MaxTokens</maml:name>
<maml:description>
<maml:para>The maximum number of tokens allowed for the generated answer.
Maximum value depends on model. (`4096` for `gpt-3.5-turbo` or `8192` for `gpt-4`)</maml:para>
<maml:para>This value is now deprecated in favor of MaxCompletionTokens.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Int32</command:parameterValue>
<dev:type>
<maml:name>Int32</maml:name>
<maml:uri />
</dev:type>
<dev:defaultValue>None</dev:defaultValue>
</command:parameter>
<command:parameter required="false" variableLength="true" globbing="false" pipelineInput="false" position="named" aliases="max_completion_tokens">
<maml:name>MaxCompletionTokens</maml:name>
<maml:description>
<maml:para>An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Int32</command:parameterValue>
<dev:type>
@@ -11665,8 +11676,8 @@ ID 23182 maps to "apple" and ID 88847 maps to "banana". Thus, this example incre
<maml:description>
<maml:para>Specifies the format that the model must output.
- `text` is default.
- `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which guarantees the model will match your supplied JSON schema.</maml:para>
- `json_object` enables JSON mode, which ensures the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which ensures the model will match your supplied JSON schema.</maml:para>
<maml:para>- `raw_response` returns raw response content from API.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Object</command:parameterValue>
@@ -12043,8 +12054,19 @@ The default value is `1`.</maml:para>
<command:parameter required="false" variableLength="true" globbing="false" pipelineInput="false" position="named" aliases="max_tokens">
<maml:name>MaxTokens</maml:name>
<maml:description>
<maml:para>The maximum number of tokens allowed for the generated answer.
Maximum value depends on model. (`4096` for `gpt-3.5-turbo` or `8192` for `gpt-4`)</maml:para>
<maml:para>This value is now deprecated in favor of MaxCompletionTokens.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Int32</command:parameterValue>
<dev:type>
<maml:name>Int32</maml:name>
<maml:uri />
</dev:type>
<dev:defaultValue>None</dev:defaultValue>
</command:parameter>
<command:parameter required="false" variableLength="true" globbing="false" pipelineInput="false" position="named" aliases="max_completion_tokens">
<maml:name>MaxCompletionTokens</maml:name>
<maml:description>
<maml:para>An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Int32</command:parameterValue>
<dev:type>
@@ -12123,8 +12145,8 @@ ID 23182 maps to "apple" and ID 88847 maps to "banana". Thus, this example incre
<maml:description>
<maml:para>Specifies the format that the model must output.
- `text` is default.
- `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which guarantees the model will match your supplied JSON schema.</maml:para>
- `json_object` enables JSON mode, which ensures the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which ensures the model will match your supplied JSON schema.</maml:para>
<maml:para>- `raw_response` returns raw response content from API.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Object</command:parameterValue>
@@ -15575,8 +15597,8 @@ We serves
<maml:description>
<maml:para>Specifies the format that the model must output.
- `auto` is default.
- `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which guarantees the model will match your supplied JSON schema.</maml:para>
- `json_object` enables JSON mode, which ensures the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which ensures the model will match your supplied JSON schema.</maml:para>
<maml:para>- `raw_response` returns raw response content from API.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Object</command:parameterValue>
@@ -15881,8 +15903,8 @@ If not specified, it will try to use `$global:OPENAI_ORGANIZATION` or `$env:OPEN
<maml:description>
<maml:para>Specifies the format that the model must output.
- `auto` is default.
- `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which guarantees the model will match your supplied JSON schema.</maml:para>
- `json_object` enables JSON mode, which ensures the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which ensures the model will match your supplied JSON schema.</maml:para>
<maml:para>- `raw_response` returns raw response content from API.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Object</command:parameterValue>
@@ -17544,8 +17566,8 @@ You can create a batch input item by using the Request-ChatCompletion cmdlet wit
<maml:description>
<maml:para>Specifies the format that the model must output.
- `default` only outputs a text message.</maml:para>
<maml:para>- `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which guarantees the model will match your supplied JSON schema.</maml:para>
<maml:para>- `json_object` enables JSON mode, which ensures the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which ensures the model will match your supplied JSON schema.</maml:para>
<maml:para>- `raw_response` returns raw response content from API.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Object</command:parameterValue>
@@ -18013,8 +18035,8 @@ If not specified, it will try to use `$global:OPENAI_ORGANIZATION` or `$env:OPEN
<maml:description>
<maml:para>Specifies the format that the model must output.
- `default` only outputs a text message.</maml:para>
<maml:para>- `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which guarantees the model will match your supplied JSON schema.</maml:para>
<maml:para>- `json_object` enables JSON mode, which ensures the message the model generates is valid JSON.
- `json_schema` enables Structured Outputs which ensures the model will match your supplied JSON schema.</maml:para>
<maml:para>- `raw_response` returns raw response content from API.</maml:para>
</maml:description>
<command:parameterValue required="true" variableLength="false">Object</command:parameterValue>
8 changes: 5 additions & 3 deletions Public/Enter-ChatGPT.ps1
@@ -15,7 +15,9 @@ function Enter-ChatGPT {
'gpt-4-32k',
'gpt-4-32k-0613',
'gpt-4-turbo',
'gpt-4-turbo-2024-04-09'
'gpt-4-turbo-2024-04-09',
'o1-mini',
'o1-preview'
)]
[string][LowerCaseTransformation()]$Model,

@@ -41,8 +43,8 @@

[Parameter()]
[ValidateRange(0, 2147483647)]
[Alias('max_tokens')]
[int]$MaxTokens,
[Alias('max_completion_tokens')]
[int]$MaxCompletionTokens,

[Parameter()]
[ValidateRange(-2.0, 2.0)]
4 changes: 2 additions & 2 deletions Tests/Batch/Batch.E2E.tests.ps1
@@ -19,7 +19,7 @@ Describe 'Batch E2E Test' {
It 'STEP1: Create multiple batch input objects' {
# Create 4 input objects
{ (1..4) | ForEach-Object {
$script:BatchInputs += Request-ChatCompletion -Message 'Hello.' -Model gpt-4o-mini -AsBatch -CustomBatchId ("custom-batchtest-$_") -MaxTokens 15 -ea Stop
$script:BatchInputs += Request-ChatCompletion -Message 'Hello.' -Model gpt-4o-mini -AsBatch -CustomBatchId ("custom-batchtest-$_") -MaxCompletionTokens 15 -ea Stop
} } | Should -Not -Throw
$script:BatchInputs | Should -HaveCount 4
$script:BatchInputs[0].custom_id | Should -Be 'custom-batchtest-1'
@@ -96,7 +96,7 @@ Describe 'Batch E2E Test' {
It 'STEP1: Create multiple batch input objects' {
# Create 2 input objects
{ (1..2) | ForEach-Object {
$script:BatchInputs += Request-ChatCompletion -Message 'Hello.' -Model $script:Model -AsBatch -CustomBatchId ("custom-batchtest-$_") -MaxTokens 15 -ea Stop
$script:BatchInputs += Request-ChatCompletion -Message 'Hello.' -Model $script:Model -AsBatch -CustomBatchId ("custom-batchtest-$_") -MaxCompletionTokens 15 -ea Stop
} } | Should -Not -Throw
$script:BatchInputs | Should -HaveCount 2
$script:BatchInputs[0].custom_id | Should -Be 'custom-batchtest-1'
