All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog and this project adheres to Semantic Versioning.
- Add support for Assistants API v2 and Vector Stores endpoints (#420)
- Add vector store endpoints documentation (#420)
- Add support for Assistants API v2 and Vector Stores endpoint (#405)
- Support for usage stream option on chat endpoint (#398)
- Missing output parameter on streamed code interpreter output (#406)
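A rough sketch of how the usage stream option (#398) might be used, assuming the `createStreamed()` chat method and a nullable `usage` property on streamed responses (the option itself is OpenAI's `stream_options.include_usage` flag):

```php
<?php

require 'vendor/autoload.php';

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

// Request streamed chat completions and ask the API to append a final
// usage chunk via OpenAI's `stream_options.include_usage` flag.
$stream = $client->chat()->createStreamed([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
    'stream_options' => ['include_usage' => true],
]);

foreach ($stream as $response) {
    foreach ($response->choices as $choice) {
        echo $choice->delta->content ?? '';
    }

    // Assumption: streamed responses expose a nullable `usage` object
    // that is only populated on the final chunk.
    if ($response->usage !== null) {
        echo PHP_EOL . 'Total tokens: ' . $response->usage->totalTokens . PHP_EOL;
    }
}
```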
- Add support for Batches endpoint (#403)
- Assistants: add streaming support (#367)
- Audio: add support for timestamp_granularities (#374)
- Fix default fake data for meta information (#332)
- ThreadRun: Add "usage" property to the response (#330)
- ThreadRunStep: "content" missing in response if result has not been submitted (#319)
- Files: "bytes" in retrieve response may be null (#325)
- Add support for Assistants and Threads endpoint (#271)
- Add stream support for Text To Speech (#235)
- Add test resources for Assistants and Threads (#279)
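The Assistants and Threads entries above are the headline feature of this group; a minimal sketch of how they might be used, assuming `assistants()` and `threads()` resources with a `createAndRun()` helper (parameter names follow the OpenAI Assistants API):

```php
<?php

require 'vendor/autoload.php';

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

// Create an assistant, then start a thread and run it in one call.
$assistant = $client->assistants()->create([
    'model' => 'gpt-4',
    'name' => 'Math Tutor',
    'instructions' => 'Answer math questions briefly.',
]);

$run = $client->threads()->createAndRun([
    'assistant_id' => $assistant->id,
    'thread' => [
        'messages' => [
            ['role' => 'user', 'content' => 'What is 2 + 2?'],
        ],
    ],
]);

echo $run->status . PHP_EOL; // e.g. "queued"
```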
- Remove thread messages delete endpoint (#309)
- Handle x-request-id in meta information (#283)
- Handle meta information from azure headers (#307)
- Add missing default system_fingerprint to chat create response fixture (#308)
- Convert headers to lower case before creating meta information (#306)
- Remove threads list endpoint from README.md (#275)
- Clarify assistants files docs (#278)
- Fix image creation example (#297)
- Fix outdated links (#299)
- Add troubleshooting section and explain how to configure HTTP client timeouts
- Add support for Assistants and Threads endpoint (#271)
- Remove `list()` from Threads resource
- `instruction` on ThreadRunResponse may be nullable
- Add RetrieveJobResponseError and batch_size, learning_rate_multiplier parameters on RetrieveJobResponseHyperparameters for fine-tuning endpoint (#255)
- Add revised_prompt property to CreateResponseData on the image create endpoint (#257)
- Fix model in one of the examples
- Add support for Assistants and Threads endpoint (#243)
- Add support for GPT-4 vision on the chat completion endpoint (#241)
- Add support for tool calls on the chat completion endpoint (#239)
- Add support for the audio speech endpoint (#237)
- Update Models endpoint response object to the latest API changes (#235)
- Update FineTuning job id names (#230)
- Use Chat resource as the primary example
- `nEpochs` on RetrieveJobResponseHyperparameters may be a string
- `processingMs` on MetaInformationOpenAI may be null (#218)
- Add "has_more" to fine-tuning jobs and events list responses (#206)
- Add parameters to the fine-tuning jobs list request to filter the results (#206)
- `error_code` may be an int
- Missing openai-version header from Azure
- Typo in class name MetaInformationOpenAI
- Add support for the fine-tuning API (#199)
- Provide access to header / meta information for all responses (#195)
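A sketch of what accessing the header/meta information added in #195 could look like; the property names below (`requestId`, `openai->processingMs`, `requestLimit`) are assumptions derived from the headers mentioned elsewhere in this changelog:

```php
<?php

require 'vendor/autoload.php';

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$response = $client->models()->list();

// Every response is assumed to expose a meta() accessor for the HTTP headers.
$meta = $response->meta();

echo $meta->requestId . PHP_EOL;               // x-request-id
echo $meta->openai->processingMs . PHP_EOL;    // openai-processing-ms
echo $meta->requestLimit->remaining . PHP_EOL; // x-ratelimit-remaining-requests
```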
- Mark `FineTunes` resource as deprecated
- Mark `Edits` resource as deprecated
- Add missing moderation enums (#178)
- Chat completion create response with function calling on Azure (#184)
- Breaking change on OpenAI API regarding "transient" field in Audio translations (#168)
- Docs: fix OpenAI URL
- Breaking change on OpenAI API regarding "transient" field in Audio (#160)
- Error handling: use error code as exception message if error message is empty (#150)
- Error handling: Catch error in stream responses (#150)
- Error handling: Handle errors where message is an array (#150)
- Chat/CreateResponse faking with function_call (#145)
- Add support for function calling in the Chat Completions API (#144)
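A sketch of the function calling support from #144, assuming request parameters are passed through to the Chat Completions API unchanged and the response message exposes a nullable `functionCall` object:

```php
<?php

require 'vendor/autoload.php';

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'What is the weather like in Vienna?'],
    ],
    // OpenAI's (legacy) `functions` parameter for the Chat Completions API.
    'functions' => [
        [
            'name' => 'get_current_weather',
            'description' => 'Get the current weather for a location',
            'parameters' => [
                'type' => 'object',
                'properties' => [
                    'location' => ['type' => 'string'],
                ],
                'required' => ['location'],
            ],
        ],
    ],
]);

$call = $response->choices[0]->message->functionCall;

if ($call !== null) {
    echo $call->name . ': ' . $call->arguments . PHP_EOL; // JSON-encoded arguments
}
```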
- Exception handling for server error with non-default content type header (#134)
- Faking embedding responses for multidimensional vectors (#131)
- Add support for psr/http-message ^2.0 (#130)
- Fix: stream broken after checking for errors (regression of #113)
- Support for HTTP base URI (#106)
- Unify exception handling between HTTP client implementations (#113)
- Fix `toArray()` on `CreateStreamedResponseDelta` to match the original API response (#108)
- Explain usage for "OpenAI on Azure" (#109)
- Testing support (#71)
- Trim ApiKey before sending it to the API (#101)
- Nullable fields on error response (#102)
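The "Testing support (#71)" entry above covers faking responses; a sketch of how that might look, assuming a `ClientFake` test double, `::fake()` constructors on response classes and an `assertSent()` assertion:

```php
<?php

require 'vendor/autoload.php';

use OpenAI\Resources\Completions;
use OpenAI\Responses\Completions\CreateResponse;
use OpenAI\Testing\ClientFake;

// Queue a canned response; the fake client records every request it receives.
$client = new ClientFake([
    CreateResponse::fake([
        'choices' => [
            ['text' => 'a fake completion'],
        ],
    ]),
]);

$response = $client->completions()->create([
    'model' => 'gpt-3.5-turbo-instruct',
    'prompt' => 'PHP is',
]);

echo $response->choices[0]->text . PHP_EOL; // "a fake completion"

// Assert the request that was sent to the fake client.
$client->assertSent(Completions::class, function (string $method, array $parameters): bool {
    return $method === 'create' && $parameters['prompt'] === 'PHP is';
});
```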
- Stream support (#84)
- Removed dependency on `guzzlehttp/guzzle` and use PSR-18 client discovery instead (#75)
- Add Client factory which allows for a custom HTTP client
- Client factory further accepts custom HTTP headers, query parameters and API URI
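A sketch of the factory-based configuration described in the two entries above, assuming `OpenAI::factory()` with fluent `with*()` methods and any PSR-18 HTTP client:

```php
<?php

require 'vendor/autoload.php';

$client = OpenAI::factory()
    ->withApiKey(getenv('OPENAI_API_KEY'))
    ->withBaseUri('openai.example.com/v1')     // hypothetical custom API URI / proxy
    ->withHttpClient(new \GuzzleHttp\Client()) // any PSR-18 client can be plugged in
    ->withHttpHeader('X-My-Header', 'value')
    ->withQueryParam('my-param', 'value')
    ->make();
```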
- `status_details` can be a string in file responses. Affects Files and FineTunes resources (#68)
- Add `Audio` resource to turn audio into text powered by `whisper-1` (#62)
- Add `Chat` resource aka ChatGPT powered by `gpt-3.5-turbo` (#60)
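A minimal sketch of the `Chat` resource added in #60, assuming a `chat()->create()` call that mirrors the Chat Completions API:

```php
<?php

require 'vendor/autoload.php';

$client = OpenAI::client(getenv('OPENAI_API_KEY'));

$response = $client->chat()->create([
    'model' => 'gpt-3.5-turbo',
    'messages' => [
        ['role' => 'user', 'content' => 'Hello!'],
    ],
]);

echo $response->choices[0]->message->content . PHP_EOL;
```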
- Missing `events` on FineTunesRetrieveResponse (#41)
- `OpenAI::client()` first argument changed from `apiToken` to `apiKey` (#25)
- Getting contents from Guzzle's response causing issues with middleware (#33)
- FineTunes create response: `batch_size`, `learning_rate` and `fine_tuned_model` are nullable (#16)
- File responses: add missing fields `status` and `status_details`
- Add `images()` resource to interact with DALL-E
- Parse completions create response with logprobs correctly
- First version