release: 1.99.0 #2504

Merged · 5 commits · Aug 5, 2025
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "1.98.0"
+  ".": "1.99.0"
 }
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
 configured_endpoints: 111
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-721e6ccaa72205ee14c71f8163129920464fb814b95d3df9567a9476bbd9b7fb.yml
-openapi_spec_hash: 2115413a21df8b5bf9e4552a74df4312
-config_hash: 9606bb315a193bfd8da0459040143242
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-d6a16b25b969c3e5382e7d413de15bf83d5f7534d5c3ecce64d3a7e847418f9e.yml
+openapi_spec_hash: 0c0bcf4aee9ca2a948dd14b890dfe728
+config_hash: aeff9289bd7f8c8482e4d738c3c2fde1
19 changes: 19 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,24 @@
 # Changelog
 
+## 1.99.0 (2025-08-05)
+
+Full Changelog: [v1.98.0...v1.99.0](https://github.com/openai/openai-python/compare/v1.98.0...v1.99.0)
+
+### Features
+
+* **api:** manual updates ([d4aa726](https://github.com/openai/openai-python/commit/d4aa72602bf489ef270154b881b3967d497d4220))
+* **client:** support file upload requests ([0772e6e](https://github.com/openai/openai-python/commit/0772e6ed8310e15539610b003dd73f72f474ec0c))
+
+
+### Bug Fixes
+
+* add missing prompt_cache_key & safety_identifier params ([00b49ae](https://github.com/openai/openai-python/commit/00b49ae8d44ea396ac0536fc3ce4658fc669e2f5))
+
+
+### Chores
+
+* **internal:** fix ruff target version ([aa6b252](https://github.com/openai/openai-python/commit/aa6b252ae0f25f195dede15755e05dd2f542f42d))
+
 ## 1.98.0 (2025-07-30)
 
 Full Changelog: [v1.97.2...v1.98.0](https://github.com/openai/openai-python/compare/v1.97.2...v1.98.0)
4 changes: 2 additions & 2 deletions api.md
@@ -792,12 +792,12 @@ from openai.types.responses import (
     ResponsePrompt,
     ResponseQueuedEvent,
     ResponseReasoningItem,
-    ResponseReasoningSummaryDeltaEvent,
-    ResponseReasoningSummaryDoneEvent,
     ResponseReasoningSummaryPartAddedEvent,
     ResponseReasoningSummaryPartDoneEvent,
     ResponseReasoningSummaryTextDeltaEvent,
     ResponseReasoningSummaryTextDoneEvent,
+    ResponseReasoningTextDeltaEvent,
+    ResponseReasoningTextDoneEvent,
     ResponseRefusalDeltaEvent,
     ResponseRefusalDoneEvent,
     ResponseStatus,
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "openai"
-version = "1.98.0"
+version = "1.99.0"
 description = "The official Python library for the openai API"
 dynamic = ["readme"]
 license = "Apache-2.0"
@@ -177,7 +177,7 @@ reportPrivateUsage = false
 [tool.ruff]
 line-length = 120
 output-format = "grouped"
-target-version = "py37"
+target-version = "py38"
 
 [tool.ruff.format]
 docstring-code-format = true
5 changes: 4 additions & 1 deletion src/openai/_base_client.py
@@ -534,7 +534,10 @@ def _build_request(
         is_body_allowed = options.method.lower() != "get"
 
         if is_body_allowed:
-            kwargs["json"] = json_data if is_given(json_data) else None
+            if isinstance(json_data, bytes):
+                kwargs["content"] = json_data
+            else:
+                kwargs["json"] = json_data if is_given(json_data) else None
             kwargs["files"] = files
         else:
             headers.pop("Content-Type", None)
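This is the change behind the "support file upload requests" feature entry: a pre-serialized `bytes` body is now handed to httpx as raw `content` instead of being re-encoded via `json`. A minimal sketch of the distinction using httpx directly (the endpoint URL is a placeholder, not part of the SDK):

```python
import httpx

# httpx's `json=` kwarg runs the value through json.dumps, which would
# fail on (or double-encode) an already-serialized bytes payload;
# `content=` sends the bytes through untouched.
payload = b'{"already": "serialized"}'

with httpx.Client() as client:
    request = client.build_request(
        "POST",
        "https://example.invalid/upload",  # placeholder endpoint
        content=payload,  # sent as-is; no JSON re-encoding
        headers={"Content-Type": "application/json"},
    )
    assert request.read() == payload
```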
8 changes: 4 additions & 4 deletions src/openai/_files.py
@@ -69,12 +69,12 @@ def _transform_file(file: FileTypes) -> HttpxFileTypes:
         return file
 
     if is_tuple_t(file):
-        return (file[0], _read_file_content(file[1]), *file[2:])
+        return (file[0], read_file_content(file[1]), *file[2:])
 
     raise TypeError(f"Expected file types input to be a FileContent type or to be a tuple")
 
 
-def _read_file_content(file: FileContent) -> HttpxFileContent:
+def read_file_content(file: FileContent) -> HttpxFileContent:
     if isinstance(file, os.PathLike):
         return pathlib.Path(file).read_bytes()
     return file
@@ -111,12 +111,12 @@ async def _async_transform_file(file: FileTypes) -> HttpxFileTypes:
         return file
 
     if is_tuple_t(file):
-        return (file[0], await _async_read_file_content(file[1]), *file[2:])
+        return (file[0], await async_read_file_content(file[1]), *file[2:])
 
     raise TypeError(f"Expected file types input to be a FileContent type or to be a tuple")
 
 
-async def _async_read_file_content(file: FileContent) -> HttpxFileContent:
+async def async_read_file_content(file: FileContent) -> HttpxFileContent:
     if isinstance(file, os.PathLike):
         return await anyio.Path(file).read_bytes()
 
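The underscore prefix is dropped so these helpers can be reused by other modules. A small usage sketch, assuming the internal `openai._files` import path (private modules can move between releases) and a hypothetical `audio.mp3` on disk:

```python
import pathlib

from openai._files import read_file_content

# For an os.PathLike input the helper eagerly reads the file's bytes;
# any other FileContent value (bytes, file object) passes through as-is.
data = read_file_content(pathlib.Path("audio.mp3"))  # -> bytes from disk
raw = read_file_content(b"inline bytes")             # -> returned unchanged
```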
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
 # File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
 
 __title__ = "openai"
-__version__ = "1.98.0"  # x-release-please-version
+__version__ = "1.99.0"  # x-release-please-version
16 changes: 16 additions & 0 deletions src/openai/resources/chat/completions/completions.py
@@ -101,7 +101,9 @@ def parse(
         parallel_tool_calls: bool | NotGiven = NOT_GIVEN,
         prediction: Optional[ChatCompletionPredictionContentParam] | NotGiven = NOT_GIVEN,
         presence_penalty: Optional[float] | NotGiven = NOT_GIVEN,
+        prompt_cache_key: str | NotGiven = NOT_GIVEN,
         reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
+        safety_identifier: str | NotGiven = NOT_GIVEN,
         seed: Optional[int] | NotGiven = NOT_GIVEN,
         service_tier: Optional[Literal["auto", "default", "flex", "scale", "priority"]] | NotGiven = NOT_GIVEN,
         stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
@@ -197,8 +199,10 @@ def parser(raw_completion: ChatCompletion) -> ParsedChatCompletion[ResponseForma
                     "parallel_tool_calls": parallel_tool_calls,
                     "prediction": prediction,
                     "presence_penalty": presence_penalty,
+                    "prompt_cache_key": prompt_cache_key,
                     "reasoning_effort": reasoning_effort,
                     "response_format": _type_to_response_format(response_format),
+                    "safety_identifier": safety_identifier,
                     "seed": seed,
                     "service_tier": service_tier,
                     "stop": stop,
@@ -1378,7 +1382,9 @@ def stream(
         parallel_tool_calls: bool | NotGiven = NOT_GIVEN,
         prediction: Optional[ChatCompletionPredictionContentParam] | NotGiven = NOT_GIVEN,
         presence_penalty: Optional[float] | NotGiven = NOT_GIVEN,
+        prompt_cache_key: str | NotGiven = NOT_GIVEN,
         reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
+        safety_identifier: str | NotGiven = NOT_GIVEN,
         seed: Optional[int] | NotGiven = NOT_GIVEN,
         service_tier: Optional[Literal["auto", "default", "flex", "scale", "priority"]] | NotGiven = NOT_GIVEN,
         stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
@@ -1445,7 +1451,9 @@ def stream(
             parallel_tool_calls=parallel_tool_calls,
             prediction=prediction,
             presence_penalty=presence_penalty,
+            prompt_cache_key=prompt_cache_key,
             reasoning_effort=reasoning_effort,
+            safety_identifier=safety_identifier,
             seed=seed,
             service_tier=service_tier,
             store=store,
@@ -1514,7 +1522,9 @@ async def parse(
         parallel_tool_calls: bool | NotGiven = NOT_GIVEN,
         prediction: Optional[ChatCompletionPredictionContentParam] | NotGiven = NOT_GIVEN,
         presence_penalty: Optional[float] | NotGiven = NOT_GIVEN,
+        prompt_cache_key: str | NotGiven = NOT_GIVEN,
         reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
+        safety_identifier: str | NotGiven = NOT_GIVEN,
         seed: Optional[int] | NotGiven = NOT_GIVEN,
         service_tier: Optional[Literal["auto", "default", "flex", "scale", "priority"]] | NotGiven = NOT_GIVEN,
         stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
@@ -1610,8 +1620,10 @@ def parser(raw_completion: ChatCompletion) -> ParsedChatCompletion[ResponseForma
                     "parallel_tool_calls": parallel_tool_calls,
                     "prediction": prediction,
                     "presence_penalty": presence_penalty,
+                    "prompt_cache_key": prompt_cache_key,
                     "reasoning_effort": reasoning_effort,
                     "response_format": _type_to_response_format(response_format),
+                    "safety_identifier": safety_identifier,
                     "seed": seed,
                     "service_tier": service_tier,
                     "store": store,
@@ -2791,7 +2803,9 @@ def stream(
         parallel_tool_calls: bool | NotGiven = NOT_GIVEN,
         prediction: Optional[ChatCompletionPredictionContentParam] | NotGiven = NOT_GIVEN,
         presence_penalty: Optional[float] | NotGiven = NOT_GIVEN,
+        prompt_cache_key: str | NotGiven = NOT_GIVEN,
         reasoning_effort: Optional[ReasoningEffort] | NotGiven = NOT_GIVEN,
+        safety_identifier: str | NotGiven = NOT_GIVEN,
         seed: Optional[int] | NotGiven = NOT_GIVEN,
         service_tier: Optional[Literal["auto", "default", "flex", "scale", "priority"]] | NotGiven = NOT_GIVEN,
         stop: Union[Optional[str], List[str], None] | NotGiven = NOT_GIVEN,
@@ -2859,7 +2873,9 @@ def stream(
             parallel_tool_calls=parallel_tool_calls,
             prediction=prediction,
             presence_penalty=presence_penalty,
+            prompt_cache_key=prompt_cache_key,
             reasoning_effort=reasoning_effort,
+            safety_identifier=safety_identifier,
             seed=seed,
             service_tier=service_tier,
             stop=stop,
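With this change `parse()` and `stream()` accept the same `prompt_cache_key` and `safety_identifier` parameters that `create()` already took (this is the "add missing params" bug fix in the changelog). A sketch of a `parse()` call using them; the model name and identifier values are placeholders:

```python
from pydantic import BaseModel

from openai import OpenAI


class Greeting(BaseModel):
    text: str


client = OpenAI()

completion = client.chat.completions.parse(
    model="gpt-4o-mini",  # placeholder model
    messages=[{"role": "user", "content": "Say hello."}],
    response_format=Greeting,
    # Stable key so requests sharing a long prefix can hit the prompt cache.
    prompt_cache_key="greeting-v1",
    # Stable per-end-user identifier for abuse detection.
    safety_identifier="user-1234",
)
print(completion.choices[0].message.parsed)
```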
8 changes: 8 additions & 0 deletions src/openai/resources/responses/responses.py
@@ -1001,7 +1001,9 @@ def parse(
         parallel_tool_calls: Optional[bool] | NotGiven = NOT_GIVEN,
         previous_response_id: Optional[str] | NotGiven = NOT_GIVEN,
         prompt: Optional[ResponsePromptParam] | NotGiven = NOT_GIVEN,
+        prompt_cache_key: str | NotGiven = NOT_GIVEN,
         reasoning: Optional[Reasoning] | NotGiven = NOT_GIVEN,
+        safety_identifier: str | NotGiven = NOT_GIVEN,
         service_tier: Optional[Literal["auto", "default", "flex", "scale", "priority"]] | NotGiven = NOT_GIVEN,
         store: Optional[bool] | NotGiven = NOT_GIVEN,
         stream: Optional[Literal[False]] | Literal[True] | NotGiven = NOT_GIVEN,
@@ -1053,7 +1055,9 @@ def parser(raw_response: Response) -> ParsedResponse[TextFormatT]:
                     "parallel_tool_calls": parallel_tool_calls,
                     "previous_response_id": previous_response_id,
                     "prompt": prompt,
+                    "prompt_cache_key": prompt_cache_key,
                     "reasoning": reasoning,
+                    "safety_identifier": safety_identifier,
                     "service_tier": service_tier,
                     "store": store,
                     "stream": stream,
@@ -2316,7 +2320,9 @@ async def parse(
         parallel_tool_calls: Optional[bool] | NotGiven = NOT_GIVEN,
         previous_response_id: Optional[str] | NotGiven = NOT_GIVEN,
         prompt: Optional[ResponsePromptParam] | NotGiven = NOT_GIVEN,
+        prompt_cache_key: str | NotGiven = NOT_GIVEN,
         reasoning: Optional[Reasoning] | NotGiven = NOT_GIVEN,
+        safety_identifier: str | NotGiven = NOT_GIVEN,
         service_tier: Optional[Literal["auto", "default", "flex", "scale", "priority"]] | NotGiven = NOT_GIVEN,
         store: Optional[bool] | NotGiven = NOT_GIVEN,
         stream: Optional[Literal[False]] | Literal[True] | NotGiven = NOT_GIVEN,
@@ -2368,7 +2374,9 @@ def parser(raw_response: Response) -> ParsedResponse[TextFormatT]:
                     "parallel_tool_calls": parallel_tool_calls,
                     "previous_response_id": previous_response_id,
                     "prompt": prompt,
+                    "prompt_cache_key": prompt_cache_key,
                     "reasoning": reasoning,
+                    "safety_identifier": safety_identifier,
                     "service_tier": service_tier,
                     "store": store,
                     "stream": stream,
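The same two parameters are threaded through `responses.parse()`. A sketch, again with placeholder model and values:

```python
from pydantic import BaseModel

from openai import OpenAI


class Weather(BaseModel):
    city: str
    celsius: float


client = OpenAI()

response = client.responses.parse(
    model="gpt-4o-mini",  # placeholder model
    input="What's the weather in Paris? Answer with city and celsius.",
    text_format=Weather,
    prompt_cache_key="weather-lookup-v1",  # placeholder cache key
    safety_identifier="user-1234",         # placeholder end-user id
)
print(response.output_parsed)
```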
8 changes: 2 additions & 6 deletions src/openai/types/responses/__init__.py
@@ -94,24 +94,20 @@
 from .response_function_tool_call_param import ResponseFunctionToolCallParam as ResponseFunctionToolCallParam
 from .response_mcp_call_completed_event import ResponseMcpCallCompletedEvent as ResponseMcpCallCompletedEvent
 from .response_function_web_search_param import ResponseFunctionWebSearchParam as ResponseFunctionWebSearchParam
+from .response_reasoning_text_done_event import ResponseReasoningTextDoneEvent as ResponseReasoningTextDoneEvent
 from .response_code_interpreter_tool_call import ResponseCodeInterpreterToolCall as ResponseCodeInterpreterToolCall
 from .response_input_message_content_list import ResponseInputMessageContentList as ResponseInputMessageContentList
 from .response_mcp_call_in_progress_event import ResponseMcpCallInProgressEvent as ResponseMcpCallInProgressEvent
+from .response_reasoning_text_delta_event import ResponseReasoningTextDeltaEvent as ResponseReasoningTextDeltaEvent
 from .response_audio_transcript_done_event import ResponseAudioTranscriptDoneEvent as ResponseAudioTranscriptDoneEvent
 from .response_file_search_tool_call_param import ResponseFileSearchToolCallParam as ResponseFileSearchToolCallParam
 from .response_mcp_list_tools_failed_event import ResponseMcpListToolsFailedEvent as ResponseMcpListToolsFailedEvent
 from .response_audio_transcript_delta_event import (
     ResponseAudioTranscriptDeltaEvent as ResponseAudioTranscriptDeltaEvent,
 )
-from .response_reasoning_summary_done_event import (
-    ResponseReasoningSummaryDoneEvent as ResponseReasoningSummaryDoneEvent,
-)
 from .response_mcp_call_arguments_done_event import (
     ResponseMcpCallArgumentsDoneEvent as ResponseMcpCallArgumentsDoneEvent,
 )
-from .response_reasoning_summary_delta_event import (
-    ResponseReasoningSummaryDeltaEvent as ResponseReasoningSummaryDeltaEvent,
-)
 from .response_computer_tool_call_output_item import (
     ResponseComputerToolCallOutputItem as ResponseComputerToolCallOutputItem,
 )
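The summary-level `ResponseReasoningSummaryDeltaEvent`/`ResponseReasoningSummaryDoneEvent` exports are replaced by the text-level `ResponseReasoningText*Event` pair. A sketch of consuming them from a stream; the `response.reasoning_text.*` type literals are inferred from the new class names, and the model is a placeholder that must support reasoning output:

```python
from openai import OpenAI

client = OpenAI()

with client.responses.stream(
    model="o4-mini",  # assumed reasoning-capable model
    input="Think step by step: what is 17 * 24?",
) as stream:
    for event in stream:
        # Assumed event-type literals matching the new classes.
        if event.type == "response.reasoning_text.delta":
            print(event.delta, end="", flush=True)
        elif event.type == "response.reasoning_text.done":
            print()  # reasoning text for one content part is complete
```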
19 changes: 14 additions & 5 deletions src/openai/types/responses/response_reasoning_item.py
@@ -5,29 +5,38 @@
 
 from ..._models import BaseModel
 
-__all__ = ["ResponseReasoningItem", "Summary"]
+__all__ = ["ResponseReasoningItem", "Summary", "Content"]
 
 
 class Summary(BaseModel):
     text: str
-    """
-    A short summary of the reasoning used by the model when generating the response.
-    """
+    """A summary of the reasoning output from the model so far."""
 
     type: Literal["summary_text"]
     """The type of the object. Always `summary_text`."""
 
 
+class Content(BaseModel):
+    text: str
+    """Reasoning text output from the model."""
+
+    type: Literal["reasoning_text"]
+    """The type of the object. Always `reasoning_text`."""
+
+
 class ResponseReasoningItem(BaseModel):
     id: str
     """The unique identifier of the reasoning content."""
 
     summary: List[Summary]
-    """Reasoning text contents."""
+    """Reasoning summary content."""
 
     type: Literal["reasoning"]
     """The type of the object. Always `reasoning`."""
 
+    content: Optional[List[Content]] = None
+    """Reasoning text content."""
+
     encrypted_content: Optional[str] = None
     """
     The encrypted content of the reasoning item - populated when a response is
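`ResponseReasoningItem` gains an optional `content` list of `reasoning_text` parts alongside the existing `summary`. A sketch of reading both off a response; the model is a placeholder, and `content` is only populated when the model actually emits reasoning text:

```python
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="o4-mini",  # assumed reasoning-capable model
    input="Briefly: why is the sky blue?",
)

for item in resp.output:
    if item.type == "reasoning":
        for part in item.content or []:  # new in 1.99.0; may be None
            print("reasoning:", part.text)
        for part in item.summary:
            print("summary:", part.text)
```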
19 changes: 14 additions & 5 deletions src/openai/types/responses/response_reasoning_item_param.py
@@ -5,29 +5,38 @@
 from typing import Iterable, Optional
 from typing_extensions import Literal, Required, TypedDict
 
-__all__ = ["ResponseReasoningItemParam", "Summary"]
+__all__ = ["ResponseReasoningItemParam", "Summary", "Content"]
 
 
 class Summary(TypedDict, total=False):
     text: Required[str]
-    """
-    A short summary of the reasoning used by the model when generating the response.
-    """
+    """A summary of the reasoning output from the model so far."""
 
     type: Required[Literal["summary_text"]]
     """The type of the object. Always `summary_text`."""
 
 
+class Content(TypedDict, total=False):
+    text: Required[str]
+    """Reasoning text output from the model."""
+
+    type: Required[Literal["reasoning_text"]]
+    """The type of the object. Always `reasoning_text`."""
+
+
 class ResponseReasoningItemParam(TypedDict, total=False):
     id: Required[str]
     """The unique identifier of the reasoning content."""
 
     summary: Required[Iterable[Summary]]
-    """Reasoning text contents."""
+    """Reasoning summary content."""
 
     type: Required[Literal["reasoning"]]
     """The type of the object. Always `reasoning`."""
 
+    content: Iterable[Content]
+    """Reasoning text content."""
+
     encrypted_content: Optional[str]
     """
     The encrypted content of the reasoning item - populated when a response is
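The input-side TypedDict mirrors the model above, with `content` optional since the dict is `total=False` and `content` is not `Required`. A sketch of constructing one to pass back as input; the id and texts are placeholders:

```python
from openai.types.responses import ResponseReasoningItemParam

item: ResponseReasoningItemParam = {
    "id": "rs_123",  # placeholder reasoning item id
    "type": "reasoning",
    "summary": [
        {"text": "Outlined the multiplication.", "type": "summary_text"},
    ],
    # New in 1.99.0: optional reasoning_text parts.
    "content": [
        {"text": "17 * 24 = 408.", "type": "reasoning_text"},
    ],
}
```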

This file was deleted.
