release: 0.5.1 #152

Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.5.0"
".": "0.5.1"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 109
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-d4bcffecf0cdadf746faa6708ed1ec81fac451f9b857deabbab26f0a343b9314.yml
openapi_spec_hash: 7c54a18b4381248bda7cc34c52142615
config_hash: d23f847b9ebb3f427d0f198035bd3e9f
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-2bcc845d8635bf93ddcf9ee723af4d7928248412a417bee5fc10d863a1e13867.yml
openapi_spec_hash: 865230cb3abeb01bd85de05891af23c4
config_hash: ed1e6b3c5f93d12b80d31167f55c557c
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,13 @@
# Changelog

## 0.5.1 (2025-06-02)

Full Changelog: [v0.5.0...v0.5.1](https://github.com/openai/openai-ruby/compare/v0.5.0...v0.5.1)

### Bug Fixes

* **api:** Fix evals and code interpreter interfaces ([24a9100](https://github.com/openai/openai-ruby/commit/24a910015e6885fc19a2ad689fe70a148bed5787))

## 0.5.0 (2025-05-29)

Full Changelog: [v0.4.1...v0.5.0](https://github.com/openai/openai-ruby/compare/v0.4.1...v0.5.0)
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
remote: .
specs:
openai (0.5.0)
openai (0.5.1)
connection_pool

GEM
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
gem "openai", "~> 0.5.0"
gem "openai", "~> 0.5.1"
```

<!-- x-release-please-end -->
6 changes: 3 additions & 3 deletions lib/openai/models/audio/transcription_text_delta_event.rb
@@ -50,8 +50,8 @@ class Logprob < OpenAI::Internal::Type::BaseModel
# @!attribute bytes
# The bytes that were used to generate the log probability.
#
# @return [Array<Object>, nil]
optional :bytes, OpenAI::Internal::Type::ArrayOf[OpenAI::Internal::Type::Unknown]
# @return [Array<Integer>, nil]
optional :bytes, OpenAI::Internal::Type::ArrayOf[Integer]

# @!attribute logprob
# The log probability of the token.
@@ -65,7 +65,7 @@ class Logprob < OpenAI::Internal::Type::BaseModel
#
# @param token [String] The token that was used to generate the log probability.
#
# @param bytes [Array<Object>] The bytes that were used to generate the log probability.
# @param bytes [Array<Integer>] The bytes that were used to generate the log probability.
#
# @param logprob [Float] The log probability of the token.
end
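The `bytes` type in the two transcription logprob models narrows from `Array<Object>` to `Array<Integer>`: the UTF-8 byte values of the token. Plain Ruby illustrates why multi-byte tokens need an integer array rather than a string:

```ruby
# A transcription logprob's `bytes` field is now Array<Integer>: the
# UTF-8 byte values of the token, as String#bytes produces.
token = "café"
bytes = token.bytes # => [99, 97, 102, 195, 169]

# "é" spans two byte values (195, 169), so a byte-level field cannot be
# a plain character array. The integers round-trip back to the token:
reconstructed = bytes.pack("C*").force_encoding(Encoding::UTF_8)
raise "round-trip failed" unless reconstructed == token
```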
6 changes: 3 additions & 3 deletions lib/openai/models/audio/transcription_text_done_event.rb
@@ -51,8 +51,8 @@ class Logprob < OpenAI::Internal::Type::BaseModel
# @!attribute bytes
# The bytes that were used to generate the log probability.
#
# @return [Array<Object>, nil]
optional :bytes, OpenAI::Internal::Type::ArrayOf[OpenAI::Internal::Type::Unknown]
# @return [Array<Integer>, nil]
optional :bytes, OpenAI::Internal::Type::ArrayOf[Integer]

# @!attribute logprob
# The log probability of the token.
@@ -66,7 +66,7 @@ class Logprob < OpenAI::Internal::Type::BaseModel
#
# @param token [String] The token that was used to generate the log probability.
#
# @param bytes [Array<Object>] The bytes that were used to generate the log probability.
# @param bytes [Array<Integer>] The bytes that were used to generate the log probability.
#
# @param logprob [Float] The log probability of the token.
end
8 changes: 4 additions & 4 deletions lib/openai/models/chat/chat_completion.rb
@@ -46,9 +46,9 @@ class ChatCompletion < OpenAI::Internal::Type::BaseModel
# utilize scale tier credits until they are exhausted.
# - If set to 'auto', and the Project is not Scale tier enabled, the request will
# be processed using the default service tier with a lower uptime SLA and no
# latency guarentee.
# latency guarantee.
# - If set to 'default', the request will be processed using the default service
# tier with a lower uptime SLA and no latency guarentee.
# tier with a lower uptime SLA and no latency guarantee.
# - If set to 'flex', the request will be processed with the Flex Processing
# service tier.
# [Learn more](https://platform.openai.com/docs/guides/flex-processing).
@@ -195,9 +195,9 @@ class Logprobs < OpenAI::Internal::Type::BaseModel
# utilize scale tier credits until they are exhausted.
# - If set to 'auto', and the Project is not Scale tier enabled, the request will
# be processed using the default service tier with a lower uptime SLA and no
# latency guarentee.
# latency guarantee.
# - If set to 'default', the request will be processed using the default service
# tier with a lower uptime SLA and no latency guarentee.
# tier with a lower uptime SLA and no latency guarantee.
# - If set to 'flex', the request will be processed with the Flex Processing
# service tier.
# [Learn more](https://platform.openai.com/docs/guides/flex-processing).
8 changes: 4 additions & 4 deletions lib/openai/models/chat/chat_completion_chunk.rb
@@ -45,9 +45,9 @@ class ChatCompletionChunk < OpenAI::Internal::Type::BaseModel
# utilize scale tier credits until they are exhausted.
# - If set to 'auto', and the Project is not Scale tier enabled, the request will
# be processed using the default service tier with a lower uptime SLA and no
# latency guarentee.
# latency guarantee.
# - If set to 'default', the request will be processed using the default service
# tier with a lower uptime SLA and no latency guarentee.
# tier with a lower uptime SLA and no latency guarantee.
# - If set to 'flex', the request will be processed with the Flex Processing
# service tier.
# [Learn more](https://platform.openai.com/docs/guides/flex-processing).
@@ -378,9 +378,9 @@ class Logprobs < OpenAI::Internal::Type::BaseModel
# utilize scale tier credits until they are exhausted.
# - If set to 'auto', and the Project is not Scale tier enabled, the request will
# be processed using the default service tier with a lower uptime SLA and no
# latency guarentee.
# latency guarantee.
# - If set to 'default', the request will be processed using the default service
# tier with a lower uptime SLA and no latency guarentee.
# tier with a lower uptime SLA and no latency guarantee.
# - If set to 'flex', the request will be processed with the Flex Processing
# service tier.
# [Learn more](https://platform.openai.com/docs/guides/flex-processing).
8 changes: 4 additions & 4 deletions lib/openai/models/chat/completion_create_params.rb
@@ -226,9 +226,9 @@ class CompletionCreateParams < OpenAI::Internal::Type::BaseModel
# utilize scale tier credits until they are exhausted.
# - If set to 'auto', and the Project is not Scale tier enabled, the request will
# be processed using the default service tier with a lower uptime SLA and no
# latency guarentee.
# latency guarantee.
# - If set to 'default', the request will be processed using the default service
# tier with a lower uptime SLA and no latency guarentee.
# tier with a lower uptime SLA and no latency guarantee.
# - If set to 'flex', the request will be processed with the Flex Processing
# service tier.
# [Learn more](https://platform.openai.com/docs/guides/flex-processing).
@@ -553,9 +553,9 @@ module ResponseFormat
# utilize scale tier credits until they are exhausted.
# - If set to 'auto', and the Project is not Scale tier enabled, the request will
# be processed using the default service tier with a lower uptime SLA and no
# latency guarentee.
# latency guarantee.
# - If set to 'default', the request will be processed using the default service
# tier with a lower uptime SLA and no latency guarentee.
# tier with a lower uptime SLA and no latency guarantee.
# - If set to 'flex', the request will be processed with the Flex Processing
# service tier.
# [Learn more](https://platform.openai.com/docs/guides/flex-processing).
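The docstring fix above is cosmetic, but the corrected text enumerates the accepted `service_tier` values. A minimal sketch of the request-params shape, offline (no client call; the model name is an assumption, only the tier literals come from the docstring in this diff):

```ruby
# Tiers named in the service_tier docstring: 'auto', 'default', 'flex'.
ALLOWED_TIERS = %w[auto default flex].freeze

params = {
  model: "gpt-4o", # assumed model name, for illustration only
  messages: [{role: "user", content: "Hello"}],
  service_tier: "flex" # request Flex Processing (lower cost, no latency guarantee)
}

raise ArgumentError, "unknown service_tier" unless ALLOWED_TIERS.include?(params[:service_tier])
```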
47 changes: 17 additions & 30 deletions lib/openai/models/fine_tuning/alpha/grader_run_params.rb
@@ -16,26 +16,32 @@ class GraderRunParams < OpenAI::Internal::Type::BaseModel
required :grader, union: -> { OpenAI::FineTuning::Alpha::GraderRunParams::Grader }

# @!attribute model_sample
# The model sample to be evaluated.
# The model sample to be evaluated. This value will be used to populate the
# `sample` namespace. See
# [the guide](https://platform.openai.com/docs/guides/graders) for more details.
# The `output_json` variable will be populated if the model sample is a valid JSON
# string.
#
# @return [String]
required :model_sample, String

# @!attribute reference_answer
# The reference answer for the evaluation.
# @!attribute item
# The dataset item provided to the grader. This will be used to populate the
# `item` namespace. See
# [the guide](https://platform.openai.com/docs/guides/graders) for more details.
#
# @return [String, Object, Array<Object>, Float]
required :reference_answer,
union: -> {
OpenAI::FineTuning::Alpha::GraderRunParams::ReferenceAnswer
}
# @return [Object, nil]
optional :item, OpenAI::Internal::Type::Unknown

# @!method initialize(grader:, model_sample:, reference_answer:, request_options: {})
# @!method initialize(grader:, model_sample:, item: nil, request_options: {})
# Some parameter documentations has been truncated, see
# {OpenAI::Models::FineTuning::Alpha::GraderRunParams} for more details.
#
# @param grader [OpenAI::Models::Graders::StringCheckGrader, OpenAI::Models::Graders::TextSimilarityGrader, OpenAI::Models::Graders::PythonGrader, OpenAI::Models::Graders::ScoreModelGrader, OpenAI::Models::Graders::MultiGrader] The grader used for the fine-tuning job.
#
# @param model_sample [String] The model sample to be evaluated.
# @param model_sample [String] The model sample to be evaluated. This value will be used to populate
#
# @param reference_answer [String, Object, Array<Object>, Float] The reference answer for the evaluation.
# @param item [Object] The dataset item provided to the grader. This will be used to populate
#
# @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}]

@@ -63,25 +69,6 @@ module Grader
# @!method self.variants
# @return [Array(OpenAI::Models::Graders::StringCheckGrader, OpenAI::Models::Graders::TextSimilarityGrader, OpenAI::Models::Graders::PythonGrader, OpenAI::Models::Graders::ScoreModelGrader, OpenAI::Models::Graders::MultiGrader)]
end

# The reference answer for the evaluation.
module ReferenceAnswer
extend OpenAI::Internal::Type::Union

variant String

variant OpenAI::Internal::Type::Unknown

variant -> { OpenAI::Models::FineTuning::Alpha::GraderRunParams::ReferenceAnswer::UnionMember2Array }

variant Float

# @!method self.variants
# @return [Array(String, Object, Array<Object>, Float)]

# @type [OpenAI::Internal::Type::Converter]
UnionMember2Array = OpenAI::Internal::Type::ArrayOf[OpenAI::Internal::Type::Unknown]
end
end
end
end
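This change drops the `reference_answer` union in favor of an optional free-form `item` hash, which populates the `item` namespace that grader templates reference. A sketch of the new params shape (the string-check grader fields are assumed for illustration, not taken from this diff):

```ruby
# New GraderRunParams shape: model_sample plus an optional `item` hash.
run_params = {
  grader: {
    type: :string_check, # grader shape assumed for illustration
    name: "exact_match",
    input: "{{sample.output_text}}",
    reference: "{{item.answer}}", # reads from the `item` namespace
    operation: :eq
  },
  model_sample: "Paris",  # populates the `sample` namespace
  item: {answer: "Paris"} # replaces the removed reference_answer field
}

raise TypeError, "item must be a Hash when given" unless run_params[:item].is_a?(Hash)
```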
8 changes: 3 additions & 5 deletions lib/openai/models/fine_tuning/fine_tuning_job.rb
@@ -226,7 +226,7 @@ class Hyperparameters < OpenAI::Internal::Type::BaseModel
# Number of examples in each batch. A larger batch size means that model
# parameters are updated less frequently, but with lower variance.
#
# @return [Object, Symbol, :auto, Integer, nil]
# @return [Symbol, :auto, Integer, nil]
optional :batch_size,
union: -> { OpenAI::FineTuning::FineTuningJob::Hyperparameters::BatchSize },
nil?: true
@@ -253,7 +253,7 @@ class Hyperparameters < OpenAI::Internal::Type::BaseModel
# The hyperparameters used for the fine-tuning job. This value will only be
# returned when running `supervised` jobs.
#
# @param batch_size [Object, Symbol, :auto, Integer, nil] Number of examples in each batch. A larger batch size means that model parameter
# @param batch_size [Symbol, :auto, Integer, nil] Number of examples in each batch. A larger batch size means that model parameter
#
# @param learning_rate_multiplier [Symbol, :auto, Float] Scaling factor for the learning rate. A smaller learning rate may be useful to a
#
@@ -266,14 +266,12 @@ class Hyperparameters < OpenAI::Internal::Type::BaseModel
module BatchSize
extend OpenAI::Internal::Type::Union

variant OpenAI::Internal::Type::Unknown

variant const: :auto

variant Integer

# @!method self.variants
# @return [Array(Object, Symbol, :auto, Integer)]
# @return [Array(Symbol, :auto, Integer)]
end

# Scaling factor for the learning rate. A smaller learning rate may be useful to
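With the `Unknown` variant removed, `batch_size` is strictly `:auto`, an `Integer`, or `nil`. A small validation sketch (the helper name is illustrative, not part of the SDK):

```ruby
# batch_size union after this change: :auto | Integer | nil.
def valid_batch_size?(value)
  value.nil? || value == :auto || value.is_a?(Integer)
end

raise unless valid_batch_size?(:auto)
raise unless valid_batch_size?(32)
raise if valid_batch_size?("32") # arbitrary objects no longer pass
```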
15 changes: 11 additions & 4 deletions lib/openai/models/graders/multi_grader.rb
@@ -11,9 +11,11 @@ class MultiGrader < OpenAI::Internal::Type::BaseModel
required :calculate_output, String

# @!attribute graders
# A StringCheckGrader object that performs a string comparison between input and
# reference using a specified operation.
#
# @return [Hash{Symbol=>OpenAI::Models::Graders::StringCheckGrader, OpenAI::Models::Graders::TextSimilarityGrader, OpenAI::Models::Graders::PythonGrader, OpenAI::Models::Graders::ScoreModelGrader, OpenAI::Models::Graders::LabelModelGrader}]
required :graders, -> { OpenAI::Internal::Type::HashOf[union: OpenAI::Graders::MultiGrader::Grader] }
# @return [OpenAI::Models::Graders::StringCheckGrader, OpenAI::Models::Graders::TextSimilarityGrader, OpenAI::Models::Graders::PythonGrader, OpenAI::Models::Graders::ScoreModelGrader, OpenAI::Models::Graders::LabelModelGrader]
required :graders, union: -> { OpenAI::Graders::MultiGrader::Graders }

# @!attribute name
# The name of the grader.
@@ -28,20 +30,25 @@ class MultiGrader < OpenAI::Internal::Type::BaseModel
required :type, const: :multi

# @!method initialize(calculate_output:, graders:, name:, type: :multi)
# Some parameter documentations has been truncated, see
# {OpenAI::Models::Graders::MultiGrader} for more details.
#
# A MultiGrader object combines the output of multiple graders to produce a single
# score.
#
# @param calculate_output [String] A formula to calculate the output based on grader results.
#
# @param graders [Hash{Symbol=>OpenAI::Models::Graders::StringCheckGrader, OpenAI::Models::Graders::TextSimilarityGrader, OpenAI::Models::Graders::PythonGrader, OpenAI::Models::Graders::ScoreModelGrader, OpenAI::Models::Graders::LabelModelGrader}]
# @param graders [OpenAI::Models::Graders::StringCheckGrader, OpenAI::Models::Graders::TextSimilarityGrader, OpenAI::Models::Graders::PythonGrader, OpenAI::Models::Graders::ScoreModelGrader, OpenAI::Models::Graders::LabelModelGrader] A StringCheckGrader object that performs a string comparison between input and r
#
# @param name [String] The name of the grader.
#
# @param type [Symbol, :multi] The object type, which is always `multi`.

# A StringCheckGrader object that performs a string comparison between input and
# reference using a specified operation.
module Grader
#
# @see OpenAI::Models::Graders::MultiGrader#graders
module Graders
extend OpenAI::Internal::Type::Union

# A StringCheckGrader object that performs a string comparison between input and reference using a specified operation.
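The `graders` attribute changes from a name-keyed hash of graders to a single grader union member. A hedged before/after sketch of the payload shapes (only the field names come from this diff; the string-check contents are assumed):

```ruby
# Old shape: `graders` was Hash{Symbol => grader}, keyed by grader name.
old_payload = {graders: {accuracy: {type: :string_check}}}

# New shape: `graders` is one grader object, not a map of them.
new_payload = {
  calculate_output: "accuracy",
  graders: {type: :string_check, name: "accuracy"}, # single grader union
  name: "example multi grader",
  type: :multi
}

# The grader no longer sits one level deeper under its name:
raise unless old_payload[:graders][:accuracy][:type] == :string_check
raise unless new_payload[:graders][:type] == :string_check
```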
4 changes: 2 additions & 2 deletions lib/openai/models/image_edit_params.rb
@@ -11,7 +11,7 @@ class ImageEditParams < OpenAI::Internal::Type::BaseModel
# The image(s) to edit. Must be a supported image file or an array of images.
#
# For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg` file less than
# 25MB. You can provide up to 16 images.
# 50MB. You can provide up to 16 images.
#
# For `dall-e-2`, you can only provide one image, and it should be a square `png`
# file less than 4MB.
@@ -123,7 +123,7 @@ class ImageEditParams < OpenAI::Internal::Type::BaseModel
# The image(s) to edit. Must be a supported image file or an array of images.
#
# For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg` file less than
# 25MB. You can provide up to 16 images.
# 50MB. You can provide up to 16 images.
#
# For `dall-e-2`, you can only provide one image, and it should be a square `png`
# file less than 4MB.
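The per-image limit for `gpt-image-1` edits doubles from 25MB to 50MB, while `dall-e-2` stays at 4MB. A client-side pre-check sketch (the helper is illustrative, and MB is assumed to mean mebibytes here):

```ruby
# Documented per-image size limits for the edit endpoint.
GPT_IMAGE_1_LIMIT = 50 * 1024 * 1024 # raised from 25 MB in this release
DALL_E_2_LIMIT    = 4 * 1024 * 1024

def within_limit?(size_bytes, model)
  limit = model == "gpt-image-1" ? GPT_IMAGE_1_LIMIT : DALL_E_2_LIMIT
  size_bytes <= limit
end

raise unless within_limit?(30 * 1024 * 1024, "gpt-image-1") # now under 50 MB
raise if within_limit?(30 * 1024 * 1024, "dall-e-2")        # still over 4 MB
```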
8 changes: 4 additions & 4 deletions lib/openai/models/responses/response.rb
@@ -173,9 +173,9 @@ class Response < OpenAI::Internal::Type::BaseModel
# utilize scale tier credits until they are exhausted.
# - If set to 'auto', and the Project is not Scale tier enabled, the request will
# be processed using the default service tier with a lower uptime SLA and no
# latency guarentee.
# latency guarantee.
# - If set to 'default', the request will be processed using the default service
# tier with a lower uptime SLA and no latency guarentee.
# tier with a lower uptime SLA and no latency guarantee.
# - If set to 'flex', the request will be processed with the Flex Processing
# service tier.
# [Learn more](https://platform.openai.com/docs/guides/flex-processing).
@@ -346,9 +346,9 @@ module ToolChoice
# utilize scale tier credits until they are exhausted.
# - If set to 'auto', and the Project is not Scale tier enabled, the request will
# be processed using the default service tier with a lower uptime SLA and no
# latency guarentee.
# latency guarantee.
# - If set to 'default', the request will be processed using the default service
# tier with a lower uptime SLA and no latency guarentee.
# tier with a lower uptime SLA and no latency guarantee.
# - If set to 'flex', the request will be processed with the Flex Processing
# service tier.
# [Learn more](https://platform.openai.com/docs/guides/flex-processing).