release: 0.18.0 #179

Merged: 5 commits, Aug 18, 2025
2 changes: 1 addition & 1 deletion .release-please-manifest.json
Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
{
".": "0.17.1"
".": "0.18.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 109
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-6a1bfd4738fff02ef5becc3fdb2bf0cd6c026f2c924d4147a2a515474477dd9a.yml
openapi_spec_hash: 3eb8d86c06f0bb5e1190983e5acfc9ba
config_hash: a67c5e195a59855fe8a5db0dc61a3e7f
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-24be531010b354303d741fc9247c1f84f75978f9f7de68aca92cb4f240a04722.yml
openapi_spec_hash: 3e46f439f6a863beadc71577eb4efa15
config_hash: ed87b9139ac595a04a2162d754df2fed
22 changes: 22 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,27 @@
# Changelog

## 0.18.0 (2025-08-15)

Full Changelog: [v0.17.1...v0.18.0](https://github.com/openai/openai-ruby/compare/v0.17.1...v0.18.0)

### ⚠ BREAKING CHANGES

* structured output desc should go on array items not array itself ([#799](https://github.com/openai/openai-ruby/issues/799))
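The breaking change above can be sketched with plain hashes (hypothetical schema shapes for illustration, not verbatim gem output): a description supplied via `doc:` now lands on the array's `items` schema rather than on the array schema itself.

```ruby
# Shape of the generated JSON schema before 0.18.0 (description on the array):
before = {type: "array", items: {type: "string"}, description: "who might not show up"}

# Shape after 0.18.0 (description moved onto the items):
after = {type: "array", items: {type: "string", description: "who might not show up"}}

puts after[:items][:description] # => who might not show up
```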

### Features

* **api:** add new text parameters, expiration options ([f318432](https://github.com/openai/openai-ruby/commit/f318432b19800ab42d5b0c5f179f0cdd02dbf596))


### Bug Fixes

* structured output desc should go on array items not array itself ([#799](https://github.com/openai/openai-ruby/issues/799)) ([ff507d0](https://github.com/openai/openai-ruby/commit/ff507d095ff703ba3b44ab82b06eb4314688d4eb))


### Chores

* **internal:** update test skipping reason ([c815703](https://github.com/openai/openai-ruby/commit/c815703062ce79d2cb14f252ee5d23cf4ebf15ca))

## 0.17.1 (2025-08-09)

Full Changelog: [v0.17.0...v0.17.1](https://github.com/openai/openai-ruby/compare/v0.17.0...v0.17.1)
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
remote: .
specs:
openai (0.17.1)
openai (0.18.0)
connection_pool

GEM
2 changes: 1 addition & 1 deletion README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
gem "openai", "~> 0.17.1"
gem "openai", "~> 0.18.0"
```

<!-- x-release-please-end -->
2 changes: 1 addition & 1 deletion examples/structured_outputs_chat_completions.rb
@@ -22,7 +22,7 @@ class CalendarEvent < OpenAI::BaseModel
required :name, String
required :date, String
required :participants, OpenAI::ArrayOf[Participant]
required :optional_participants, OpenAI::ArrayOf[Participant], nil?: true
required :optional_participants, OpenAI::ArrayOf[Participant, doc: "who might not show up"], nil?: true
required :is_virtual, OpenAI::Boolean
required :location,
OpenAI::UnionOf[String, Location],
2 changes: 1 addition & 1 deletion examples/structured_outputs_responses.rb
@@ -21,7 +21,7 @@ class CalendarEvent < OpenAI::BaseModel
required :name, String
required :date, String
required :participants, OpenAI::ArrayOf[Participant]
required :optional_participants, OpenAI::ArrayOf[Participant], nil?: true
required :optional_participants, OpenAI::ArrayOf[Participant, doc: "who might not show up"], nil?: true
required :is_virtual, OpenAI::Boolean
required :location,
OpenAI::UnionOf[String, Location],
12 changes: 2 additions & 10 deletions lib/openai/helpers/structured_output/array_of.rb
@@ -30,19 +30,11 @@ def to_json_schema_inner(state:)
state: state
)
items = OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.to_nilable(items) if nilable?
OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.assoc_meta!(items, meta: @meta)

schema = {type: "array", items: items}
description.nil? ? schema : schema.update(description: description)
{type: "array", items: items}
end
end

# @return [String, nil]
attr_reader :description

def initialize(type_info, spec = {})
super
@description = [type_info, spec].grep(Hash).filter_map { _1[:doc] }.first
end
end
end
end
15 changes: 4 additions & 11 deletions lib/openai/helpers/structured_output/base_model.rb
@@ -28,22 +28,22 @@ def to_json_schema_inner(state:)
OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.cache_def!(state, type: self) do
path = state.fetch(:path)
properties = fields.to_h do |name, field|
type, nilable = field.fetch_values(:type, :nilable)
type, nilable, meta = field.fetch_values(:type, :nilable, :meta)
new_state = {**state, path: [*path, ".#{name}"]}

schema =
case type
in {"$ref": String}
type
in OpenAI::Helpers::StructuredOutput::JsonSchemaConverter
type.to_json_schema_inner(state: new_state).update(field.slice(:description))
type.to_json_schema_inner(state: new_state)
else
OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.to_json_schema_inner(
type,
state: new_state
)
end
schema = OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.to_nilable(schema) if nilable
OpenAI::Helpers::StructuredOutput::JsonSchemaConverter.assoc_meta!(schema, meta: meta)

[name, schema]
end

@@ -58,13 +58,6 @@ def to_json_schema_inner(state:)
end

class << self
def required(name_sym, type_info, spec = {})
super

doc = [type_info, spec].grep(Hash).filter_map { _1[:doc] }.first
known_fields.fetch(name_sym).update(description: doc) unless doc.nil?
end

def optional(...)
# rubocop:disable Layout/LineLength
message = "`optional` is not supported for structured output APIs, use `#required` with `nil?: true` instead"
22 changes: 19 additions & 3 deletions lib/openai/helpers/structured_output/json_schema_converter.rb
@@ -46,7 +46,7 @@ def to_nilable(schema)
in {"$ref": String}
{
anyOf: [
schema.update(OpenAI::Helpers::StructuredOutput::JsonSchemaConverter::NO_REF => true),
schema.merge!(OpenAI::Helpers::StructuredOutput::JsonSchemaConverter::NO_REF => true),
{type: null}
]
}
Expand All @@ -60,6 +60,17 @@ def to_nilable(schema)
end
end

# @api private
#
# @param schema [Hash{Symbol=>Object}]
def assoc_meta!(schema, meta:)
xformed = meta.transform_keys(doc: :description)
if schema.key?(:$ref) && !xformed.empty?
schema.merge!(OpenAI::Helpers::StructuredOutput::JsonSchemaConverter::NO_REF => true)
end
schema.merge!(xformed)
end

# @api private
#
# @param state [Hash{Symbol=>Object}]
@@ -116,12 +127,17 @@ def to_json_schema(type)

case refs
in [ref]
ref.replace(sch)
ref.replace(ref.except(:$ref).merge(sch))
in [_, ref, *]
reused_defs.store(ref.fetch(:$ref), sch)
refs.each do
unless (meta = _1.except(:$ref)).empty?
_1.replace(allOf: [_1.slice(:$ref), meta])
end
end
else
end
no_refs.each { _1.replace(sch) }
no_refs.each { _1.replace(_1.except(:$ref).merge(sch)) }
end

xformed = reused_defs.transform_keys { _1.delete_prefix("#/$defs/") }
12 changes: 2 additions & 10 deletions lib/openai/helpers/structured_output/union_of.rb
@@ -56,16 +56,8 @@ def self.[](...) = new(...)

# @param variants [Array<generic<Member>>]
def initialize(*variants)
case variants
in [Symbol => d, Hash => vs]
discriminator(d)
vs.each do |k, v|
v.is_a?(Proc) ? variant(k, v) : variant(k, -> { v })
end
else
variants.each do |v|
v.is_a?(Proc) ? variant(v) : variant(-> { v })
end
variants.each do |v|
v.is_a?(Proc) ? variant(v) : variant(-> { v })
end
end
end
39 changes: 38 additions & 1 deletion lib/openai/models/batch_create_params.rb
@@ -48,7 +48,14 @@ class BatchCreateParams < OpenAI::Internal::Type::BaseModel
# @return [Hash{Symbol=>String}, nil]
optional :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true

# @!method initialize(completion_window:, endpoint:, input_file_id:, metadata: nil, request_options: {})
# @!attribute output_expires_after
# The expiration policy for the output and/or error file that are generated for a
# batch.
#
# @return [OpenAI::Models::BatchCreateParams::OutputExpiresAfter, nil]
optional :output_expires_after, -> { OpenAI::BatchCreateParams::OutputExpiresAfter }

# @!method initialize(completion_window:, endpoint:, input_file_id:, metadata: nil, output_expires_after: nil, request_options: {})
# Some parameter documentations has been truncated, see
# {OpenAI::Models::BatchCreateParams} for more details.
#
Expand All @@ -60,6 +67,8 @@ class BatchCreateParams < OpenAI::Internal::Type::BaseModel
#
# @param metadata [Hash{Symbol=>String}, nil] Set of 16 key-value pairs that can be attached to an object. This can be
#
# @param output_expires_after [OpenAI::Models::BatchCreateParams::OutputExpiresAfter] The expiration policy for the output and/or error file that are generated for a
#
# @param request_options [OpenAI::RequestOptions, Hash{Symbol=>Object}]

# The time frame within which the batch should be processed. Currently only `24h`
@@ -88,6 +97,34 @@ module Endpoint
# @!method self.values
# @return [Array<Symbol>]
end

class OutputExpiresAfter < OpenAI::Internal::Type::BaseModel
# @!attribute anchor
# Anchor timestamp after which the expiration policy applies. Supported anchors:
# `created_at`. Note that the anchor is the file creation time, not the time the
# batch is created.
#
# @return [Symbol, :created_at]
required :anchor, const: :created_at

# @!attribute seconds
# The number of seconds after the anchor time that the file will expire. Must be
# between 3600 (1 hour) and 2592000 (30 days).
#
# @return [Integer]
required :seconds, Integer

# @!method initialize(seconds:, anchor: :created_at)
# Some parameter documentations has been truncated, see
# {OpenAI::Models::BatchCreateParams::OutputExpiresAfter} for more details.
#
# The expiration policy for the output and/or error file that are generated for a
# batch.
#
# @param seconds [Integer] The number of seconds after the anchor time that the file will expire. Must be b
#
# @param anchor [Symbol, :created_at] Anchor timestamp after which the expiration policy applies. Supported anchors: `
end
end
end
end
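A minimal sketch of how the new `output_expires_after` parameter might be supplied on batch creation (the file ID and payload shape here are illustrative assumptions, not verbatim API usage):

```ruby
# Hypothetical batch-create payload; `output_expires_after` follows the new
# OutputExpiresAfter model above (its anchor is always :created_at).
params = {
  input_file_id: "file-abc123",  # placeholder ID, assumed for illustration
  endpoint: "/v1/chat/completions",
  completion_window: "24h",
  output_expires_after: {anchor: :created_at, seconds: 3600}
}

# Per the docs above, `seconds` must be between 3600 (1 hour) and 2_592_000 (30 days).
valid = (3600..2_592_000).cover?(params.dig(:output_expires_after, :seconds))
puts valid # => true
```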
4 changes: 2 additions & 2 deletions lib/openai/models/beta/thread_create_and_run_params.rb
@@ -157,7 +157,7 @@ class ThreadCreateAndRunParams < OpenAI::Internal::Type::BaseModel

# @!attribute truncation_strategy
# Controls for how a thread will be truncated prior to the run. Use this to
# control the intial context window of the run.
# control the initial context window of the run.
#
# @return [OpenAI::Models::Beta::ThreadCreateAndRunParams::TruncationStrategy, nil]
optional :truncation_strategy,
@@ -694,7 +694,7 @@ class TruncationStrategy < OpenAI::Internal::Type::BaseModel
# details.
#
# Controls for how a thread will be truncated prior to the run. Use this to
# control the intial context window of the run.
# control the initial context window of the run.
#
# @param type [Symbol, OpenAI::Models::Beta::ThreadCreateAndRunParams::TruncationStrategy::Type] The truncation strategy to use for the thread. The default is `auto`. If set to
#
4 changes: 2 additions & 2 deletions lib/openai/models/beta/threads/run.rb
@@ -195,7 +195,7 @@ class Run < OpenAI::Internal::Type::BaseModel

# @!attribute truncation_strategy
# Controls for how a thread will be truncated prior to the run. Use this to
# control the intial context window of the run.
# control the initial context window of the run.
#
# @return [OpenAI::Models::Beta::Threads::Run::TruncationStrategy, nil]
required :truncation_strategy, -> { OpenAI::Beta::Threads::Run::TruncationStrategy }, nil?: true
@@ -415,7 +415,7 @@ class TruncationStrategy < OpenAI::Internal::Type::BaseModel
# {OpenAI::Models::Beta::Threads::Run::TruncationStrategy} for more details.
#
# Controls for how a thread will be truncated prior to the run. Use this to
# control the intial context window of the run.
# control the initial context window of the run.
#
# @param type [Symbol, OpenAI::Models::Beta::Threads::Run::TruncationStrategy::Type] The truncation strategy to use for the thread. The default is `auto`. If set to
#
4 changes: 2 additions & 2 deletions lib/openai/models/beta/threads/run_create_params.rb
@@ -184,7 +184,7 @@ class RunCreateParams < OpenAI::Internal::Type::BaseModel

# @!attribute truncation_strategy
# Controls for how a thread will be truncated prior to the run. Use this to
# control the intial context window of the run.
# control the initial context window of the run.
#
# @return [OpenAI::Models::Beta::Threads::RunCreateParams::TruncationStrategy, nil]
optional :truncation_strategy,
@@ -413,7 +413,7 @@ class TruncationStrategy < OpenAI::Internal::Type::BaseModel
# details.
#
# Controls for how a thread will be truncated prior to the run. Use this to
# control the intial context window of the run.
# control the initial context window of the run.
#
# @param type [Symbol, OpenAI::Models::Beta::Threads::RunCreateParams::TruncationStrategy::Type] The truncation strategy to use for the thread. The default is `auto`. If set to
#
12 changes: 6 additions & 6 deletions lib/openai/models/chat/chat_completion.rb
@@ -47,9 +47,8 @@ class ChatCompletion < OpenAI::Internal::Type::BaseModel
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
# tier. [Contact sales](https://openai.com/contact-sales) to learn more about
# Priority processing.
# '[priority](https://openai.com/api-priority-processing/)', then the request
# will be processed with the corresponding service tier.
# - When not set, the default behavior is 'auto'.
#
# When the `service_tier` parameter is set, the response body will include the
@@ -61,6 +60,8 @@
optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletion::ServiceTier }, nil?: true

# @!attribute system_fingerprint
# @deprecated
#
# This fingerprint represents the backend configuration that the model runs with.
#
# Can be used in conjunction with the `seed` request parameter to understand when
@@ -196,9 +197,8 @@ class Logprobs < OpenAI::Internal::Type::BaseModel
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
# tier. [Contact sales](https://openai.com/contact-sales) to learn more about
# Priority processing.
# '[priority](https://openai.com/api-priority-processing/)', then the request
# will be processed with the corresponding service tier.
# - When not set, the default behavior is 'auto'.
#
# When the `service_tier` parameter is set, the response body will include the
12 changes: 6 additions & 6 deletions lib/openai/models/chat/chat_completion_chunk.rb
@@ -46,9 +46,8 @@ class ChatCompletionChunk < OpenAI::Internal::Type::BaseModel
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
# tier. [Contact sales](https://openai.com/contact-sales) to learn more about
# Priority processing.
# '[priority](https://openai.com/api-priority-processing/)', then the request
# will be processed with the corresponding service tier.
# - When not set, the default behavior is 'auto'.
#
# When the `service_tier` parameter is set, the response body will include the
@@ -60,6 +59,8 @@
optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletionChunk::ServiceTier }, nil?: true

# @!attribute system_fingerprint
# @deprecated
#
# This fingerprint represents the backend configuration that the model runs with.
# Can be used in conjunction with the `seed` request parameter to understand when
# backend changes have been made that might impact determinism.
@@ -379,9 +380,8 @@ class Logprobs < OpenAI::Internal::Type::BaseModel
# - If set to 'default', then the request will be processed with the standard
# pricing and performance for the selected model.
# - If set to '[flex](https://platform.openai.com/docs/guides/flex-processing)' or
# 'priority', then the request will be processed with the corresponding service
# tier. [Contact sales](https://openai.com/contact-sales) to learn more about
# Priority processing.
# '[priority](https://openai.com/api-priority-processing/)', then the request
# will be processed with the corresponding service tier.
# - When not set, the default behavior is 'auto'.
#
# When the `service_tier` parameter is set, the response body will include the