2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "1.99.9"
+  ".": "1.100.0"
 }
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
 configured_endpoints: 111
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-9cadfad609f94f20ebf74fdc06a80302f1a324dc69700a309a8056aabca82fd2.yml
-openapi_spec_hash: 3eb8d86c06f0bb5e1190983e5acfc9ba
-config_hash: 68337b532875626269c304372a669f67
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-24be531010b354303d741fc9247c1f84f75978f9f7de68aca92cb4f240a04722.yml
+openapi_spec_hash: 3e46f439f6a863beadc71577eb4efa15
+config_hash: ed87b9139ac595a04a2162d754df2fed
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,13 @@
 # Changelog
 
+## 1.100.0 (2025-08-18)
+
+Full Changelog: [v1.99.9...v1.100.0](https://github.com/openai/openai-python/compare/v1.99.9...v1.100.0)
+
+### Features
+
+* **api:** add new text parameters, expiration options ([e3dfa7c](https://github.com/openai/openai-python/commit/e3dfa7c417b8c750ff62d98650e75e72ad9b1477))
+
 ## 1.99.9 (2025-08-12)
 
 Full Changelog: [v1.99.8...v1.99.9](https://github.com/openai/openai-python/compare/v1.99.8...v1.99.9)
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "openai"
-version = "1.99.9"
+version = "1.100.0"
 description = "The official Python library for the openai API"
 dynamic = ["readme"]
 license = "Apache-2.0"
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
 # File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.
 
 __title__ = "openai"
-__version__ = "1.99.9" # x-release-please-version
+__version__ = "1.100.0" # x-release-please-version
10 changes: 10 additions & 0 deletions src/openai/resources/batches.py
@@ -49,6 +49,7 @@ def create(
 endpoint: Literal["/v1/responses", "/v1/chat/completions", "/v1/embeddings", "/v1/completions"],
 input_file_id: str,
 metadata: Optional[Metadata] | NotGiven = NOT_GIVEN,
+output_expires_after: batch_create_params.OutputExpiresAfter | NotGiven = NOT_GIVEN,
 # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
 # The extra values given here take precedence over values defined on the client or passed to this method.
 extra_headers: Headers | None = None,
@@ -85,6 +86,9 @@ def create(
 Keys are strings with a maximum length of 64 characters. Values are strings with
 a maximum length of 512 characters.
 
+output_expires_after: The expiration policy for the output and/or error file that are generated for a
+batch.
+
 extra_headers: Send extra headers
 
 extra_query: Add additional query parameters to the request
@@ -101,6 +105,7 @@
 "endpoint": endpoint,
 "input_file_id": input_file_id,
 "metadata": metadata,
+"output_expires_after": output_expires_after,
 },
 batch_create_params.BatchCreateParams,
 ),
@@ -259,6 +264,7 @@ async def create(
 endpoint: Literal["/v1/responses", "/v1/chat/completions", "/v1/embeddings", "/v1/completions"],
 input_file_id: str,
 metadata: Optional[Metadata] | NotGiven = NOT_GIVEN,
+output_expires_after: batch_create_params.OutputExpiresAfter | NotGiven = NOT_GIVEN,
 # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
 # The extra values given here take precedence over values defined on the client or passed to this method.
 extra_headers: Headers | None = None,
@@ -295,6 +301,9 @@ async def create(
 Keys are strings with a maximum length of 64 characters. Values are strings with
 a maximum length of 512 characters.
 
+output_expires_after: The expiration policy for the output and/or error file that are generated for a
+batch.
+
 extra_headers: Send extra headers
 
 extra_query: Add additional query parameters to the request
@@ -311,6 +320,7 @@
 "endpoint": endpoint,
 "input_file_id": input_file_id,
 "metadata": metadata,
+"output_expires_after": output_expires_after,
 },
 batch_create_params.BatchCreateParams,
 ),
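The new `output_expires_after` parameter threads through both the sync and async `create` signatures above. Below is a minimal sketch of the request body it produces; the `anchor`/`seconds` layout is an assumption borrowed from the file-expiration policy used elsewhere in the API, so check `batch_create_params.OutputExpiresAfter` for the authoritative fields, and note the file ID is hypothetical.

```python
# Sketch: the JSON body that batches.create(...) would serialize when the
# new output_expires_after parameter is set. The anchor/seconds layout is
# an assumption; the input file ID below is hypothetical.
ONE_DAY = 24 * 60 * 60


def batch_create_body(input_file_id: str, expire_seconds: int) -> dict:
    """Build a batch-create request body with an output expiration policy."""
    return {
        "endpoint": "/v1/chat/completions",
        "input_file_id": input_file_id,
        "output_expires_after": {"anchor": "created_at", "seconds": expire_seconds},
    }


body = batch_create_body("file-abc123", ONE_DAY)
```

Leaving the parameter as `NOT_GIVEN` (the default) simply omits the field from the request, preserving the previous behavior.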
8 changes: 4 additions & 4 deletions src/openai/resources/beta/realtime/realtime.py
@@ -652,8 +652,8 @@ def cancel(self, *, event_id: str | NotGiven = NOT_GIVEN, response_id: str | Not
 """Send this event to cancel an in-progress response.
 
 The server will respond
-with a `response.cancelled` event or an error if there is no response to
-cancel.
+with a `response.done` event with a status of `response.status=cancelled`. If
+there is no response to cancel, the server will respond with an error.
 """
 self._connection.send(
 cast(
@@ -904,8 +904,8 @@ async def cancel(self, *, event_id: str | NotGiven = NOT_GIVEN, response_id: str
 """Send this event to cancel an in-progress response.
 
 The server will respond
-with a `response.cancelled` event or an error if there is no response to
-cancel.
+with a `response.done` event with a status of `response.status=cancelled`. If
+there is no response to cancel, the server will respond with an error.
 """
 await self._connection.send(
 cast(
4 changes: 2 additions & 2 deletions src/openai/resources/beta/realtime/sessions.py
@@ -152,7 +152,7 @@ def create(
 set to `null` to turn off, in which case the client must manually trigger model
 response. Server VAD means that the model will detect the start and end of
 speech based on audio volume and respond at the end of user speech. Semantic VAD
-is more advanced and uses a turn detection model (in conjuction with VAD) to
+is more advanced and uses a turn detection model (in conjunction with VAD) to
 semantically estimate whether the user has finished speaking, then dynamically
 sets a timeout based on this probability. For example, if user audio trails off
 with "uhhm", the model will score a low probability of turn end and wait longer
@@ -334,7 +334,7 @@ async def create(
 set to `null` to turn off, in which case the client must manually trigger model
 response. Server VAD means that the model will detect the start and end of
 speech based on audio volume and respond at the end of user speech. Semantic VAD
-is more advanced and uses a turn detection model (in conjuction with VAD) to
+is more advanced and uses a turn detection model (in conjunction with VAD) to
 semantically estimate whether the user has finished speaking, then dynamically
 sets a timeout based on this probability. For example, if user audio trails off
 with "uhhm", the model will score a low probability of turn end and wait longer
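The three turn-detection modes this docstring describes, written out as `turn_detection` config fragments. This is a sketch: the field names beyond `type` (such as `threshold`, `silence_duration_ms`, and `eagerness`) are assumptions to verify against the realtime session reference.

```python
# Turn detection off: the client must trigger each model response manually.
no_vad = None

# Server VAD: start/end of speech detected from audio volume alone; the
# model responds at the end of user speech.
server_vad = {
    "type": "server_vad",
    "threshold": 0.5,             # assumed field: activation threshold
    "silence_duration_ms": 500,   # assumed field: silence that ends a turn
}

# Semantic VAD: a turn-detection model (in conjunction with VAD) scores
# whether the user is done speaking and stretches the response timeout
# when the turn seems unfinished (e.g. trailing "uhhm").
semantic_vad = {
    "type": "semantic_vad",
    "eagerness": "low",           # assumed field: wait longer before replying
}
```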
@@ -96,7 +96,7 @@ def create(
 set to `null` to turn off, in which case the client must manually trigger model
 response. Server VAD means that the model will detect the start and end of
 speech based on audio volume and respond at the end of user speech. Semantic VAD
-is more advanced and uses a turn detection model (in conjuction with VAD) to
+is more advanced and uses a turn detection model (in conjunction with VAD) to
 semantically estimate whether the user has finished speaking, then dynamically
 sets a timeout based on this probability. For example, if user audio trails off
 with "uhhm", the model will score a low probability of turn end and wait longer
@@ -209,7 +209,7 @@ async def create(
 set to `null` to turn off, in which case the client must manually trigger model
 response. Server VAD means that the model will detect the start and end of
 speech based on audio volume and respond at the end of user speech. Semantic VAD
-is more advanced and uses a turn detection model (in conjuction with VAD) to
+is more advanced and uses a turn detection model (in conjunction with VAD) to
 semantically estimate whether the user has finished speaking, then dynamically
 sets a timeout based on this probability. For example, if user audio trails off
 with "uhhm", the model will score a low probability of turn end and wait longer
12 changes: 6 additions & 6 deletions src/openai/resources/beta/threads/runs/runs.py
@@ -220,7 +220,7 @@ def create(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 
@@ -370,7 +370,7 @@ def create(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 
@@ -520,7 +520,7 @@ def create(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 
@@ -1650,7 +1650,7 @@ async def create(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 
@@ -1800,7 +1800,7 @@ async def create(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 
@@ -1950,7 +1950,7 @@ async def create(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 
12 changes: 6 additions & 6 deletions src/openai/resources/beta/threads/threads.py
@@ -393,7 +393,7 @@ def create_and_run(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 
@@ -527,7 +527,7 @@ def create_and_run(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 
@@ -661,7 +661,7 @@ def create_and_run(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 
@@ -1251,7 +1251,7 @@ async def create_and_run(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 
@@ -1385,7 +1385,7 @@ async def create_and_run(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 
@@ -1519,7 +1519,7 @@ async def create_and_run(
 We generally recommend altering this or temperature but not both.
 
 truncation_strategy: Controls for how a thread will be truncated prior to the run. Use this to
-control the intial context window of the run.
+control the initial context window of the run.
 
 extra_headers: Send extra headers
 