release: 1.15.0 #1274

Merged
merged 7 commits on Apr 1, 2024
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "1.14.3"
".": "1.15.0"
}
25 changes: 25 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,30 @@
# Changelog

## 1.15.0 (2024-03-31)

Full Changelog: [v1.14.3...v1.15.0](https://github.com/openai/openai-python/compare/v1.14.3...v1.15.0)

### Features

* **api:** adding temperature parameter ([#1282](https://github.com/openai/openai-python/issues/1282)) ([0e68fd3](https://github.com/openai/openai-python/commit/0e68fd3690155785d1fb0ee9a8604f51e6701b1d))
* **client:** increase default HTTP max_connections to 1000 and max_keepalive_connections to 100 ([#1281](https://github.com/openai/openai-python/issues/1281)) ([340d139](https://github.com/openai/openai-python/commit/340d1391e3071a265ed12c0a8d70d4d73a860bd8))
* **package:** export default constants ([#1275](https://github.com/openai/openai-python/issues/1275)) ([fdc126e](https://github.com/openai/openai-python/commit/fdc126e428320f1bed5eabd3eed229f08ab9effa))


### Bug Fixes

* **project:** use absolute github links on PyPi ([#1280](https://github.com/openai/openai-python/issues/1280)) ([94cd528](https://github.com/openai/openai-python/commit/94cd52837650e5b7e115119d69e6b1c7ba1f6bf1))


### Chores

* **internal:** bump dependencies ([#1273](https://github.com/openai/openai-python/issues/1273)) ([18dcd65](https://github.com/openai/openai-python/commit/18dcd654d9f54628b5fe21a499d1fef500e15f7f))


### Documentation

* **readme:** change undocumented params wording ([#1284](https://github.com/openai/openai-python/issues/1284)) ([7498ef1](https://github.com/openai/openai-python/commit/7498ef1e9568200086ba3efb99ea100feb05e3f0))

## 1.14.3 (2024-03-25)

Full Changelog: [v1.14.2...v1.14.3](https://github.com/openai/openai-python/compare/v1.14.2...v1.14.3)
4 changes: 2 additions & 2 deletions README.md
@@ -511,12 +511,12 @@ response = client.post(
print(response.headers.get("x-foo"))
```

#### Undocumented params
#### Undocumented request params

If you want to explicitly send an extra param, you can do so with the `extra_query`, `extra_body`, and `extra_headers` request
options.
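A minimal sketch of how these options are passed; the parameter names and values used in `extra_query`, `extra_body`, and `extra_headers` below are made-up examples, not documented API surface:

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    # appended to the query string of the request URL
    extra_query={"my_query_param": "value"},
    # merged into the JSON request body
    extra_body={"my_undocumented_param": True},
    # merged into the request headers
    extra_headers={"x-my-header": "value"},
)
```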

#### Undocumented properties
#### Undocumented response properties

To access undocumented response properties, you can access the extra fields like `response.unknown_prop`. You
can also get all the extra fields on the Pydantic model as a dict with
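For context, a small sketch of the access pattern described above; `new_field` stands in for a hypothetical extra field the API might return, and `model_extra` is the standard Pydantic v2 property holding all such fields:

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)

# Fields the API returned that aren't in the typed model are still accessible.
print(completion.new_field)    # hypothetical undocumented property
print(completion.model_extra)  # all extra fields as a dict (Pydantic v2)
```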
17 changes: 14 additions & 3 deletions pyproject.toml
@@ -1,8 +1,8 @@
[project]
name = "openai"
version = "1.14.3"
version = "1.15.0"
description = "The official Python library for the openai API"
readme = "README.md"
dynamic = ["readme"]
license = "Apache-2.0"
authors = [
{ name = "OpenAI", email = "[email protected]" },
@@ -93,7 +93,7 @@ typecheck = { chain = [
"typecheck:mypy" = "mypy ."

[build-system]
requires = ["hatchling"]
requires = ["hatchling", "hatch-fancy-pypi-readme"]
build-backend = "hatchling.build"

[tool.hatch.build]
@@ -104,6 +104,17 @@ include = [
[tool.hatch.build.targets.wheel]
packages = ["src/openai"]

[tool.hatch.metadata.hooks.fancy-pypi-readme]
content-type = "text/markdown"

[[tool.hatch.metadata.hooks.fancy-pypi-readme.fragments]]
path = "README.md"

[[tool.hatch.metadata.hooks.fancy-pypi-readme.substitutions]]
# replace relative links with absolute links
pattern = '\[(.+?)\]\(((?!https?://)\S+?)\)'
replacement = '[\1](https://github.com/openai/openai-python/tree/main/\g<2>)'
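To make the substitution concrete, a standalone sketch that applies the same pattern with Python's `re` module rather than the hatch-fancy-pypi-readme plugin; the sample markdown line is made up:

```python
import re

# Same pattern/replacement as the fancy-pypi-readme fragment above.
pattern = r"\[(.+?)\]\(((?!https?://)\S+?)\)"
replacement = r"[\1](https://github.com/openai/openai-python/tree/main/\g<2>)"

sample = "See [the examples](examples/demo.py) and [the site](https://example.com)."
print(re.sub(pattern, replacement, sample))
# The relative link is rewritten to an absolute GitHub URL; the already-absolute
# link is left alone because of the negative lookahead.
```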

[tool.black]
line-length = 120
target-version = ["py37"]
4 changes: 2 additions & 2 deletions requirements-dev.lock
@@ -22,7 +22,7 @@ attrs==23.1.0
azure-core==1.30.1
# via azure-identity
azure-identity==1.15.0
black==24.2.0
black==24.3.0
# via inline-snapshot
certifi==2023.7.22
# via httpcore
@@ -67,7 +67,7 @@ importlib-metadata==7.0.0
iniconfig==2.0.0
# via pytest
inline-snapshot==0.7.0
msal==1.27.0
msal==1.28.0
# via azure-identity
# via msal-extensions
msal-extensions==1.1.0
6 changes: 3 additions & 3 deletions requirements.lock
@@ -33,15 +33,15 @@ numpy==1.26.4
# via openai
# via pandas
# via pandas-stubs
pandas==2.2.0
pandas==2.2.1
# via openai
pandas-stubs==2.2.0.240218
pandas-stubs==2.2.1.240316
# via openai
pydantic==2.4.2
# via openai
pydantic-core==2.10.1
# via pydantic
python-dateutil==2.8.2
python-dateutil==2.9.0.post0
# via pandas
pytz==2024.1
# via pandas
4 changes: 4 additions & 0 deletions src/openai/__init__.py
@@ -12,6 +12,7 @@
from ._models import BaseModel
from ._version import __title__, __version__
from ._response import APIResponse as APIResponse, AsyncAPIResponse as AsyncAPIResponse
from ._constants import DEFAULT_TIMEOUT, DEFAULT_MAX_RETRIES, DEFAULT_CONNECTION_LIMITS
from ._exceptions import (
APIError,
OpenAIError,
@@ -63,6 +64,9 @@
"AsyncOpenAI",
"file_from_path",
"BaseModel",
"DEFAULT_TIMEOUT",
"DEFAULT_MAX_RETRIES",
"DEFAULT_CONNECTION_LIMITS",
]

from .lib import azure as _azure
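With this change the defaults can be imported straight from the package root; a quick sketch (the values noted in the comments reflect `_constants.py` further down in this diff):

```python
import openai

print(openai.DEFAULT_MAX_RETRIES)        # 2
print(openai.DEFAULT_TIMEOUT)            # httpx.Timeout: 600s overall, 5s connect
print(openai.DEFAULT_CONNECTION_LIMITS)  # httpx.Limits: 1000 connections, 100 keepalive
```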
6 changes: 3 additions & 3 deletions src/openai/_base_client.py
@@ -71,13 +71,13 @@
extract_response_type,
)
from ._constants import (
DEFAULT_LIMITS,
DEFAULT_TIMEOUT,
MAX_RETRY_DELAY,
DEFAULT_MAX_RETRIES,
INITIAL_RETRY_DELAY,
RAW_RESPONSE_HEADER,
OVERRIDE_CAST_TO_HEADER,
DEFAULT_CONNECTION_LIMITS,
)
from ._streaming import Stream, SSEDecoder, AsyncStream, SSEBytesDecoder
from ._exceptions import (
@@ -747,7 +747,7 @@ def __init__(
if http_client is not None:
raise ValueError("The `http_client` argument is mutually exclusive with `connection_pool_limits`")
else:
limits = DEFAULT_LIMITS
limits = DEFAULT_CONNECTION_LIMITS

if transport is not None:
warnings.warn(
@@ -1294,7 +1294,7 @@ def __init__(
if http_client is not None:
raise ValueError("The `http_client` argument is mutually exclusive with `connection_pool_limits`")
else:
limits = DEFAULT_LIMITS
limits = DEFAULT_CONNECTION_LIMITS

if transport is not None:
warnings.warn(
2 changes: 1 addition & 1 deletion src/openai/_constants.py
@@ -8,7 +8,7 @@
# default timeout is 10 minutes
DEFAULT_TIMEOUT = httpx.Timeout(timeout=600.0, connect=5.0)
DEFAULT_MAX_RETRIES = 2
DEFAULT_LIMITS = httpx.Limits(max_connections=100, max_keepalive_connections=20)
DEFAULT_CONNECTION_LIMITS = httpx.Limits(max_connections=1000, max_keepalive_connections=100)

INITIAL_RETRY_DELAY = 0.5
MAX_RETRY_DELAY = 8.0
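If the new, larger pool defaults aren't suitable, a custom `httpx.Client` can be supplied instead; a hedged sketch, with arbitrary limit values:

```python
import httpx
from openai import OpenAI

# Sketch: shrink the connection pool back down for a constrained environment.
client = OpenAI(
    http_client=httpx.Client(
        limits=httpx.Limits(max_connections=50, max_keepalive_connections=10),
    ),
)
```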
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "openai"
__version__ = "1.14.3" # x-release-please-version
__version__ = "1.15.0" # x-release-please-version
22 changes: 16 additions & 6 deletions src/openai/resources/beta/threads/messages/messages.py
@@ -52,7 +52,7 @@ def create(
thread_id: str,
*,
content: str,
role: Literal["user"],
role: Literal["user", "assistant"],
file_ids: List[str] | NotGiven = NOT_GIVEN,
metadata: Optional[object] | NotGiven = NOT_GIVEN,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -68,8 +68,13 @@
Args:
content: The content of the message.

role: The role of the entity that is creating the message. Currently only `user` is
supported.
role:
The role of the entity that is creating the message. Allowed values include:

- `user`: Indicates the message is sent by an actual user and should be used in
most cases to represent user-generated messages.
- `assistant`: Indicates the message is generated by the assistant. Use this
value to insert messages from the assistant into the conversation.

file_ids: A list of [File](https://platform.openai.com/docs/api-reference/files) IDs that
the message should use. There can be a maximum of 10 files attached to a
@@ -276,7 +281,7 @@ async def create(
thread_id: str,
*,
content: str,
role: Literal["user"],
role: Literal["user", "assistant"],
file_ids: List[str] | NotGiven = NOT_GIVEN,
metadata: Optional[object] | NotGiven = NOT_GIVEN,
# Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
@@ -292,8 +297,13 @@
Args:
content: The content of the message.

role: The role of the entity that is creating the message. Currently only `user` is
supported.
role:
The role of the entity that is creating the message. Allowed values include:

- `user`: Indicates the message is sent by an actual user and should be used in
most cases to represent user-generated messages.
- `assistant`: Indicates the message is generated by the assistant. Use this
value to insert messages from the assistant into the conversation.

file_ids: A list of [File](https://platform.openai.com/docs/api-reference/files) IDs that
the message should use. There can be a maximum of 10 files attached to a
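A short sketch of the newly allowed `assistant` role when seeding a thread; the thread ID and message content are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# "thread_abc123" is a placeholder; use a real thread ID in practice.
message = client.beta.threads.messages.create(
    thread_id="thread_abc123",
    content="Here is the answer I gave earlier.",
    role="assistant",  # previously only "user" was accepted
)
```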