release: 0.32.0 #604

Merged · 9 commits · Jul 29, 2024
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.31.2"
".": "0.32.0"
}
2 changes: 1 addition & 1 deletion .stats.yml
@@ -1,2 +1,2 @@
configured_endpoints: 2
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/anthropic-e2a51f04a202c13736b6fa2061a89a0c443f99ab166d965d702baf371eb1ca8f.yml
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/anthropic-5903ec2fd4efd7f261908bc4ec8ecd6b19cb9efa79637ad273583f1b763f80fd.yml
27 changes: 27 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,32 @@
# Changelog

## 0.32.0 (2024-07-29)

Full Changelog: [v0.31.2...v0.32.0](https://github.com/anthropics/anthropic-sdk-python/compare/v0.31.2...v0.32.0)

### Features

* add back compat alias for InputJsonDelta ([25a5b6c](https://github.com/anthropics/anthropic-sdk-python/commit/25a5b6c81ffb5996ef697aab22a22d8be5751bc1))


### Bug Fixes

* change signatures for the stream function ([c9eb11b](https://github.com/anthropics/anthropic-sdk-python/commit/c9eb11b1f9656202ee88e9869e59160bc37f5434))
* **client:** correctly apply client level timeout for messages ([#615](https://github.com/anthropics/anthropic-sdk-python/issues/615)) ([5f8d88f](https://github.com/anthropics/anthropic-sdk-python/commit/5f8d88f6fcc2ba05cd9fc6f8ae7aa8c61dc6b0d0))


### Chores

* **docs:** document how to do per-request http client customization ([#603](https://github.com/anthropics/anthropic-sdk-python/issues/603)) ([5161f62](https://github.com/anthropics/anthropic-sdk-python/commit/5161f626a0bec757b96217dc0f81e8908546f29a))
* **internal:** add type construction helper ([#613](https://github.com/anthropics/anthropic-sdk-python/issues/613)) ([5e36940](https://github.com/anthropics/anthropic-sdk-python/commit/5e36940a42e401c3f0c1e42aa248d431fdf7192c))
* sync spec ([#605](https://github.com/anthropics/anthropic-sdk-python/issues/605)) ([6b7707f](https://github.com/anthropics/anthropic-sdk-python/commit/6b7707f62788fca2e166209e82935a2a2fa8204a))
* **tests:** update prism version ([#607](https://github.com/anthropics/anthropic-sdk-python/issues/607)) ([1797dc6](https://github.com/anthropics/anthropic-sdk-python/commit/1797dc6139ffaca6436ed897972471e67ba1b828))


### Refactors

* extract model out to a named type and rename partialjson ([#612](https://github.com/anthropics/anthropic-sdk-python/issues/612)) ([c53efc7](https://github.com/anthropics/anthropic-sdk-python/commit/c53efc786fa95831a398f37740a81b42f7b64c94))

## 0.31.2 (2024-07-17)

Full Changelog: [v0.31.1...v0.31.2](https://github.com/anthropics/anthropic-sdk-python/compare/v0.31.1...v0.31.2)
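The `InputJsonDelta` → `InputJSONDelta` rename called out under Refactors ships with the back-compat alias listed under Features, so both spellings stay importable. A minimal sketch of the intended compatibility, assuming the alias is a plain re-binding of the renamed class:

```python
# Back-compat sketch for the rename in this release: the old name is
# assumed to be re-exported as an alias of the new class, so imports and
# isinstance checks written against InputJsonDelta keep working.
from anthropic.types import InputJSONDelta, InputJsonDelta

assert InputJsonDelta is InputJSONDelta  # same class under both names
```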
6 changes: 6 additions & 0 deletions README.md
@@ -508,6 +508,12 @@ client = Anthropic(
)
```

You can also customize the client on a per-request basis by using `with_options()`:

```python
client.with_options(http_client=DefaultHttpxClient(...))
```

### Managing HTTP resources

By default the library closes underlying HTTP connections whenever the client is [garbage collected](https://docs.python.org/3/reference/datamodel.html#object.__del__). You can manually close the client using the `.close()` method if desired, or with a context manager that closes when exiting.
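Worth noting alongside the README addition: `with_options()` returns a configured copy of the client rather than mutating it, so the override only applies to calls made through the returned instance. A sketch of the new snippet in fuller context (the model name and timeout are illustrative):

```python
import httpx

from anthropic import Anthropic, DefaultHttpxClient

client = Anthropic()

# The copy returned by with_options() carries the custom HTTP client;
# `client` itself keeps its original configuration.
message = client.with_options(
    http_client=DefaultHttpxClient(timeout=httpx.Timeout(30.0)),
).messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude"}],
)
print(message.content)
```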
3 changes: 2 additions & 1 deletion api.md
@@ -9,14 +9,15 @@ from anthropic.types import (
ContentBlockStartEvent,
ContentBlockStopEvent,
ImageBlockParam,
InputJsonDelta,
InputJSONDelta,
Message,
MessageDeltaEvent,
MessageDeltaUsage,
MessageParam,
MessageStartEvent,
MessageStopEvent,
MessageStreamEvent,
Model,
RawContentBlockDeltaEvent,
RawContentBlockStartEvent,
RawContentBlockStopEvent,
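This hunk records two type-level changes at once: the `InputJsonDelta` casing fix and a newly exported `Model` type backing the "extract model out to a named type" refactor (its request-side counterpart, `ModelParam`, appears in the completions diff below). A rough reconstruction of what the named alias replaces, using the union members visible in the old signatures; the real definition lives in `anthropic.types` and is generated from the OpenAPI spec:

```python
from typing import Union

from typing_extensions import Literal, TypeAlias

# Illustrative only: the member list is taken from the inline union that
# this PR removes from the completions signatures.
Model: TypeAlias = Union[str, Literal["claude-2.0", "claude-2.1", "claude-instant-1.2"]]
```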
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "anthropic"
version = "0.31.2"
version = "0.32.0"
description = "The official Python library for the anthropic API"
dynamic = ["readme"]
license = "MIT"
4 changes: 2 additions & 2 deletions scripts/mock
@@ -21,7 +21,7 @@ echo "==> Starting mock server with URL ${URL}"

# Run prism mock on the given spec
if [ "$1" == "--daemon" ]; then
npm exec --package=@stoplight/prism-cli@~5.8 -- prism mock "$URL" &> .prism.log &
npm exec --package=@stainless-api/prism-cli@5.8.4 -- prism mock "$URL" &> .prism.log &

# Wait for server to come online
echo -n "Waiting for server"
@@ -37,5 +37,5 @@ if [ "$1" == "--daemon" ]; then

echo
else
npm exec --package=@stoplight/prism-cli@~5.8 -- prism mock "$URL"
npm exec --package=@stainless-api/prism-cli@5.8.4 -- prism mock "$URL"
fi
9 changes: 9 additions & 0 deletions src/anthropic/_models.py
@@ -406,6 +406,15 @@ def build(
return cast(_BaseModelT, construct_type(type_=base_model_cls, value=kwargs))


def construct_type_unchecked(*, value: object, type_: type[_T]) -> _T:
"""Loose coercion to the expected type with construction of nested values.

Note: the returned value from this function is not guaranteed to match the
given type.
"""
return cast(_T, construct_type(value=value, type_=type_))


def construct_type(*, value: object, type_: object) -> object:
"""Loose coercion to the expected type with construction of nested values.

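The new `construct_type_unchecked` helper is a thin, typed wrapper over `construct_type`: internal callers get a return value statically typed as `_T` without any runtime validation, and the docstring's caveat is load-bearing because the cast is never checked. A minimal usage sketch with a hypothetical model:

```python
from anthropic._models import BaseModel, construct_type_unchecked

class Usage(BaseModel):  # hypothetical model, for illustration only
    input_tokens: int
    output_tokens: int

# Statically typed as Usage, but nothing verifies the payload actually
# matches; a malformed dict would be returned just as happily.
usage = construct_type_unchecked(
    value={"input_tokens": 10, "output_tokens": 25},
    type_=Usage,
)
print(usage.output_tokens)  # 25
```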
2 changes: 1 addition & 1 deletion src/anthropic/_utils/_reflection.py
@@ -34,7 +34,7 @@ def assert_signatures_in_sync(

if custom_param.annotation != source_param.annotation:
errors.append(
f"types for the `{name}` param are do not match; source={repr(source_param.annotation)} checking={repr(source_param.annotation)}"
f"types for the `{name}` param are do not match; source={repr(source_param.annotation)} checking={repr(custom_param.annotation)}"
)
continue

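This one-line fix repairs a copy-paste bug: the old f-string interpolated `source_param.annotation` into both placeholders, so a genuine mismatch printed two identical types and the diagnostic said nothing useful. A fabricated before/after, purely for illustration:

```python
# Stand-in annotations; in the real code these come from inspect.Signature.
source_annotation, custom_annotation = int, str

old_msg = f"source={source_annotation!r} checking={source_annotation!r}"  # bug: same value twice
new_msg = f"source={source_annotation!r} checking={custom_annotation!r}"  # fixed

print(old_msg)  # source=<class 'int'> checking=<class 'int'>  (mismatch hidden)
print(new_msg)  # source=<class 'int'> checking=<class 'str'>  (mismatch visible)
```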
2 changes: 1 addition & 1 deletion src/anthropic/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "anthropic"
__version__ = "0.31.2" # x-release-please-version
__version__ = "0.32.0" # x-release-please-version
71 changes: 36 additions & 35 deletions src/anthropic/resources/completions.py
@@ -2,7 +2,7 @@

from __future__ import annotations

from typing import List, Union, overload
from typing import List, overload
from typing_extensions import Literal

import httpx
@@ -11,16 +11,19 @@
from ..types import completion_create_params
from .._types import NOT_GIVEN, Body, Query, Headers, NotGiven
from .._utils import (
is_given,
required_args,
maybe_transform,
async_maybe_transform,
)
from .._compat import cached_property
from .._resource import SyncAPIResource, AsyncAPIResource
from .._response import to_streamed_response_wrapper, async_to_streamed_response_wrapper
from .._constants import DEFAULT_TIMEOUT
from .._streaming import Stream, AsyncStream
from .._base_client import make_request_options
from ..types.completion import Completion
from ..types.model_param import ModelParam

__all__ = ["Completions", "AsyncCompletions"]

@@ -39,7 +42,7 @@ def create(
self,
*,
max_tokens_to_sample: int,
model: Union[str, Literal["claude-2.0", "claude-2.1", "claude-instant-1.2"]],
model: ModelParam,
prompt: str,
metadata: completion_create_params.Metadata | NotGiven = NOT_GIVEN,
stop_sequences: List[str] | NotGiven = NOT_GIVEN,
@@ -52,7 +55,7 @@ def create(
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = 600,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Completion:
"""[Legacy] Create a Text Completion.

@@ -71,9 +74,8 @@ def create(
Note that our models may stop _before_ reaching this maximum. This parameter
only specifies the absolute maximum number of tokens to generate.

model: The model that will complete your prompt.

See [models](https://docs.anthropic.com/en/docs/models-overview) for additional
model: The model that will complete your prompt.\n\nSee
[models](https://docs.anthropic.com/en/docs/models-overview) for additional
details and options.

prompt: The prompt that you want Claude to complete.
@@ -144,7 +146,7 @@ def create(
self,
*,
max_tokens_to_sample: int,
model: Union[str, Literal["claude-2.0", "claude-2.1", "claude-instant-1.2"]],
model: ModelParam,
prompt: str,
stream: Literal[True],
metadata: completion_create_params.Metadata | NotGiven = NOT_GIVEN,
@@ -157,7 +159,7 @@ def create(
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = 600,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Stream[Completion]:
"""[Legacy] Create a Text Completion.

@@ -176,9 +178,8 @@ def create(
Note that our models may stop _before_ reaching this maximum. This parameter
only specifies the absolute maximum number of tokens to generate.

model: The model that will complete your prompt.

See [models](https://docs.anthropic.com/en/docs/models-overview) for additional
model: The model that will complete your prompt.\n\nSee
[models](https://docs.anthropic.com/en/docs/models-overview) for additional
details and options.

prompt: The prompt that you want Claude to complete.
@@ -249,7 +250,7 @@ def create(
self,
*,
max_tokens_to_sample: int,
model: Union[str, Literal["claude-2.0", "claude-2.1", "claude-instant-1.2"]],
model: ModelParam,
prompt: str,
stream: bool,
metadata: completion_create_params.Metadata | NotGiven = NOT_GIVEN,
Expand All @@ -262,7 +263,7 @@ def create(
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = 600,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Completion | Stream[Completion]:
"""[Legacy] Create a Text Completion.

Expand All @@ -281,9 +282,8 @@ def create(
Note that our models may stop _before_ reaching this maximum. This parameter
only specifies the absolute maximum number of tokens to generate.

model: The model that will complete your prompt.

See [models](https://docs.anthropic.com/en/docs/models-overview) for additional
model: The model that will complete your prompt.\n\nSee
[models](https://docs.anthropic.com/en/docs/models-overview) for additional
details and options.

prompt: The prompt that you want Claude to complete.
@@ -354,7 +354,7 @@ def create(
self,
*,
max_tokens_to_sample: int,
model: Union[str, Literal["claude-2.0", "claude-2.1", "claude-instant-1.2"]],
model: ModelParam,
prompt: str,
metadata: completion_create_params.Metadata | NotGiven = NOT_GIVEN,
stop_sequences: List[str] | NotGiven = NOT_GIVEN,
Expand All @@ -367,8 +367,10 @@ def create(
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = 600,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Completion | Stream[Completion]:
if not is_given(timeout) and self._client.timeout == DEFAULT_TIMEOUT:
timeout = 600
return self._post(
"/v1/complete",
body=maybe_transform(
@@ -408,7 +410,7 @@ async def create(
self,
*,
max_tokens_to_sample: int,
model: Union[str, Literal["claude-2.0", "claude-2.1", "claude-instant-1.2"]],
model: ModelParam,
prompt: str,
metadata: completion_create_params.Metadata | NotGiven = NOT_GIVEN,
stop_sequences: List[str] | NotGiven = NOT_GIVEN,
Expand All @@ -421,7 +423,7 @@ async def create(
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = 600,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Completion:
"""[Legacy] Create a Text Completion.

@@ -440,9 +442,8 @@ async def create(
Note that our models may stop _before_ reaching this maximum. This parameter
only specifies the absolute maximum number of tokens to generate.

model: The model that will complete your prompt.

See [models](https://docs.anthropic.com/en/docs/models-overview) for additional
model: The model that will complete your prompt.\n\nSee
[models](https://docs.anthropic.com/en/docs/models-overview) for additional
details and options.

prompt: The prompt that you want Claude to complete.
@@ -513,7 +514,7 @@ async def create(
self,
*,
max_tokens_to_sample: int,
model: Union[str, Literal["claude-2.0", "claude-2.1", "claude-instant-1.2"]],
model: ModelParam,
prompt: str,
stream: Literal[True],
metadata: completion_create_params.Metadata | NotGiven = NOT_GIVEN,
@@ -526,7 +527,7 @@ async def create(
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = 600,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> AsyncStream[Completion]:
"""[Legacy] Create a Text Completion.

@@ -545,9 +546,8 @@ async def create(
Note that our models may stop _before_ reaching this maximum. This parameter
only specifies the absolute maximum number of tokens to generate.

model: The model that will complete your prompt.

See [models](https://docs.anthropic.com/en/docs/models-overview) for additional
model: The model that will complete your prompt.\n\nSee
[models](https://docs.anthropic.com/en/docs/models-overview) for additional
details and options.

prompt: The prompt that you want Claude to complete.
@@ -618,7 +618,7 @@ async def create(
self,
*,
max_tokens_to_sample: int,
model: Union[str, Literal["claude-2.0", "claude-2.1", "claude-instant-1.2"]],
model: ModelParam,
prompt: str,
stream: bool,
metadata: completion_create_params.Metadata | NotGiven = NOT_GIVEN,
@@ -631,7 +631,7 @@ async def create(
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = 600,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Completion | AsyncStream[Completion]:
"""[Legacy] Create a Text Completion.

@@ -650,9 +650,8 @@ async def create(
Note that our models may stop _before_ reaching this maximum. This parameter
only specifies the absolute maximum number of tokens to generate.

model: The model that will complete your prompt.

See [models](https://docs.anthropic.com/en/docs/models-overview) for additional
model: The model that will complete your prompt.\n\nSee
[models](https://docs.anthropic.com/en/docs/models-overview) for additional
details and options.

prompt: The prompt that you want Claude to complete.
@@ -723,7 +722,7 @@ async def create(
self,
*,
max_tokens_to_sample: int,
model: Union[str, Literal["claude-2.0", "claude-2.1", "claude-instant-1.2"]],
model: ModelParam,
prompt: str,
metadata: completion_create_params.Metadata | NotGiven = NOT_GIVEN,
stop_sequences: List[str] | NotGiven = NOT_GIVEN,
Expand All @@ -736,8 +735,10 @@ async def create(
extra_headers: Headers | None = None,
extra_query: Query | None = None,
extra_body: Body | None = None,
timeout: float | httpx.Timeout | None | NotGiven = 600,
timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
) -> Completion | AsyncStream[Completion]:
if not is_given(timeout) and self._client.timeout == DEFAULT_TIMEOUT:
timeout = 600
return await self._post(
"/v1/complete",
body=await async_maybe_transform(
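Two changes repeat across every overload in this file: the inline `Union[str, Literal[...]]` annotation for `model` becomes the named `ModelParam`, and the hard-coded `timeout=600` defaults become `NOT_GIVEN`. The new guard at the top of each non-overloaded `create` then applies the legacy 600-second completions timeout only when the caller passed no per-request timeout and the client itself is still on `DEFAULT_TIMEOUT`, which is what finally lets a client-level timeout take effect (the #615 fix). A condensed sketch of that resolution logic, reusing the helper names visible in the diff (`resolve_timeout` itself is ours):

```python
from __future__ import annotations

import httpx

from anthropic._constants import DEFAULT_TIMEOUT
from anthropic._types import NotGiven
from anthropic._utils import is_given

LEGACY_COMPLETIONS_TIMEOUT = 600  # seconds; the value comes from the diff above

def resolve_timeout(
    per_request: float | httpx.Timeout | None | NotGiven,
    client_timeout: float | httpx.Timeout | None,
) -> float | httpx.Timeout | None | NotGiven:
    # Respect anything the caller configured, per request or client-wide;
    # fall back to the legacy default only when both are untouched.
    if not is_given(per_request) and client_timeout == DEFAULT_TIMEOUT:
        return LEGACY_COMPLETIONS_TIMEOUT
    return per_request
```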