AnthropicVertex stream chat generation is taking too much time #564

Closed
DhruvThu opened this issue Jun 27, 2024 · 7 comments

@DhruvThu

Recently, I started using AnthropicVertex instead of the direct Anthropic client. When I generate data through the AnthropicVertex client, it takes around 2s before streaming starts, whereas the direct Anthropic client does not take this long. The 2s figure also varies: sometimes it is much larger, around 6-10s, and in the worst case up to 20s. Is there some kind of queue involved? I am using the same code given in the Vertex AI Anthropic notebook to generate responses. Is there any workaround to get responses as fast as with direct Anthropic? If someone could guide me on this, it would be really helpful.

Thanks !!

@aaron-lerner aaron-lerner self-assigned this Jun 28, 2024
@aaron-lerner
Contributor

Hey @DhruvThu, can you share a few ids from the responses you get back on vertex requests? Or share a few request ids? This will help us debug.

@DhruvThu
Author

DhruvThu commented Jun 29, 2024

Thanks for responding. I am using the streaming response from Vertex Anthropic, and these are some of the message IDs I got in the first chunk: msg_01D2jNpu4rUZMXUvwtpipMnx, msg_01CMjRdPAhDQaWELbrgSirS8

@aaron-lerner
Contributor

Hmm, message IDs from Vertex should look like msg_vrtx_.... The IDs you shared are from the direct (1P) Anthropic API.

@DhruvThu
Author

DhruvThu commented Jul 2, 2024

Could you check this one? msg_vrtx_01AaDL52fwpTrqFftLMxxQ1e. Sorry about the previous ones. For this message, it took around 2.4s for streaming to start.
The streaming response through the direct Anthropic API took around 0.89s; its message ID is msg_01AWgnspZ2w5NhzE92uL7VZ9.

The code I am using is as follows,

import time

import google.auth
from anthropic import Anthropic, AnthropicVertex


class AnthropicLLM:
    def __init__(self, anthropic_client: Anthropic, anthropic_vertex_client: AnthropicVertex) -> None:
        self.anthropic = anthropic_client
        credentials, project_id = google.auth.load_credentials_from_dict(
            google_credentials_info,
            scopes=["https://www.googleapis.com/auth/cloud-platform"],
        )
        anthropic_vertex_client._credentials = credentials
        self.vertex_anthropic = anthropic_vertex_client
        self.messages = Messages(self)


class Messages:
    def __init__(self, client: AnthropicLLM) -> None:
        self.client = client

    def create(self, model: str, messages: list[Message], temperature: float, system: str,
               stream: bool, max_tokens: int, tool_choice: str, tools: list[dict]):
        model = model.replace("@", "-")
        st = time.time()
        if not tools:
            response = self.client.vertex_anthropic.messages.create(
                model=model,
                messages=messages,
                temperature=temperature,
                system=system,
                stream=True,
                max_tokens=max_tokens,
            )
            print(response)
            print(time.time() - st)
            return response
        else:
            return self.client.vertex_anthropic.messages.create(
                model=model,
                messages=messages,
                temperature=temperature,
                system=system,
                stream=stream,
                max_tokens=max_tokens,
                tool_choice=tool_choice,
                tools=tools,
            )
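One subtlety worth ruling out when comparing the two clients: the `print(time.time() - st)` in the snippet above stops the clock when `messages.create()` returns the stream object, not necessarily when the first content chunk arrives. A small helper like the following (an illustrative sketch, not part of the SDK) measures the time to the first streamed event for any iterable stream:

```python
import time
from typing import Iterable, Iterator, Tuple


def time_to_first_event(stream: Iterable) -> Tuple[float, Iterator]:
    """Measure latency until the first event of a stream arrives.

    Returns the elapsed seconds plus an iterator that replays the first
    event followed by the remainder of the stream, so no data is lost.
    """
    it = iter(stream)
    start = time.perf_counter()
    first = next(it)  # blocks until the first event is produced
    elapsed = time.perf_counter() - start

    def replay():
        yield first
        yield from it

    return elapsed, replay()
```

Wrapping both the Vertex and the direct streams with this helper gives directly comparable time-to-first-chunk numbers.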

@RobertCraigie
Collaborator

Hey @DhruvThu, we've identified the root cause of this issue. While we work on a fix, you can work around it by explicitly passing an access_token, e.g. AnthropicVertex(access_token=access_token)
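The workaround could be sketched as follows; this is not official SDK guidance, and it assumes the google-auth package is installed, application-default credentials are configured, and the region is a placeholder to adjust for your deployment:

```python
def make_vertex_client():
    """Build an AnthropicVertex client with a pre-fetched access token.

    Fetching and refreshing the token once up front means the SDK does
    not have to resolve credentials itself on the hot path.
    """
    import google.auth
    import google.auth.transport.requests
    from anthropic import AnthropicVertex

    credentials, project_id = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"]
    )
    # Refresh once so credentials.token is populated and valid.
    credentials.refresh(google.auth.transport.requests.Request())

    return AnthropicVertex(
        access_token=credentials.token,
        project_id=project_id,
        region="us-east5",  # placeholder region, adjust as needed
    )
```

Reusing the returned client for subsequent requests keeps the pre-fetched token in play; note that an explicitly passed token is not auto-refreshed, so long-lived processes would need to rebuild the client when it expires.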

@DhruvThu
Author

DhruvThu commented Jul 4, 2024

Hey, thanks for the response. I'll try with an access token.

@RobertCraigie
Collaborator

RobertCraigie commented Jul 8, 2024

This will be fixed in the next release, v0.30.2 (#573).

Note that you will still see a delay on the very first request made with an AnthropicVertex instance, since we need to fetch the access token, but subsequent requests will reuse the cached token.
