[Bug]: Static type checking issues with completion and acompletion methods. #8304

Open
miraclebakelaser opened this issue Feb 6, 2025 · 0 comments
Labels
bug Something isn't working

@miraclebakelaser (Contributor)

What happened?

Issue:

When litellm is used with pyright, the type checker reports errors for common completion and acompletion call patterns.

Examples:

Dictionary-style access type error:

import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    max_tokens=10,
)

print(response["choices"][0]["message"]["content"])
# Error: "__getitem__" method not defined on type "CustomStreamWrapper"
# (variable) response: ModelResponse | CustomStreamWrapper
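
A possible workaround (a sketch, not an official fix) is to narrow the ModelResponse | CustomStreamWrapper union before indexing; pyright then accepts the dictionary-style access. The litellm.types.utils import path reflects the current package layout and may differ across versions:

import litellm
from litellm.types.utils import ModelResponse

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    max_tokens=10,
)

# Narrow ModelResponse | CustomStreamWrapper to ModelResponse so that
# pyright accepts the dictionary-style __getitem__ access below.
assert isinstance(response, ModelResponse)
print(response["choices"][0]["message"]["content"])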

Attribute access type error:

print(response.choices[0].message.content)
# Error: Cannot access attribute "choices" for class "CustomStreamWrapper"
#   Attribute "choices" is unknown
# (variable) choices: List[Choices | StreamingChoices] | Unknown
#   The list of completion choices the model generated for the input prompt.
# Error: Cannot access attribute "message" for class "StreamingChoices"
#   Attribute "message" is unknown
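
The same narrowing works for attribute access, with a second isinstance check to separate Choices (which carries .message) from StreamingChoices (which carries .delta instead). Again a sketch; the Choices import path is an assumption based on the current layout:

import litellm
from litellm.types.utils import Choices, ModelResponse

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"content": "Hello, how are you?", "role": "user"}],
    max_tokens=10,
)

# First narrow the response union, then narrow the choice variant:
# choices is typed List[Choices | StreamingChoices], and only the
# non-streaming Choices variant has a .message attribute.
assert isinstance(response, ModelResponse)
choice = response.choices[0]
assert isinstance(choice, Choices)
print(choice.message.content)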

Streaming iteration type error:

import asyncio
import traceback

from litellm import acompletion

async def completion_call():
    try:
        print("test completion + streaming")
        response = await acompletion(
            model="gpt-3.5-turbo",
            messages=[{"content": "Hello, how are you?", "role": "user"}],
            stream=True,
        )
        async for chunk in response:
            # Error: "ModelResponse" is not iterable
            #   "__aiter__" method not defined
            # (variable) response: ModelResponse | CustomStreamWrapper
            print(chunk)
    except Exception:
        print(f"error occurred: {traceback.format_exc()}")

asyncio.run(completion_call())
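
For the streaming case, narrowing to CustomStreamWrapper before the async for loop lets the __aiter__ usage type-check. A minimal sketch, assuming CustomStreamWrapper is importable from litellm.utils (adjust the path if your version re-exports it elsewhere):

import asyncio

from litellm import acompletion
from litellm.utils import CustomStreamWrapper

async def completion_call():
    response = await acompletion(
        model="gpt-3.5-turbo",
        messages=[{"content": "Hello, how are you?", "role": "user"}],
        stream=True,
    )
    # With stream=True the runtime value is a CustomStreamWrapper; the
    # isinstance check narrows the union so "async for" type-checks.
    assert isinstance(response, CustomStreamWrapper)
    async for chunk in response:
        print(chunk)

asyncio.run(completion_call())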

Related Issues:
#2006

Relevant log output

Are you a ML Ops Team?

No

What LiteLLM version are you on ?

v1.60.5

Twitter / LinkedIn details

No response

@miraclebakelaser miraclebakelaser added the bug Something isn't working label Feb 6, 2025