upgrade litellm 1.52.8 for Snyk (#360)
* upgrade litellm 1.52.8

* separate litellm test in test workflow

* test openai compatibility in CI

* create separate magentic perplexity test and use log10 callback

* tmp skip anthropic completions test

* try to show color for pytest in gh action

* switch to perplexity chat model in tests

* test only run test_magentic_perplexity.py

* add 3sec delay to check logs

* found that magentic perplexity tests pass when run on their own, so run them separately from the rest of the tests

* ignore test_magentic_perplexity.py

* run test_magentic_perplexity.py separately

* pass perplexity model with openai_compatibility_model
wenzhe-log10 authored Nov 20, 2024
1 parent 636c2a9 commit 331015c
Showing 5 changed files with 58 additions and 12 deletions.
10 changes: 6 additions & 4 deletions .github/workflows/test.yml
@@ -58,6 +58,7 @@ jobs:
GOOGLE_API_KEY : ${{ secrets.GOOGLE_API_KEY }}
PERPLEXITYAI_API_KEY: ${{ secrets.PERPLEXITYAI_API_KEY }}
CEREBRAS_API_KEY: ${{ secrets.CEREBRAS_API_KEY }}
PYTEST_ADDOPTS: "--color=yes"
steps:
- uses: actions/checkout@v4
- name: Install poetry
@@ -131,15 +132,16 @@ jobs:
if $empty_inputs; then
echo "All variables are empty"
poetry run pytest -vv tests/ --ignore=tests/test_cli.py
poetry run pytest -vv tests/ --ignore=tests/test_cli.py --ignore=tests/test_litellm.py --ignore=tests/test_magentic_perplexity.py
poetry run pytest -vv tests/test_litellm.py
poetry run pytest --llm_provider=anthropic -vv tests/test_magentic.py
poetry run pytest --llm_provider=litellm --openai_compatibility_model=perplexity/llama-3.1-sonar-small-128k-chat -vv tests/test_magentic.py -m chat
poetry run pytest tests/test_magentic_perplexity.py -vv
fi
- name: Run scheduled llm tests
if: ${{ github.event_name == 'schedule' }}
run: |
echo "This is a schedule event"
poetry run pytest -vv tests/ --ignore=tests/test_cli.py
poetry run pytest -vv tests/ --ignore=tests/test_cli.py --ignore=tests/test_litellm.py --ignore=tests/test_magentic_perplexity.py
poetry run pytest --openai_model=gpt-4o -m chat -vv tests/test_openai.py
poetry run pytest --llm_provider=litellm --openai_compatibility_model=perplexity/llama-3.1-sonar-small-128k-chat -vv tests/test_magentic.py -m chat
poetry run pytest tests/test_magentic_perplexity.py -vv
14 changes: 7 additions & 7 deletions poetry.lock

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion pyproject.toml
@@ -46,7 +46,7 @@ pytest-metadata = ">=1.0.0"
langchain = {version = "^0.2.10", optional = true}
langchain-community = {version = "^0.2.19", optional = true}
magentic = {version = ">=0.17.0", optional = true, markers = "python_version >= '3.10'"}
litellm = {version = "^1.41.12", optional = true}
litellm = {version = ">=1.49.6", optional = true}
lamini = {version = "^2.1.8", optional = true}
google-cloud-aiplatform = {version = ">=1.44.0", optional = true}
mistralai = {version = "^0.1.5", optional = true}
5 changes: 5 additions & 0 deletions tests/conftest.py
@@ -86,6 +86,11 @@ def magentic_models(request):
}


@pytest.fixture
def openai_compatibility_model(request):
return request.config.getoption("--openai_compatibility_model")


@pytest.fixture
def session():
with log10_session() as session:
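The new `openai_compatibility_model` fixture reads a pytest command-line option, which the workflow passes as `--openai_compatibility_model=perplexity/...`. The option registration itself is outside this hunk; below is a minimal sketch of how such an option is typically declared in `tests/conftest.py`. The default value is only an assumption here, mirroring the model used in the workflow above.

```python
import pytest


def pytest_addoption(parser):
    # Register the CLI option the fixture below reads; the default mirrors
    # the model passed in the CI workflow (assumed, not shown in this diff).
    parser.addoption(
        "--openai_compatibility_model",
        action="store",
        default="perplexity/llama-3.1-sonar-small-128k-chat",
        help="OpenAI-compatible model name passed to LitellmChatModel in tests",
    )


@pytest.fixture
def openai_compatibility_model(request):
    # Fixture from the diff: surfaces the option value to individual tests.
    return request.config.getoption("--openai_compatibility_model")
```

With this in place, `pytest --openai_compatibility_model=some/other-model` swaps the model for every test that requests the fixture, without touching test code.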
39 changes: 39 additions & 0 deletions tests/test_magentic_perplexity.py
@@ -0,0 +1,39 @@
import time

import litellm
import pytest
from magentic import StreamedStr, prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel

from log10.litellm import Log10LitellmLogger
from tests.utils import _LogAssertion


log10_handler = Log10LitellmLogger(tags=["litellm_perplexity"])
litellm.callbacks = [log10_handler]


@pytest.mark.chat
def test_prompt(session, openai_compatibility_model):
@prompt("What happened on this day?", model=LitellmChatModel(model=openai_compatibility_model))
def llm() -> str: ...

output = llm()
assert isinstance(output, str)

time.sleep(3)

_LogAssertion(completion_id=session.last_completion_id(), message_content=output).assert_chat_response()


@pytest.mark.chat
@pytest.mark.stream
def test_prompt_stream(session, openai_compatibility_model):
@prompt("What happened on this day?", model=LitellmChatModel(model=openai_compatibility_model))
def llm() -> StreamedStr: ...

response = llm()
output = ""
for chunk in response:
output += chunk
_LogAssertion(completion_id=session.last_completion_id(), message_content=output).assert_chat_response()
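The fixed `time.sleep(3)` in `test_prompt` (the "add 3sec delay to check logs" commit) gives the asynchronous log10 callback time to flush before the log assertion runs. A more robust pattern, sketched here as a hypothetical helper that is not part of this commit, is to poll for the condition with a timeout instead of sleeping a fixed interval:

```python
import time


def wait_for(predicate, timeout=10.0, interval=0.5):
    """Poll `predicate` until it returns truthy or `timeout` seconds elapse.

    Returns True as soon as the predicate succeeds, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


# Usage sketch (hypothetical): block until the completion is logged,
# instead of hoping 3 seconds is always enough.
# assert wait_for(lambda: session.last_completion_id() is not None)
```

This keeps the fast path fast (the loop exits as soon as the log appears) while tolerating slower CI runs up to the timeout.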
