LiteLLM Minor Fixes & Improvements (11/19/2024) (#6820)
* fix(anthropic/chat/transformation.py): add json schema as values: json_schema

fixes passing pydantic obj to anthropic

Fixes #6766

* (feat): Add timestamp_granularities parameter to transcription API (#6457)

* Add timestamp_granularities parameter to transcription API

* add param to the local test

* fix(databricks/chat.py): handle max_retries optional param handling for openai-like calls

Fixes issue with calling finetuned vertex ai models via databricks route

* build(ui/): add team admins via proxy ui

* fix: fix linting error

* test: fix test

* docs(vertex.md): refactor docs

* test: handle overloaded anthropic model error

* test: remove duplicate test

* test: fix test

* test: update test to handle model overloaded error

---------

Co-authored-by: Show <35062952+BrunooShow@users.noreply.github.com>
2 people authored and ishaan-jaff committed Nov 22, 2024
1 parent 216e7f5 commit 1223394
Showing 11 changed files with 146 additions and 143 deletions.
181 changes: 90 additions & 91 deletions docs/my-website/docs/providers/vertex.md
@@ -572,6 +572,96 @@ Here's how to use Vertex AI with the LiteLLM Proxy Server

</Tabs>


## Authentication - vertex_project, vertex_location, etc.

Set your vertex credentials via:
- dynamic params
OR
- env vars


### **Dynamic Params**

You can set:
- `vertex_credentials` (str) - a JSON string or the filepath of your Vertex AI service_account.json
- `vertex_location` (str) - region where the vertex model is deployed (us-central1, asia-southeast1, etc.)
- `vertex_project` (Optional[str]) - use if the vertex project is different from the one in `vertex_credentials`

as dynamic params for a `litellm.completion` call.

<Tabs>
<TabItem value="sdk" label="SDK">

```python
from litellm import completion
import json

## GET CREDENTIALS
file_path = 'path/to/vertex_ai_service_account.json'

# Load the JSON file
with open(file_path, 'r') as file:
    vertex_credentials = json.load(file)

# Convert to JSON string
vertex_credentials_json = json.dumps(vertex_credentials)


response = completion(
    model="vertex_ai/gemini-pro",
    messages=[{"content": "You are a good bot.", "role": "system"}, {"content": "Hello, how are you?", "role": "user"}],
    vertex_credentials=vertex_credentials_json,
    vertex_project="my-special-project",
    vertex_location="my-special-location"
)
```

</TabItem>
<TabItem value="proxy" label="PROXY">

```yaml
model_list:
  - model_name: gemini-1.5-pro
    litellm_params:
      model: gemini-1.5-pro
      vertex_credentials: os.environ/VERTEX_FILE_PATH_ENV_VAR # os.environ["VERTEX_FILE_PATH_ENV_VAR"] = "/path/to/service_account.json"
      vertex_project: "my-special-project"
      vertex_location: "my-special-location"
```
</TabItem>
</Tabs>
### **Environment Variables**
You can set:
- `GOOGLE_APPLICATION_CREDENTIALS` - filepath to your service_account.json (used by the vertex SDK directly).
- `VERTEXAI_LOCATION` - region where the vertex model is deployed (us-central1, asia-southeast1, etc.)
- `VERTEXAI_PROJECT` (Optional[str]) - use if the vertex project is different from the one in your service account credentials
1. GOOGLE_APPLICATION_CREDENTIALS
```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service_account.json"
```
2. VERTEXAI_LOCATION
```bash
export VERTEXAI_LOCATION="us-central1" # can be any vertex location
```

3. VERTEXAI_PROJECT

```bash
export VERTEXAI_PROJECT="my-test-project" # ONLY use if model project is different from service account project
```
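
With the environment variables above set, no `vertex_*` params are needed on the call itself. A minimal sketch (model name is illustrative):

```python
from litellm import completion

# Assumes GOOGLE_APPLICATION_CREDENTIALS, VERTEXAI_LOCATION and (optionally)
# VERTEXAI_PROJECT are exported as shown above.
response = completion(
    model="vertex_ai/gemini-pro",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```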


## Specifying Safety Settings
In certain use-cases you may need to make calls to the models and pass [safety settings](https://ai.google.dev/docs/safety_setting_gemini) different from the defaults. To do so, simply pass the `safety_settings` argument to `completion` or `acompletion`. For example:
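
A minimal sketch, assuming the standard Gemini category/threshold strings (see the linked docs for the full list):

```python
from litellm import completion

# Illustrative thresholds only; pick values from the Gemini safety settings docs.
response = completion(
    model="vertex_ai/gemini-pro",
    messages=[{"role": "user", "content": "write a short story about a cat"}],
    safety_settings=[
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
        {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_ONLY_HIGH"},
    ],
)
```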

@@ -2303,97 +2393,6 @@ print("response from proxy", response)
</TabItem>
</Tabs>



## Authentication - vertex_project, vertex_location, etc.

Set your vertex credentials via:
- dynamic params
OR
- env vars


### **Dynamic Params**

You can set:
- `vertex_credentials` (str) - a JSON string or the filepath of your Vertex AI service_account.json
- `vertex_location` (str) - region where the vertex model is deployed (us-central1, asia-southeast1, etc.)
- `vertex_project` (Optional[str]) - use if the vertex project is different from the one in `vertex_credentials`

as dynamic params for a `litellm.completion` call.

<Tabs>
<TabItem value="sdk" label="SDK">

```python
from litellm import completion
import json

## GET CREDENTIALS
file_path = 'path/to/vertex_ai_service_account.json'

# Load the JSON file
with open(file_path, 'r') as file:
    vertex_credentials = json.load(file)

# Convert to JSON string
vertex_credentials_json = json.dumps(vertex_credentials)


response = completion(
    model="vertex_ai/gemini-pro",
    messages=[{"content": "You are a good bot.", "role": "system"}, {"content": "Hello, how are you?", "role": "user"}],
    vertex_credentials=vertex_credentials_json,
    vertex_project="my-special-project",
    vertex_location="my-special-location"
)
```

</TabItem>
<TabItem value="proxy" label="PROXY">

```yaml
model_list:
  - model_name: gemini-1.5-pro
    litellm_params:
      model: gemini-1.5-pro
      vertex_credentials: os.environ/VERTEX_FILE_PATH_ENV_VAR # os.environ["VERTEX_FILE_PATH_ENV_VAR"] = "/path/to/service_account.json"
      vertex_project: "my-special-project"
      vertex_location: "my-special-location"
```
</TabItem>
</Tabs>
### **Environment Variables**
You can set:
- `GOOGLE_APPLICATION_CREDENTIALS` - filepath to your service_account.json (used by the vertex SDK directly).
- `VERTEXAI_LOCATION` - region where the vertex model is deployed (us-central1, asia-southeast1, etc.)
- `VERTEXAI_PROJECT` (Optional[str]) - use if the vertex project is different from the one in your service account credentials
1. GOOGLE_APPLICATION_CREDENTIALS
```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service_account.json"
```
2. VERTEXAI_LOCATION
```bash
export VERTEXAI_LOCATION="us-central1" # can be any vertex location
```

3. VERTEXAI_PROJECT

```bash
export VERTEXAI_PROJECT="my-test-project" # ONLY use if model project is different from service account project
```


## Extra

### Using `GOOGLE_APPLICATION_CREDENTIALS`
3 changes: 3 additions & 0 deletions litellm/llms/databricks/chat.py
@@ -470,6 +470,9 @@ def completion(
optional_params[k] = v

stream: bool = optional_params.get("stream", None) or False
optional_params.pop(
"max_retries", None
) # [TODO] add max retry support at llm api call level
optional_params["stream"] = stream

data = {
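For context, a rough sketch of the kind of call this guards: the OpenAI-like (databricks-style) handler now pops `max_retries` client-side instead of forwarding it to the endpoint. The endpoint ID and project below are placeholders.

```python
from litellm import completion

# Hypothetical fine-tuned Vertex AI model served through the OpenAI-compatible route.
response = completion(
    model="vertex_ai/openai/1234567890123456789",  # placeholder endpoint ID
    messages=[{"role": "user", "content": "Hello!"}],
    vertex_project="my-project",      # placeholder
    vertex_location="us-central1",
    max_retries=3,  # previously forwarded in the provider payload and broke the call
)
```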
2 changes: 2 additions & 0 deletions litellm/main.py
@@ -4729,6 +4729,7 @@ def transcription(
    response_format: Optional[
        Literal["json", "text", "srt", "verbose_json", "vtt"]
    ] = None,
    timestamp_granularities: Optional[List[Literal["word", "segment"]]] = None,
    temperature: Optional[int] = None,  # openai defaults this to 0
    ## LITELLM PARAMS ##
    user: Optional[str] = None,
@@ -4778,6 +4779,7 @@
        language=language,
        prompt=prompt,
        response_format=response_format,
        timestamp_granularities=timestamp_granularities,
        temperature=temperature,
        custom_llm_provider=custom_llm_provider,
        drop_params=drop_params,
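A quick sketch of the new parameter in use (file path is a placeholder; word-level timestamps require `verbose_json` on OpenAI):

```python
from litellm import transcription

# Assumes OPENAI_API_KEY is set and speech.mp3 exists locally.
audio_file = open("speech.mp3", "rb")

result = transcription(
    model="whisper-1",
    file=audio_file,
    response_format="verbose_json",
    timestamp_granularities=["word"],  # new param from this commit
)
print(result.text)
```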
1 change: 1 addition & 0 deletions litellm/utils.py
@@ -2125,6 +2125,7 @@ def get_optional_params_transcription(
    prompt: Optional[str] = None,
    response_format: Optional[str] = None,
    temperature: Optional[int] = None,
    timestamp_granularities: Optional[List[Literal["word", "segment"]]] = None,
    custom_llm_provider: Optional[str] = None,
    drop_params: Optional[bool] = None,
    **kwargs,
2 changes: 1 addition & 1 deletion tests/llm_translation/test_anthropic_completion.py
@@ -657,7 +657,7 @@ def test_create_json_tool_call_for_response_format():
    _input_schema = tool.get("input_schema")
    assert _input_schema is not None
    assert _input_schema.get("type") == "object"
    assert _input_schema.get("properties") == custom_schema
    assert _input_schema.get("properties") == {"values": custom_schema}
    assert "additionalProperties" not in _input_schema


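For context, a rough sketch of the user-facing call this test covers (model name is illustrative): a Pydantic class passed as `response_format` is converted into a JSON-schema tool, with the user schema now nested under a `values` key.

```python
from pydantic import BaseModel
from litellm import completion

class Joke(BaseModel):
    setup: str
    punchline: str

# The Pydantic class becomes a json_schema tool under the hood; note the
# assertion above: properties == {"values": custom_schema}.
response = completion(
    model="claude-3-5-sonnet-20240620",  # illustrative Anthropic model
    messages=[{"role": "user", "content": "Tell me a joke"}],
    response_format=Joke,
)
```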
12 changes: 11 additions & 1 deletion tests/llm_translation/test_optional_params.py
@@ -923,14 +923,14 @@ def test_watsonx_text_top_k():
    assert optional_params["top_k"] == 10



def test_together_ai_model_params():
    optional_params = get_optional_params(
        model="together_ai", custom_llm_provider="together_ai", logprobs=1
    )
    print(optional_params)
    assert optional_params["logprobs"] == 1


def test_forward_user_param():
    from litellm.utils import get_supported_openai_params, get_optional_params

@@ -942,3 +942,13 @@ def test_forward_user_param():
    )

    assert optional_params["metadata"]["user_id"] == "test_user"


def test_lm_studio_embedding_params():
    optional_params = get_optional_params_embeddings(
        model="lm_studio/gemma2-9b-it",
        custom_llm_provider="lm_studio",
        dimensions=1024,
        drop_params=True,
    )
    assert len(optional_params) == 0
6 changes: 5 additions & 1 deletion tests/local_testing/test_amazing_vertex_completion.py
@@ -3129,9 +3129,12 @@ async def test_vertexai_embedding_finetuned(respx_mock: MockRouter):
assert all(isinstance(x, float) for x in embedding["embedding"])


@pytest.mark.parametrize("max_retries", [None, 3])
@pytest.mark.asyncio
@pytest.mark.respx
async def test_vertexai_model_garden_model_completion(respx_mock: MockRouter):
async def test_vertexai_model_garden_model_completion(
respx_mock: MockRouter, max_retries
):
"""
Relevant issue: https://github.com/BerriAI/litellm/issues/6480
@@ -3189,6 +3192,7 @@ async def test_vertexai_model_garden_model_completion(respx_mock: MockRouter):
messages=messages,
vertex_project="633608382793",
vertex_location="us-central1",
max_retries=max_retries,
)

# Assert request was made correctly
57 changes: 18 additions & 39 deletions tests/local_testing/test_completion.py
@@ -1222,32 +1222,6 @@ def test_completion_mistral_api_modified_input():
pytest.fail(f"Error occurred: {e}")


def test_completion_claude2_1():
try:
litellm.set_verbose = True
print("claude2.1 test request")
messages = [
{
"role": "system",
"content": "Your goal is generate a joke on the topic user gives.",
},
{"role": "user", "content": "Generate a 3 liner joke for me"},
]
# test without max tokens
response = completion(model="claude-2.1", messages=messages)
# Add any assertions here to check the response
print(response)
print(response.usage)
print(response.usage.completion_tokens)
print(response["usage"]["completion_tokens"])
# print("new cost tracking")
except Exception as e:
pytest.fail(f"Error occurred: {e}")


# test_completion_claude2_1()


@pytest.mark.asyncio
async def test_acompletion_claude2_1():
try:
@@ -1268,6 +1242,8 @@ async def test_acompletion_claude2_1():
print(response.usage.completion_tokens)
print(response["usage"]["completion_tokens"])
# print("new cost tracking")
except litellm.InternalServerError:
pytest.skip("model is overloaded.")
except Exception as e:
pytest.fail(f"Error occurred: {e}")

@@ -4514,19 +4490,22 @@ async def test_dynamic_azure_params(stream, sync_mode):
@pytest.mark.flaky(retries=3, delay=1)
async def test_completion_ai21_chat():
litellm.set_verbose = True
response = await litellm.acompletion(
model="jamba-1.5-large",
user="ishaan",
tool_choice="auto",
seed=123,
messages=[{"role": "user", "content": "what does the document say"}],
documents=[
{
"content": "hello world",
"metadata": {"source": "google", "author": "ishaan"},
}
],
)
try:
response = await litellm.acompletion(
model="jamba-1.5-large",
user="ishaan",
tool_choice="auto",
seed=123,
messages=[{"role": "user", "content": "what does the document say"}],
documents=[
{
"content": "hello world",
"metadata": {"source": "google", "author": "ishaan"},
}
],
)
except litellm.InternalServerError:
pytest.skip("Model is overloaded")


@pytest.mark.parametrize(
10 changes: 8 additions & 2 deletions tests/local_testing/test_whisper.py
@@ -51,17 +51,23 @@
),
],
)
@pytest.mark.parametrize("response_format", ["json", "vtt"])
@pytest.mark.parametrize(
"response_format, timestamp_granularities",
[("json", None), ("vtt", None), ("verbose_json", ["word"])],
)
@pytest.mark.parametrize("sync_mode", [True, False])
@pytest.mark.asyncio
async def test_transcription(model, api_key, api_base, response_format, sync_mode):
async def test_transcription(
model, api_key, api_base, response_format, sync_mode, timestamp_granularities
):
if sync_mode:
transcript = litellm.transcription(
model=model,
file=audio_file,
api_key=api_key,
api_base=api_base,
response_format=response_format,
timestamp_granularities=timestamp_granularities,
drop_params=True,
)
else: