What happened?
We're testing the LiteLLM proxy server to handle fallbacks for us, and we're running into a problem when deleting a fallback rule and then recreating it.
config.yaml
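Our exact config isn't reproduced here, but a minimal sketch of the kind of config.yaml involved might look like the following (the model names are taken from the error log below; the provider prefixes and API key placeholders are illustrative, not our exact setup):

```yaml
model_list:
  - model_name: gemini-pro
    litellm_params:
      model: gemini/gemini-pro
      api_key: os.environ/GEMINI_API_KEY
  - model_name: gpt-4o-mini
    litellm_params:
      model: gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY

router_settings:
  # Route failed gemini-pro calls to gpt-4o-mini
  fallbacks: [{"gemini-pro": ["gpt-4o-mini"]}]
```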
How to replicate:
In UI panel >> router settings >> add fallbacks >> test
Success
In UI panel >> router settings >> delete fallbacks >> add (updated) fallback >> test
Error
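For reference, the same delete-and-recreate cycle can be approximated outside the UI. This is a hypothetical sketch using litellm's Router; the mock_testing_fallbacks flag is what produces the "mock exception ... to trigger a fallback" line in the log below, and the stale empty dict in the second fallback list mirrors what the log reports after the UI cycle:

```python
# Hypothetical repro sketch, assuming litellm's Router API and the
# mock_testing_fallbacks flag that emits the mock error seen in the log.
from litellm import Router

model_list = [
    {"model_name": "gemini-pro", "litellm_params": {"model": "gemini/gemini-pro"}},
    {"model_name": "gpt-4o-mini", "litellm_params": {"model": "gpt-4o-mini"}},
]

# First pass: a clean fallback list -- the test succeeds.
router = Router(model_list=model_list, fallbacks=[{"gemini-pro": ["gpt-4o-mini"]}])

# After the UI delete/re-add cycle, the router appears to be left with a stale
# empty dict at the head of the list, exactly as the log shows:
#   Fallbacks=[{}, {'gemini-pro': ['gpt-4o-mini']}]
router = Router(model_list=model_list, fallbacks=[{}, {"gemini-pro": ["gpt-4o-mini"]}])

resp = router.completion(
    model="gemini-pro",
    messages=[{"role": "user", "content": "ping"}],
    mock_testing_fallbacks=True,  # force a mock failure so the fallback path runs
)
```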
Relevant log output

Error occurred while generating model response. Please try again. Error: Error: 500 litellm.InternalServerError: This is a mock exception for model=gemini-pro, to trigger a fallback. Fallbacks=[{}, {'gemini-pro': ['gpt-4o-mini']}] Received Model Group=gemini-pro Available Model Group Fallbacks=None Error doing the fallback: list index out of range
Are you an ML Ops Team?
Yes
What LiteLLM version are you on?
latest
Twitter / LinkedIn details
No response