Caching: Use reentrant locks, don't discard callables (or any other unhashable object in the future) from the cache key #1905

Merged
24 commits merged on Dec 8, 2024
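
For readers skimming the diff below, here is a minimal sketch of the two ideas in the PR title, assuming a simple dict-backed in-memory cache. The names (_cache, make_key, cached) are illustrative, not DSPy's actual implementation: a reentrant lock lets a cached call re-enter the cache on the same thread without deadlocking, and callables are folded into the cache key as a stable token instead of being discarded.

import functools
import threading
from typing import Any, Callable

# Illustrative sketch only, not the PR's actual code.
_cache: dict[tuple, Any] = {}
_lock = threading.RLock()  # RLock, not Lock: safe to re-acquire on the same thread

def _key_part(value: Any) -> Any:
    if callable(value):
        # Represent the callable with a stable token rather than dropping it,
        # so requests differing only in their callables get distinct keys.
        # (Qualified names can collide for distinct lambdas; a real
        # implementation would need something stronger.)
        return ("<callable>", getattr(value, "__qualname__", repr(value)))
    return value

def make_key(*args: Any, **kwargs: Any) -> tuple:
    return (
        tuple(_key_part(a) for a in args),
        tuple(sorted((k, _key_part(v)) for k, v in kwargs.items())),
    )

def cached(fn: Callable) -> Callable:
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        key = (fn.__qualname__, make_key(*args, **kwargs))
        # The lock is held while fn runs; if fn itself calls another
        # @cached function on this thread, an ordinary Lock would
        # deadlock here, but an RLock re-acquires cleanly.
        with _lock:
            if key in _cache:
                return _cache[key]
            result = fn(*args, **kwargs)
            _cache[key] = result
            return result
    return wrapper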
fix
Signed-off-by: dbczumar <corey.zumar@databricks.com>
dbczumar committed Dec 8, 2024

Verified: this commit was created on GitHub.com and signed with GitHub's verified signature (the key has expired).
commit 2952810a995c949de04ead645aa4502da4769da3
2 changes: 1 addition & 1 deletion tests/clients/test_lm.py
@@ -59,7 +59,7 @@ def test_lm_calls_support_callables(litellm_test_server):
         # Define a callable kwarg for the LM to use during inference
         azure_ad_token_provider=lambda *args, **kwargs: None,
     )
-    # Invoke the LM twice
+    # Invoke the LM twice; the second call should be cached in memory
     lm_with_callable("Query")
     lm_with_callable("Query")
