Add LiteLLM+instructor (for structured output) backend for curator #141
Conversation
Works for Claude
…at I get the response and completion objects.
…tribution' into CURATOR-28-add-a-lite-llm-backend-for-curator
…ewer-distribution' into CURATOR-28-add-a-lite-llm-backend-for-curator
#159 has been merged, so costs are now appropriately logged. litellm also supports cost logging now.
TODO
Need to add a better default timeout.
Request review / approval @RyanMarten @vutrung96
src/bespokelabs/curator/request_processor/base_online_request_processor.py
src/bespokelabs/curator/request_processor/openai_online_request_processor.py
Small changes - let me know when they are addressed and I'll do another review
src/bespokelabs/curator/request_processor/base_online_request_processor.py
LGTM!
Closes #74
Closes #179
Closes #164
Changes:
- Added example recipes `/examples/litellm_recipe_prompting.py` and `/examples/litellm_recipe_structured_output.py` (note: OpenAI and Anthropic keys need to be set in the environment).
- Added a `backend` parameter to choose the request processor when using `Prompter` (code link); right now it defaults to OpenAI.
- Cost logging via `litellm.completion_cost` (if the model's cost is in the community-maintained mapping here); see the first sketch after this list.
- Added `estimate_total_tokens`, which includes `estimate_output_tokens`, which derives from `get_max_tokens` to get the max output tokens of the specified model (code link); see the second sketch after this list.
- Reads the `x-ratelimit-limit-requests` and `x-ratelimit-limit-tokens` response headers for rpm and tpm; see the third sketch after this list.
- `OnlineRequestProcessor` (see `base_online_request_processor.py` in the reviewed files above).
- `check_structured_output_support`; see the fourth sketch after this list.
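For reference, a minimal sketch of the cost-logging idea using litellm's `completion_cost` helper; the model name here is only illustrative:

```python
import litellm

# One normal completion call routed through litellm.
response = litellm.completion(
    model="claude-3-5-sonnet-20240620",
    messages=[{"role": "user", "content": "Say hello."}],
)

# completion_cost prices the prompt + completion tokens using litellm's
# community-maintained model cost mapping; it raises if the model is
# missing from that mapping, so the call is guarded.
try:
    cost = litellm.completion_cost(completion_response=response)
    print(f"request cost: ${cost:.6f}")
except Exception:
    print("model not in litellm's cost mapping; skipping cost logging")
```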
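The helper bodies aren't shown in this thread, so this is only a sketch of the estimation chain described above, built on litellm's real `get_max_tokens` and `token_counter`; the two `estimate_*` function bodies are assumptions:

```python
import litellm

def estimate_output_tokens(model: str) -> int:
    # Assumed shape: take the model's maximum output tokens as a
    # conservative bound on the completion size.
    return litellm.get_max_tokens(model)

def estimate_total_tokens(model: str, messages: list[dict]) -> int:
    # Count input tokens from the actual messages, then add the
    # estimated output tokens from above.
    input_tokens = litellm.token_counter(model=model, messages=messages)
    return input_tokens + estimate_output_tokens(model)

print(estimate_total_tokens("gpt-4o-mini", [{"role": "user", "content": "hi"}]))
```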
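How the headers are fetched from a test request isn't shown here; this sketch only demonstrates the parsing step, with a hypothetical `parse_rate_limits` helper:

```python
def parse_rate_limits(headers: dict[str, str]) -> tuple[int, int]:
    # Hypothetical helper: read the provider's rate-limit headers and
    # fall back to 0 when a header is absent.
    rpm = int(headers.get("x-ratelimit-limit-requests", "0"))
    tpm = int(headers.get("x-ratelimit-limit-tokens", "0"))
    return rpm, tpm

rpm, tpm = parse_rate_limits(
    {"x-ratelimit-limit-requests": "10000",
     "x-ratelimit-limit-tokens": "2000000"}
)
print(f"rpm={rpm}, tpm={tpm}")
```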
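The actual implementation of `check_structured_output_support` isn't shown in this thread; one plausible sketch is to probe the model with a tiny Pydantic `response_model` through instructor's litellm integration and treat any failure as lack of support:

```python
import instructor
import litellm
from pydantic import BaseModel

class _Probe(BaseModel):
    answer: str

def check_structured_output_support(model: str) -> bool:
    # Wrap litellm's completion with instructor so response_model works.
    client = instructor.from_litellm(litellm.completion)
    try:
        result = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Reply with the word ok."}],
            response_model=_Probe,
        )
        return isinstance(result, _Probe)
    except Exception:
        # Models that cannot satisfy the response_model end up here.
        return False

print(check_structured_output_support("gpt-4o-mini"))
```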
Future Works:
Example

`curator-viewer`'s view: [screenshot]

Tested on the following models, all working for litellm + instructor structured output.

Note that the following models do not support structured output (i.e. `response_format` in `Prompter`); a usage sketch follows below.
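For context, a hedged sketch of what the new backend plus structured output might look like from the `Prompter` side, assuming the interface used in the example recipes (the exact signature may differ):

```python
from pydantic import BaseModel
from bespokelabs import curator

class Poem(BaseModel):
    title: str
    text: str

# Sketch only: backend="litellm" is the new parameter from this PR and
# routes requests through the LiteLLM+instructor processor instead of
# the default OpenAI one.
poet = curator.Prompter(
    model_name="claude-3-5-sonnet-20240620",
    prompt_func=lambda: "Write a short poem about the sea.",
    response_format=Poem,  # structured output via instructor
    backend="litellm",
)

poem = poet()  # returns a dataset whose columns come from Poem's fields
```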