split up batch llm calls into separate runs #5804
Conversation
Co-authored-by: Nuno Campos <nuno@boringbits.io>
langchain/chat_models/base.py (Outdated)

                if new_arg_supported
                else self._generate(m, stop=stop)
                for m in messages
            ]
        except (KeyboardInterrupt, Exception) as e:
            for run_manager in run_managers:
                run_manager.on_llm_error(e)
Same comment as in the JS version: we can do better error handling here and send the error only to the run that errored.
Added for chat models
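For reference, a minimal sketch of the per-run error handling suggested above, assuming a `run_managers` list with one callback manager per message list (hypothetical code, not necessarily what landed in the PR): each generation is wrapped in its own try/except, so only the run that actually failed receives `on_llm_error`.

```python
# Hypothetical sketch: report the error only to the run that errored,
# instead of broadcasting it to every run manager in the batch.
results = []
for i, m in enumerate(messages):
    try:
        results.append(
            self._generate(m, stop=stop, run_manager=run_managers[i])
            if new_arg_supported
            else self._generate(m, stop=stop)
        )
    except (KeyboardInterrupt, Exception) as e:
        # Only this message's run manager sees the failure; the other
        # runs keep their own success/error lifecycle.
        run_managers[i].on_llm_error(e)
        raise e
```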
691930f to 17f4006
Is there any explanation available for the goal of this PR? My app breaks when I update from 0.0.214 to 0.0.215, so I'm assuming this PR is the cause. I'm just starting to figure out why, but all I have to go on here is the title of the PR. UPDATE: the problem seems to be that llm_output is not set when I'm making a batch request.
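A hedged illustration of the symptom described above: once a batch call is split into separate runs, each per-message result can carry its own llm_output (possibly None), so code that previously read one combined llm_output may need to merge them. `merge_llm_outputs` below is a hypothetical helper, not a langchain API; it just sums token_usage counts the way chat-model outputs are typically combined.

```python
from typing import Optional

def merge_llm_outputs(llm_outputs: list[Optional[dict]]) -> dict:
    """Hypothetical helper: merge per-run llm_output dicts into one."""
    combined: dict = {"token_usage": {}}
    for output in llm_outputs:
        if not output:
            continue
        # Sum token counts (prompt_tokens, completion_tokens, total_tokens, ...).
        for key, value in output.get("token_usage", {}).items():
            combined["token_usage"][key] = combined["token_usage"].get(key, 0) + value
        # Keep other metadata (e.g. model_name) from the first run that has it.
        for key, value in output.items():
            if key != "token_usage":
                combined.setdefault(key, value)
    return combined
```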