
split up batch llm calls into separate runs #5804

Merged: 22 commits into master from ankush/batch-llm-fix on Jun 25, 2023

Conversation

@agola11 (Collaborator) commented Jun 7, 2023

Fixes # (issue)

Before submitting

Who can review?

Tag maintainers/contributors who might be interested:

Review thread on langchain/schema.py (outdated, resolved)
@agola11 marked this pull request as ready for review June 10, 2023 23:43

Review threads on langchain/llms/base.py (outdated, resolved)
    # Hunk under review; the truncated head and the order of the except
    # block are reconstructed from the surrounding lines.
    try:
        results = [
            self._generate(m, stop=stop, run_manager=run_managers[i])
            if new_arg_supported
            else self._generate(m, stop=stop)
            for i, m in enumerate(messages)
        ]
    except (KeyboardInterrupt, Exception) as e:
        for run_manager in run_managers:
            run_manager.on_llm_error(e)
        raise e
Collaborator:

Same comment as in JS: we can do better error handling and send the error only to the run that errored.

Contributor:

Added for chat models
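
The per-prompt error routing described above, which this reply says was added for chat models, looks roughly like the following (a sketch of the pattern using the names visible in the hunk above, not the exact merged diff):

    # Sketch: report a failure only to the run manager of the prompt that
    # raised, instead of broadcasting it to every run in the batch.
    results = []
    for m, run_manager in zip(messages, run_managers):
        try:
            result = (
                self._generate(m, stop=stop, run_manager=run_manager)
                if new_arg_supported
                else self._generate(m, stop=stop)
            )
        except (KeyboardInterrupt, Exception) as e:
            run_manager.on_llm_error(e)  # only the errored run is notified
            raise e
        results.append(result)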

@vowelparrot force-pushed the ankush/batch-llm-fix branch from 691930f to 17f4006 on June 24, 2023 23:05
@vowelparrot merged commit e1b801b into master Jun 25, 2023
@vowelparrot deleted the ankush/batch-llm-fix branch June 25, 2023 04:03
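
Since the PR description was left blank, the change implied by the title reads roughly as follows (a hedged sketch against langchain ~0.0.215; treat the exact signatures as approximate): a batched generate call now opens one callback run per prompt instead of a single run covering the whole batch.

    # Hedged sketch of the run-per-prompt split; not the exact merged code.
    # After this change, on_llm_start hands back one CallbackManagerForLLMRun
    # per prompt, so callbacks fire per prompt rather than once per batch.
    run_managers = callback_manager.on_llm_start(
        {"name": self.__class__.__name__},  # serialized LLM (approximate form)
        prompts,
    )
    for prompt, run_manager in zip(prompts, run_managers):
        ...  # each prompt's on_llm_end / on_llm_error fires on its own run

That per-prompt split is also a plausible source of the breakage reported below: each run now ends independently, so a handler can see an LLMResult whose llm_output was never populated.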
@nbrustein commented Jul 6, 2023

Is there any explanation available for the goal of this PR? My app breaks when I update from 0.0.214 to 0.0.215, so I'm assuming this PR is the cause.

I'm just getting into trying to figure out why, but all I have to go on here is the title of the PR.

UPDATE: the problem seems to be that llm_output is not set when I'm making a batch request to ChatOpenAI#agenerate, and that's causing an error in my custom on_llm_end handling
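
A defensive workaround on the callback side is to treat response.llm_output as optional (an illustrative sketch; the handler class and the token_usage key are examples, not code from this thread):

    from typing import Any, Optional

    from langchain.callbacks.base import BaseCallbackHandler
    from langchain.schema import LLMResult

    class TokenUsageHandler(BaseCallbackHandler):
        """Illustrative handler that tolerates a missing llm_output."""

        def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
            llm_output: Optional[dict] = response.llm_output
            if llm_output is None:
                # Batched generations after this PR can end runs whose
                # llm_output was never set, so guard before reading it.
                return
            print(llm_output.get("token_usage"))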
