
core: Fix AsyncCallbackManager to honor run_inline attribute and prevent context loss #26885

Merged · 14 commits · Oct 7, 2024

Conversation

parambharat (Contributor)

Description

This PR fixes the context-loss issue in `AsyncCallbackManager`, specifically in the `on_llm_start` and `on_chat_model_start` methods. It makes the manager properly honor the `run_inline` attribute of callback handlers, preventing race conditions and ordering issues.

Key changes:

  1. Separate handlers into inline and non-inline groups.
  2. Execute inline handlers sequentially for each prompt.
  3. Execute non-inline handlers concurrently across all prompts.
  4. Preserve context for stateful handlers.
  5. Maintain performance benefits for non-inline handlers.

These changes are implemented in AsyncCallbackManager rather than ahandle_event because the issue occurs at the prompt and message_list levels, not within individual events.
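The dispatch pattern described above can be sketched as follows. This is a minimal illustration, not the actual LangChain implementation: `Handler`, `dispatch`, and `fire` are hypothetical names, and the real manager dispatches callback events rather than appending to a log.

```python
import asyncio

class Handler:
    """Toy stand-in for a callback handler with a run_inline flag."""
    def __init__(self, name, run_inline):
        self.name = name
        self.run_inline = run_inline

async def dispatch(handlers, prompts, log):
    # 1. Separate handlers into inline and non-inline groups.
    inline = [h for h in handlers if h.run_inline]
    non_inline = [h for h in handlers if not h.run_inline]

    async def fire(handler, prompt):
        # Stand-in for invoking the handler's callback.
        log.append((handler.name, prompt))

    # 2. Execute inline handlers sequentially for each prompt,
    #    preserving ordering and context for stateful handlers.
    for prompt in prompts:
        for h in inline:
            await fire(h, prompt)

    # 3. Execute non-inline handlers concurrently across all prompts,
    #    keeping the performance benefit of parallel dispatch.
    await asyncio.gather(
        *(fire(h, p) for h in non_inline for p in prompts)
    )

log = []
handlers = [Handler("inline", True), Handler("bg", False)]
asyncio.run(dispatch(handlers, ["p1", "p2"], log))
```

The key design point is that only the inline group pays the cost of sequential awaits; the non-inline group still runs under a single `asyncio.gather`.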

Testing

Related Issues

Dependencies

No new dependencies are required.


@eyurtsev: This PR implements the discussed changes to respect run_inline in AsyncCallbackManager. Please review and advise on any needed changes.

Twitter handle: @parambharat

parambharat and others added 7 commits September 25, 2024 20:31
…hain-ai#23909)

- Add failing test to demonstrate issue fixed in PR langchain-ai#23909
- Show `AsyncCallbackManager` doesn't honor run_inline attribute
- Use `shared_stack` to capture non-deterministic execution order
- Illustrate why `contextvars` alone can't detect this issue
- Test expects to fail until fix for langchain-ai#23909 is implemented
- Separate inline and non-inline handlers in `on_llm_start` and `on_chat_model_start`
- Execute inline handlers sequentially
- Run non-inline handlers concurrently
- Maintain context integrity for stateful handlers
- Address issue langchain-ai#23909 and fix test in PR langchain-ai#26857
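The shared-stack testing idea mentioned in the commits can be sketched like this. The handler and helper below are hypothetical, for illustration only: a stateful inline handler pushes paired entries onto a shared stack, and interleaved pairs reveal that calls were run concurrently even though `run_inline` is set.

```python
import asyncio

class StackHandler:
    """Hypothetical inline handler that records start/end pairs on a shared stack."""
    run_inline = True

    def __init__(self, shared_stack):
        self.shared_stack = shared_stack

    async def on_llm_start(self, prompt):
        self.shared_stack.append(("start", prompt))
        await asyncio.sleep(0)  # yield to the event loop; concurrent calls interleave here
        self.shared_stack.append(("end", prompt))

def is_interleaved(stack):
    # If calls ran sequentially, each prompt's start/end pair is adjacent.
    for i in range(0, len(stack), 2):
        if stack[i][0] != "start" or stack[i][1] != stack[i + 1][1]:
            return True
    return False

async def main():
    # Concurrent dispatch: the buggy behavior for an inline handler.
    h1 = StackHandler([])
    await asyncio.gather(h1.on_llm_start("p1"), h1.on_llm_start("p2"))

    # Sequential dispatch: what run_inline=True should get.
    h2 = StackHandler([])
    for p in ("p1", "p2"):
        await h2.on_llm_start(p)
    return h1.shared_stack, h2.shared_stack

concurrent, sequential = asyncio.run(main())
```

This also illustrates why `contextvars` alone cannot detect the problem: each concurrent call still sees its own context intact, but the shared stack exposes the nondeterministic interleaving.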
vercel bot commented Sep 26, 2024

1 Skipped Deployment: langchain — Ignored (updated Oct 4, 2024 8:57pm UTC)
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. Ɑ: core Related to langchain-core 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Sep 26, 2024
@eyurtsev eyurtsev self-assigned this Sep 26, 2024
parambharat (Contributor, Author)

Hi @eyurtsev, just checking in. Any update on merging this?

eyurtsev (Collaborator) left a comment

@parambharat please verify that you're OK with the changes; if so, I'll merge on Monday.

@dosubot dosubot bot added the lgtm PR looks good. Use to confirm that a PR is ready for merging. label Oct 4, 2024
@eyurtsev eyurtsev merged commit 931ce8d into langchain-ai:master Oct 7, 2024
85 checks passed