feat(langchain): add support for streamed calls #10672

Open · sabrenner wants to merge 12 commits into main from sabrenner/langchain-stream

Conversation

@sabrenner (Contributor) commented Sep 16, 2024

What does this PR do?

Adds support for .(a)stream(...) calls on LangChain LCEL chains, chat models, and completion (LLM) models. It accomplishes this by:

  1. starting the span when the stream function is called
  2. appending each chunk to a list as the stream is consumed
  3. finishing the span once the stream is exhausted and concatenating the chunks into an output string

There is one caveat: if the last step in a chain's stream call is a JSONOutputParser, the stream has already been aggregated for us, and we use that result instead.
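To make the steps above concrete, here is a minimal sketch of the pattern (sync only, with the JSONOutputParser caveat folded in). It is illustrative rather than the PR's actual code: traced_stream, the span name, and the tag name are all assumptions.

    import json

    def traced_stream(tracer, stream_fn, *args, **kwargs):
        # (1) start the span as soon as the stream function is called
        span = tracer.trace("langchain.request")
        chunks = []

        def _gen():
            try:
                for chunk in stream_fn(*args, **kwargs):
                    chunks.append(chunk)  # (2) collect each chunk as it is consumed
                    yield chunk
            finally:
                # (3) once the stream is exhausted, tag the output and finish
                if chunks and isinstance(chunks[-1], (dict, list)):
                    # caveat: a trailing JSONOutputParser has already aggregated
                    # the stream, so the final chunk is the full result
                    output = json.dumps(chunks[-1])
                else:
                    output = "".join(str(c) for c in chunks)
                span.set_tag_str("langchain.response.content", output)
                span.finish()

        return _gen()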

A few additional notes:

  • The async versions, astream, aren't actually async functions; they just return async generators. This is reflected in the shared patching functions and test snapshots.
  • This whole process is accomplished through a shared_stream helper, which returns a compatible iterable for both the sync and async stream managers. It uses on_span_started and on_span_finished callbacks to coordinate which tags to add in the different cases (chain, chat, or llm); see the sketch after this list.
  • Since the stream methods do not invoke the underlying generate methods we trace, there's no easy path for code reuse. Instead, I tried to mimic the tags we add for the relevant chain.invoke, model.generate, and llm.invoke patched functions.
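A hypothetical sketch of that helper shape (the names mirror the PR's terminology, but the signature and body are assumptions). Because astream returns an async generator rather than being a coroutine itself, one plain function can hand back either flavor of generator, with the case-specific tagging delegated to the two hooks:

    def shared_stream(span, stream, on_span_started, on_span_finished, is_async=False):
        on_span_started(span)  # e.g. tag chain inputs, chat messages, or the llm prompt
        chunks = []

        def _finish():
            on_span_finished(span, chunks)  # case-specific output tags
            span.finish()

        if is_async:
            # a plain (non-async) function can still return an async
            # generator, mirroring how astream itself behaves
            async def _agen():
                try:
                    async for chunk in stream:
                        chunks.append(chunk)
                        yield chunk
                finally:
                    _finish()
            return _agen()

        def _gen():
            try:
                for chunk in stream:
                    chunks.append(chunk)
                    yield chunk
            finally:
                _finish()
        return _gen()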

Note: The version of vcrpy we pinned for reduced flakiness did not play well with streamed calls through the LangChain library. As such, I introduced a new fixture that returns a stub for the HTTP transport the OpenAI client uses; the client configured with that transport can then be used by the langchain_openai instance. This approach uses text files containing just the response data, which is why the "cassettes" added for these tests aren't in the usual YAML format.
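A sketch of what such a fixture could look like, assuming httpx's MockTransport and the OpenAI v1 client's support for a custom http_client; the fixture name and file handling here are illustrative:

    import httpx
    import pytest

    @pytest.fixture
    def stream_stub_client():
        def make_client(response_file):
            with open(response_file, "rb") as f:
                body = f.read()  # raw streamed-response bytes recorded from a real call

            def handler(request: httpx.Request) -> httpx.Response:
                # replay the recorded stream instead of hitting the network
                return httpx.Response(
                    200, content=body, headers={"content-type": "text/event-stream"}
                )

            return httpx.Client(transport=httpx.MockTransport(handler))

        return make_client

The returned httpx client can then be handed to the OpenAI client (openai.OpenAI(http_client=...)) or, depending on the langchain_openai version, passed to the model via its http_client parameter.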

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy

github-actions bot (Contributor) commented Sep 16, 2024

CODEOWNERS have been resolved as:

ddtrace/contrib/internal/langchain/utils.py                             @DataDog/ml-observability
releasenotes/notes/langchain-lcel-stream-calls-bff85c974a72cceb.yaml    @DataDog/apm-python
tests/contrib/langchain/cassettes/langchain_community/lcel_openai_chat_streamed_response.txt  @DataDog/ml-observability
tests/contrib/langchain/cassettes/langchain_community/lcel_openai_chat_streamed_response_json_output_parser.txt  @DataDog/ml-observability
tests/contrib/langchain/cassettes/langchain_community/lcel_openai_llm_streamed_response.txt  @DataDog/ml-observability
tests/snapshots/tests.contrib.langchain.test_langchain_community.test_streamed_chain.json  @DataDog/apm-python
tests/snapshots/tests.contrib.langchain.test_langchain_community.test_streamed_chat.json  @DataDog/apm-python
tests/snapshots/tests.contrib.langchain.test_langchain_community.test_streamed_json_output_parser.json  @DataDog/apm-python
tests/snapshots/tests.contrib.langchain.test_langchain_community.test_streamed_llm.json  @DataDog/apm-python
ddtrace/contrib/internal/langchain/patch.py                             @DataDog/ml-observability
tests/contrib/langchain/conftest.py                                     @DataDog/ml-observability
tests/contrib/langchain/test_langchain_community.py                     @DataDog/ml-observability

datadog-dd-trace-py-rkomorn bot commented Sep 16, 2024

Datadog Report

Branch report: sabrenner/langchain-stream
Commit report: aaddc40
Test service: dd-trace-py

✅ 0 Failed, 2812 Passed, 2729 Skipped, 34m 24.39s Total duration (59m 53.06s time saved)

pr-commenter bot commented Sep 16, 2024

Benchmarks

Benchmark execution time: 2024-09-16 19:16:23

Comparing candidate commit aaddc40 in PR branch sabrenner/langchain-stream with baseline commit 9550700 in branch main.

Found 0 performance improvements and 0 performance regressions! Performance is the same for 354 metrics, 46 unstable metrics.

    chat_messages = get_argument_value(args, kwargs, 0, "input")
    if not isinstance(chat_messages, list):
        chat_messages = [chat_messages]
    for message_idx, message in enumerate(chat_messages):
@sabrenner (Contributor, Author) commented Sep 16, 2024 on the excerpt above:

I added all of this logic because, unlike model.generate, which takes in just a list of BaseMessage types (i.e., HumanMessage, SystemMessage, etc.), model.stream can take in:

  1. a single string
  2. a single dict
  3. a list of strings
  4. a list of dicts
  5. a list of BaseMessage types
  6. a PromptValue type, which has a messages property of BaseMessage types

ref

Do we care about all of these different types? Or do we just want to listify and str each element (logic that would also carry over to LLMObs spans in a future PR)? That would make the code a lot simpler, but the view of each tag might not be as nice.
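For reference, a hypothetical version of that "listify and str" simplification (names are illustrative, not the PR's code):

    def listify_messages(inp):
        # a PromptValue exposes its underlying messages via a `messages` property
        messages = getattr(inp, "messages", None)
        if messages is not None:
            inp = messages
        if not isinstance(inp, list):
            inp = [inp]  # a single string, dict, or BaseMessage
        # stringify every element uniformly, regardless of its original type
        return [str(m) for m in inp]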

@sabrenner sabrenner marked this pull request as ready for review September 16, 2024 18:36
@sabrenner sabrenner requested review from a team as code owners September 16, 2024 18:36