StreamlitCallbackHandler #6315
Conversation
I like this on a high level! Some things that come to mind as I skim this:
- It's explicitly targeting agents (or rather the agent-executor state machine). That's great, I think, especially at first, but I wonder if we want to indicate this via the callback's name, since it doesn't support arbitrary "chain" or other workflows.
- I may be misreading, but it seems like it supports at most one thought at a time, right? I think that's fine to start; I just want to understand the limits a bit here.
Re: testing, an integration test would be great. If we could mock a couple of the imports, that would be even better, but I'm not sure we are quite that rigorous on the callback integrations yet.
cc @agola11
from enum import Enum
from typing import Any, NamedTuple

from streamlit.delta_generator import DeltaGenerator
I think we'll have to lazy import these
Thanks @vowelparrot! I have been working on this with Tim. On the high-level feedback:
Yep, we mentioned that in the docstring and would want to call it out in the docs & examples we share too.
I'm not sure I understand what you mean by this; could you share a little more detail or an example? The callback integration should support one thought or arbitrarily many thoughts (as seen in the example app). A run with one thought and no tool use would render fine, but would look a bit awkward. Related to the chain comment above, we're very open to ideas on how to support this. Thank you!
Thanks for the review, @vowelparrot! I'll take care of the nits today and get started on integration tests. I'm not able to add Streamlit as an optional dependency: Streamlit's supported Python version range excludes Python 3.9.7 (due to an issue we had with that release), and Poetry won't resolve Streamlit because of that 3.9.7 exclusion.
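To illustrate the resolution conflict being described (a hypothetical `pyproject.toml` fragment, not LangChain's actual file): Streamlit's own metadata declares `python = ">=3.8, !=3.9.7"`, so Poetry refuses to add it under a project range that still includes 3.9.7:

```toml
[tool.poetry.dependencies]
# Project range includes 3.9.7, but streamlit's metadata excludes it,
# so Poetry cannot prove the dependency is satisfiable for every allowed Python.
python = ">=3.8.1,<4.0"
streamlit = { version = ">=1.18", optional = true }
```

Narrowing the project's own `python` constraint to also exclude 3.9.7 is the usual workaround for this class of Poetry error.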
@vowelparrot (and cc @sfc-gh-jcarroll) - I'm removing the draft status from this PR, and would love another review if you have a chance. Some notes:
Thanks!
Will review in an hour!
@sfc-gh-jcarroll I think this looks good as is, and with the current name. In terms of other non-agent flows that are common - I'd say:
You know your users better than I do, so maybe the first two are less important to visualize in Streamlit. Perhaps verifying that this works reasonably well with custom agents that don't use the existing agent executor would be good, even if it doesn't impact this PR directly.
Awesome!! Anything else needed on our end before this can be merged and released? We will give those other workflows a try and see what viz makes sense with this callback handler in Streamlit (future work). Thank you!!
**Description:** Add a documentation page for the Streamlit Callback Handler integration (#6315)

Notes:
- Implemented as a markdown file instead of a notebook, since the example code runs in a Streamlit app (happy to discuss / consider alternatives now or later)
- Contains an embedded Streamlit app -> https://mrkl-minimal.streamlit.app/ Currently this app is hosted out of a Streamlit repo, but we're working to migrate the code to a LangChain-owned repo

![streamlit_docs](https://github.com/hwchase17/langchain/assets/116604821/0b7a6239-361f-470c-8539-f22c40098d1a)

cc @dev2049 @tconkling
Add our LangChain `StreamlitCallbackHandler` (also present in the [LangChain repo](langchain-ai/langchain#6315)), along with some Streamlit-specific tests. When used from LangChain, this callback handler is an "auto-updating API". That is, a LangChain user can do

```python
from langchain.callbacks.streamlit import StreamlitCallbackHandler

callback = StreamlitCallbackHandler(st.container())
```

and if they have a recent version of Streamlit installed in their environment, Streamlit's copy of the callback handler will be used instead of the LangChain-internal one. This allows us to update and improve `StreamlitCallbackHandler` independently of LangChain, and LangChain users of the callback will see those changes automatically. In other words, while `StreamlitCallbackHandler` is not part of the public Streamlit `st` API, it _is_ part of LangChain's public API, and we need to keep it stable. (This PR contains a few tests that assert its API stability.)
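The "auto-updating" dispatch described above can be sketched generically: probe for an external implementation at runtime and fall back to the bundled one. The module path, names, and helper below are hypothetical, not the actual LangChain code:

```python
import importlib


def resolve_handler(module_name: str, attr: str, fallback):
    """Return the external implementation if importable, else the bundled fallback."""
    try:
        mod = importlib.import_module(module_name)
        return getattr(mod, attr)
    except (ImportError, AttributeError):
        return fallback


class InternalStreamlitCallbackHandler:
    """Stand-in for the LangChain-internal implementation."""


# If the external module isn't available, the internal class is used.
Handler = resolve_handler(
    "some_external_module",  # hypothetical module path
    "StreamlitCallbackHandler",
    InternalStreamlitCallbackHandler,
)
```

Because resolution happens at call time rather than import time, a newer installed package wins automatically, which is exactly the forward-compatibility property the PR description relies on.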
A new implementation of `StreamlitCallbackHandler`. It formats Agent thoughts into Streamlit expanders. You can see the handler in action here: https://langchain-mrkl.streamlit.app/

Per a discussion with Harrison, we'll be adding a `StreamlitCallbackHandler` implementation to an upcoming Streamlit release as well, and will be updating it as we add new LLM- and LangChain-specific features to Streamlit.

The idea with this PR is that the LangChain `StreamlitCallbackHandler` will "auto-update" in a way that keeps it forward- (and backward-) compatible with Streamlit. If the user has an older Streamlit version installed, the LangChain `StreamlitCallbackHandler` will be used; if they have a newer Streamlit version that has an updated `StreamlitCallbackHandler`, that implementation will be used instead.

(I'm opening this as a draft to get the conversation going and make sure we're on the same page. We're really excited to land this into LangChain!)
Who can review?
@agola11, @hwchase17