Added basic integration with LangGraph #501
base: main
Conversation
Super cool, thanks for raising @SoumilRathi! Will take a look.
@SoumilRathi thanks for the PR! I see that it's working, but I noticed two LLMEvents for the same conversation, as shown in the ChatViewer:

[screenshot: first LLMEvent]

[screenshot: second LLMEvent]

This is the code I used to test (adapted from Langfuse):

```python
from typing import Annotated
import os

import agentops
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from typing_extensions import TypedDict
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages

from agentops.partners.langgraph_callback_handler import LanggraphCallbackHandler

load_dotenv()
agentops.init(api_key=os.getenv("AGENTOPS_API_KEY"), default_tags=["langgraph-test-v1"])


class State(TypedDict):
    # Messages have the type "list". The `add_messages` function in the annotation
    # defines how this state key should be updated (in this case, it appends
    # messages to the list, rather than overwriting them).
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

llm = ChatOpenAI(model="gpt-4o", temperature=0.2)


# The chatbot node function takes the current State as input and returns an updated
# messages list. This is the basic pattern for all LangGraph node functions.
def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


# Add a "chatbot" node. Nodes represent units of work. They are typically regular Python functions.
graph_builder.add_node("chatbot", chatbot)

# Add an entry point. This tells our graph where to start its work each time we run it.
graph_builder.set_entry_point("chatbot")

# Set a finish point. This instructs the graph: "any time this node is run, you can exit."
graph_builder.set_finish_point("chatbot")

# To run our graph, call "compile()" on the graph builder. This creates a
# "CompiledGraph" that we can invoke on our state.
graph = graph_builder.compile()

# Initialize the AgentOps callback handler for LangGraph (tracing)
handler = LanggraphCallbackHandler()

for s in graph.stream(
    {"messages": [HumanMessage(content="What is Langfuse?")]},
    config={"callbacks": [handler]},
):
    print(s)
```

Please resolve this issue, and also create an example notebook with a few different examples (say, the code above and a more complex agent architecture) to check that everything works.
This works, but it's better to make the following changes to ensure consistency.
```python
if self.ao_client.session_count == 0:
    self.ao_client.configure(
        **{k: v for k, v in client_params.items() if v is not None},
        instrument_llm_calls=False,
```
Add `default_tags` here.
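A minimal sketch of the suggested change, assuming `configure` accepts a `default_tags` keyword (as `agentops.init` does); the attribute name is hypothetical:

```python
if self.ao_client.session_count == 0:
    self.ao_client.configure(
        **{k: v for k, v in client_params.items() if v is not None},
        instrument_llm_calls=False,
        default_tags=self.default_tags,  # hypothetical: forward whatever tags the handler was constructed with
    )
```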
```python
if self.ao_client.session_count == 0:
    self.ao_client.configure(
        **{k: v for k, v in client_params.items() if v is not None},
        instrument_llm_calls=False,
```
Add `default_tags` here.
```python
if response.llm_output and "token_usage" in response.llm_output:
    event.prompt_tokens = response.llm_output["token_usage"].get("prompt_tokens")
    event.completion_tokens = response.llm_output["token_usage"].get("completion_tokens")
```
These should have default values for safety if the attributes are absent.
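For example, a sketch with explicit fallbacks (0 is an assumed default; `None` would also work):

```python
if response.llm_output and "token_usage" in response.llm_output:
    token_usage = response.llm_output["token_usage"]
    # Fall back to 0 so the event stays valid when the provider omits usage data
    event.prompt_tokens = token_usage.get("prompt_tokens", 0)
    event.completion_tokens = token_usage.get("completion_tokens", 0)
```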
```python
error_event = ErrorEvent(
    error_type="ChainError",
    exception=error,
    params=kwargs,
```
`logs` or `details`, not `params`.
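That is, roughly the following (assuming `ErrorEvent` exposes `details` and `logs` fields, as the comment implies):

```python
error_event = ErrorEvent(
    error_type="ChainError",
    exception=error,
    details=kwargs,  # record the callback kwargs under `details` (or `logs`), not `params`
)
```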
```python
error_event = ErrorEvent(
    error_type="ToolError",
    exception=error,
    params=kwargs,
```
`logs` or `details`, not `params`.
```python
error_event = ErrorEvent(
    error_type="RetrieverError",
    exception=error,
    params=kwargs,
```
`logs` or `details`, not `params`.
```python
return ErrorEvent(
    error_type=error_type,
    exception=error,
    params=kwargs
```
`logs` or `details`, not `params`.
```python
if run_id_str in event_dict:
    del event_dict[run_id_str]


def _get_event_or_create_error(
```
This function should be defined outside the class so it works with both `LanggraphCallbackHandler` and `AsyncLanggraphCallbackHandler`.
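A sketch of the suggested structure, with the helper hoisted to module level so both handler classes can call it (the signature is a guess based on the hunks above):

```python
def _get_event_or_create_error(event_dict, run_id, error_type, error, **kwargs):
    # Shared by LanggraphCallbackHandler and AsyncLanggraphCallbackHandler:
    # return the tracked event for this run, or an ErrorEvent if none was recorded.
    event = event_dict.get(str(run_id))
    if event is not None:
        return event
    return ErrorEvent(error_type=error_type, exception=error, details=kwargs)
```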
```python
def current_session_ids(self):
    return self.ao_client.current_session_ids


def _cleanup_event(self, event_dict: Dict[str, Any], run_id: UUID) -> None:
```
This function should be defined outside the class so it works with both `LanggraphCallbackHandler` and `AsyncLanggraphCallbackHandler`.
```python
if response.llm_output and "token_usage" in response.llm_output:
    event.prompt_tokens = response.llm_output["token_usage"].get("prompt_tokens")
    event.completion_tokens = response.llm_output["token_usage"].get("completion_tokens")
```
These should have default values for safety if the attributes are absent.
@SoumilRathi I have tested the […]

Please resolve the inconsistencies in the ChatML format for the […]
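For reference, the ChatML-style message shape that LLMEvents typically carry looks roughly like this (an illustration with made-up content, not code from this PR):

```python
event.prompt = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Langfuse?"},
]
event.completion = {"role": "assistant", "content": "Langfuse is an open-source LLM observability platform..."}
```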
🔍 Review Summary
Purpose:
Introduce a basic integration with LangGraph to enhance system capabilities for handling complex agent interactions and improve debugging and monitoring processes.
Key Changes:
Impact:
Enhances the system's ability to handle complex agent interactions and improves debugging and monitoring processes.
Original Description
No existing description found