
TypeError('Object of type CallbackManagerForToolRun is not JSON serializable') on Coder agent #24621

Closed
5 tasks done
arthur-lachini-advisia opened this issue Jul 24, 2024 · 20 comments · Fixed by #28824
Labels
Ɑ: agent Related to agents module 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: Runnables Related to Runnables

Comments

@arthur-lachini-advisia

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangGraph/LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangGraph/LangChain rather than my code.
  • I am sure this is better as an issue rather than a GitHub discussion, since this is a LangGraph bug and not a design question.

Example Code

for s in graph.stream(
    {
        "messages": [
            HumanMessage(content="Code hello world and print it to the terminal")
        ]
    }
):
    if "__end__" not in s:
        print(s)
        print("----")

Error Message and Stack Trace (if applicable)

TypeError('Object of type CallbackManagerForToolRun is not JSON serializable')

Traceback (most recent call last):


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\__init__.py", line 946, in stream
    _panic_or_proceed(done, inflight, loop.step)


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\__init__.py", line 1347, in _panic_or_proceed
    raise exc


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\executor.py", line 60, in done
    task.result()


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 401, in __get_result
    raise self._exception


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\pregel\retry.py", line 25, in run_with_retry
    task.proc.invoke(task.input, task.config)


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 2873, in invoke
    input = step.invoke(input, config, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langgraph\utils.py", line 102, in invoke
    ret = context.run(self.func, input, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


  File "C:\Users\arthur.lachini\AppData\Local\Temp\ipykernel_8788\519499601.py", line 3, in agent_node
    result = agent.invoke(state)
             ^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\chains\base.py", line 166, in invoke
    raise e


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\chains\base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 1612, in _call
    next_step_output = self._take_next_step(
                       ^^^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 1318, in _take_next_step
    [


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 1346, in _iter_next_step
    output = self.agent.plan(
             ^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain\agents\agent.py", line 580, in plan
    for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3253, in stream
    yield from self.transform(iter([input]), config, **kwargs)


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3240, in transform
    yield from self._transform_stream_with_config(


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 2053, in _transform_stream_with_config
    chunk: Output = context.run(next, iterator)  # type: ignore
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 3202, in _transform
    for output in final_pipeline:


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 1271, in transform
    for ichunk in input:


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 5264, in transform
    yield from self.bound.transform(


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\runnables\base.py", line 1289, in transform
    yield from self.stream(final, config, **kwargs)


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 365, in stream
    raise e


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_core\language_models\chat_models.py", line 345, in stream
    for chunk in self._stream(messages, stop=stop, **kwargs):


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 513, in _stream
    payload = self._get_request_payload(messages, stop=stop, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 604, in _get_request_payload
    "messages": [_convert_message_to_dict(m) for m in messages],
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 199, in _convert_message_to_dict
    _lc_tool_call_to_openai_tool_call(tc) for tc in message.tool_calls
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\site-packages\langchain_openai\chat_models\base.py", line 1777, in _lc_tool_call_to_openai_tool_call
    "arguments": json.dumps(tool_call["args"]),
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\encoder.py", line 200, in encode
    chunks = self.iterencode(o, _one_shot=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\encoder.py", line 258, in iterencode
    return _iterencode(o, 0)
           ^^^^^^^^^^^^^^^^^


  File "c:\Users\arthur.lachini\AppData\Local\Programs\Python\Python312\Lib\json\encoder.py", line 180, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '


TypeError: Object of type CallbackManagerForToolRun is not JSON serializable
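The error at the bottom of the stack is generic `json.dumps` behavior: any object that is not a JSON-native type (dict, list, str, int, float, bool, None) falls through to `json.JSONEncoder.default`, which raises exactly this `TypeError`. A minimal sketch with a stand-in class (not the real langchain-core class):

```python
import json

class CallbackManagerForToolRun:
    """Hypothetical stand-in: any plain Python object behaves the same way."""

# If a non-serializable object ends up inside a tool call's args dict,
# serializing that dict for the OpenAI request payload fails.
args = {"query": "print('hello world')", "run_manager": CallbackManagerForToolRun()}

try:
    json.dumps(args)
except TypeError as e:
    print(e)  # Object of type CallbackManagerForToolRun is not JSON serializable
```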

Description

I tried to replicate the tutorial on my local machine, but the coder function does not work as it is supposed to. The researcher function works just fine and can do multiple consecutive researches, but as soon as the coder agent is called, it breaks the function. I've attached screenshots of the LangSmith dashboard to provide further insight into the error.

[Three screenshots of the LangSmith dashboard attached]

System Info

Windows 10
Python 3.12.4

aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
asttokens==2.4.1
attrs==23.2.0
certifi==2024.7.4
charset-normalizer==3.3.2
colorama==0.4.6
comm==0.2.2
contourpy==1.2.1
cycler==0.12.1
dataclasses-json==0.6.7
debugpy==1.8.2
decorator==5.1.1
distro==1.9.0
executing==2.0.1
fonttools==4.53.1
frozenlist==1.4.1
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
idna==3.7
ipykernel==6.29.5
ipython==8.26.0
jedi==0.19.1
jsonpatch==1.33
jsonpointer==3.0.0
jupyter_client==8.6.2
jupyter_core==5.7.2
kiwisolver==1.4.5
langchain==0.2.11
langchain-community==0.2.10
langchain-core==0.2.23
langchain-experimental==0.0.63
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
langchainhub==0.1.20
langgraph==0.1.10
langsmith==0.1.93
marshmallow==3.21.3
matplotlib==3.9.1
matplotlib-inline==0.1.7
multidict==6.0.5
mypy-extensions==1.0.0
nest-asyncio==1.6.0
numpy==1.26.4
openai==1.37.0
orjson==3.10.6
packaging==24.1
parso==0.8.4
pillow==10.4.0
platformdirs==4.2.2
prompt_toolkit==3.0.47
psutil==6.0.0
pure_eval==0.2.3
pydantic==2.8.2
pydantic_core==2.20.1
Pygments==2.18.0
pyparsing==3.1.2
python-dateutil==2.9.0.post0
pywin32==306
PyYAML==6.0.1
pyzmq==26.0.3
regex==2024.5.15
requests==2.32.3
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.31
stack-data==0.6.3
tenacity==8.5.0
tiktoken==0.7.0
tornado==6.4.1
tqdm==4.66.4
traitlets==5.14.3
types-requests==2.32.0.20240712
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.2
wcwidth==0.2.13
yarl==1.9.4

@hwchase17
Contributor

sorry when you say "coder agent" and "tutorial" - you are referring to this one right? https://langchain-ai.github.io/langgraph/tutorials/multi_agent/agent_supervisor/

@arthur-lachini-advisia
Author

Yes, sorry for the lack of information on that part

@vbarda
Contributor

vbarda commented Jul 24, 2024

This is an issue in langchain, not in langgraph, so going to transfer. Here is a minimal reproducible example:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langchain_experimental.tools import PythonREPLTool
from langchain.agents import AgentExecutor, create_openai_tools_agent

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are a helpful assistant",
        ),
        MessagesPlaceholder(variable_name="messages"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

@tool
def get_weather(location: str):
    """Get weather for location"""
    return "Sunny and 75 degrees"


llm = ChatOpenAI(model="gpt-4o-mini")
# this works
tools = [get_weather]
agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
executor.invoke({"messages": [("human", "what's the weather in sf")]})

# this doesn't
tools = [PythonREPLTool()]
agent = create_openai_tools_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools)
executor.invoke({"messages": [("human", "Code hello world and print it to the terminal")]})

that being said, it works w/ langgraph's create_react_agent, so will just update the notebook

@vbarda vbarda transferred this issue from langchain-ai/langgraph Jul 24, 2024
@dosubot dosubot bot added Ɑ: Runnables Related to Runnables Ɑ: agent Related to agents module 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Jul 24, 2024
@ShubhamMaddhashiya-bidgely

I'm also facing the same issue.

@geod-dev

that being said, it works w/ langgraph's create_react_agent, so will just update the notebook.

I'm facing the same issue and use the create_react_agent function.

@lucifer2288

facing the same issue

@wulifu2hao
Contributor

Seems to be related to PR #24038, which adds run_manager to the tool args (https://github.com/langchain-ai/langchain/blob/7dd6b32991e81582cb30588b84871af04ecdc76c/libs/core/langchain_core/tools.py#L603) and makes them fail to serialize.

going back to pip install langchain-core==0.2.12 seems to fix it for me

cc @baskaryan
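A simplified sketch of the mechanism described above, with stand-in names rather than the actual langchain-core source: when a tool's `_run` signature declares a `run_manager` parameter, the manager object is inserted into the kwargs dict. Because that dict aliases the original tool input, the tool call's args become unserializable.

```python
import json
from inspect import signature

class RunManager:
    """Stand-in for CallbackManagerForToolRun."""

def _run(query: str, run_manager=None):  # a tool's _run that accepts run_manager
    return query

# Simplified version of the injection logic in langchain-core's tools.py:
tool_input = {"query": "print('hello world')"}
tool_kwargs = tool_input  # same object, not a copy
if signature(_run).parameters.get("run_manager"):
    tool_kwargs["run_manager"] = RunManager()

# tool_input was mutated through the alias, so serializing the original
# tool call arguments now fails:
try:
    json.dumps(tool_input)
except TypeError as e:
    print(e)  # Object of type RunManager is not JSON serializable
```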

@zero-github

zero-github commented Jul 25, 2024

@wulifu2hao

site-packages\langchain_core\tools.py
tool_kwargs and tool_input are the same object, so tool_input is also mutated after line 603.

Code Snippet, line 601:

tool_args, tool_kwargs = self._to_args_and_kwargs(tool_input)
if signature(self._run).parameters.get("run_manager"):
    tool_kwargs["run_manager"] = run_manager

workaround solution, line 530:

def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
    tool_input = self._parse_input(tool_input)
    # For backwards compatibility, if run_input is a string,
    # pass as a positional argument.
    if isinstance(tool_input, str):
        return (tool_input,), {}
    else:
        return (), dict(tool_input)
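The effect of this workaround can be shown in isolation: returning `dict(tool_input)` hands the caller a fresh shallow copy, so later mutation of tool_kwargs no longer leaks back into tool_input (an illustrative sketch, not the actual library code):

```python
from typing import Dict, Tuple, Union

def to_args_and_kwargs(tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
    # For backwards compatibility, a string input is passed positionally.
    if isinstance(tool_input, str):
        return (tool_input,), {}
    # dict(tool_input) makes a shallow copy instead of aliasing the input.
    return (), dict(tool_input)

tool_input = {"query": "print('hello world')"}
_, tool_kwargs = to_args_and_kwargs(tool_input)
tool_kwargs["run_manager"] = object()  # injected only into the copy

print("run_manager" in tool_input)  # False: the original stays JSON-serializable
```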

@monuminu

monuminu commented Aug 4, 2024

facing same issue

@ShohamD1121

ShohamD1121 commented Aug 6, 2024

going back to pip install langchain-core==0.2.12 seems to fix

Thanks for saving my day, my guy!

@MrZoidberg

Will it be fixed?

@rhlarora84

sorry when you say "coder agent" and "tutorial" - you are referring to this one right? https://langchain-ai.github.io/langgraph/tutorials/multi_agent/agent_supervisor/

Just a basic AgentExecutor with a BaseTool without args_schema should be enough to reproduce this. While new features like strict mode are being rolled out, basic regressions like this will prevent upgrades. Will this be fixed soon?

@jekriske-lilly

Related issue: #24614
Example tutorial: https://python.langchain.com/v0.2/docs/integrations/tools/bing_search/

I'd rather not downgrade, but I'm having trouble moving any further while this is still an issue. Does anyone have a workaround besides downgrading or a time frame to a fix?

@marcdown

marcdown commented Aug 13, 2024

Using the Python REPL example from the multi-agent collaboration notebook resolved the issue for me:

# from langchain_experimental.tools import PythonREPLTool
from typing import Annotated

from langchain_core.tools import tool
from langchain_experimental.utilities import PythonREPL

# python_repl_tool = PythonREPLTool()
repl = PythonREPL()

@tool
def python_repl_tool(
    code: Annotated[str, "The python code to execute to generate your chart."],
):
    """Use this to execute python code. If you want to see the output of a value,
    you should print it out with `print(...)`. This is visible to the user."""
    try:
        result = repl.run(code)
    except BaseException as e:
        return f"Failed to execute. Error: {repr(e)}"
    result_str = f"Successfully executed:\n```python\n{code}\n```\nStdout: {result}"
    return (
        result_str + "\n\nIf you have completed all tasks, respond with FINAL ANSWER."
    )

@karthikcs

@wulifu2hao

site-packages\langchain_core\tools.py tool_kwargs and tool_input are same variable, tool_input will be changed after line 603.

Code Snippet, line 601:

tool_args, tool_kwargs = self._to_args_and_kwargs(tool_input)
if signature(self._run).parameters.get("run_manager"):
    tool_kwargs["run_manager"] = run_manager

workaround solution, line 530:

def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
    tool_input = self._parse_input(tool_input)
    # For backwards compatibility, if run_input is a string,
    # pass as a positional argument.
    if isinstance(tool_input, str):
        return (tool_input,), {}
    else:
        return (), dict(tool_input)

This has worked for me.. Thanks a lot

@jekriske-lilly

jekriske-lilly commented Aug 14, 2024

@karthikcs Nice, wrapping tool_input with dict() worked for me as well. The location has moved to
https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools/base.py#L477 (instead of line 530).

Edit: it's not the only place in tools.py with issues parsing tool_input; the _parse_input function has the same problem.

@karthikcs karthikcs mentioned this issue Aug 14, 2024
3 tasks
@tahsinalamin

Using the Python REPL example from the multi-agent collaboration notebook resolved the issue for me:

# from langchain_experimental.tools import PythonREPLTool
from langchain_core.tools import tool
from langchain_experimental.utilities import PythonREPL

# python_repl_tool = PythonREPLTool()
repl = PythonREPL()

@tool
def python_repl_tool(
    code: Annotated[str, "The python code to execute to generate your chart."],
):
    """Use this to execute python code. If you want to see the output of a value,
    you should print it out with `print(...)`. This is visible to the user."""
    try:
        result = repl.run(code)
    except BaseException as e:
        return f"Failed to execute. Error: {repr(e)}"
    result_str = f"Successfully executed:\n```python\n{code}\n```\nStdout: {result}"
    return (
        result_str + "\n\nIf you have completed all tasks, respond with FINAL ANSWER."
    )

This is the answer! Thanks. They have it correct in this notebook: https://github.com/langchain-ai/langgraph/blob/fc95028738232572c05827a074f8d0c606f5c0ca/examples/multi_agent/multi-agent-collaboration.ipynb

@jekriske-lilly

@tahsinalamin It's cool that that worked for you, but there is a much larger issue that has nothing to do with PythonREPL: how the library handles data types in langchain_core/tools/base.py.

Multiple tutorials are broken even with the latest release 0.2.33

@darknight2163

seems to be related to the PR #24038 which adds to run_manager to the tool arg https://github.com/langchain-ai/langchain/blob/7dd6b32991e81582cb30588b84871af04ecdc76c/libs/core/langchain_core/tools.py#L603 and make it fail to serialize

going back to pip install langchain-core==0.2.12 seems to fix it for me

cc @baskaryan

Thanks it worked for me !

@CrasCris

CrasCris commented Oct 2, 2024

Using the Python REPL example from the multi-agent collaboration notebook resolved the issue for me:

# from langchain_experimental.tools import PythonREPLTool
from langchain_core.tools import tool
from langchain_experimental.utilities import PythonREPL

# python_repl_tool = PythonREPLTool()
repl = PythonREPL()

@tool
def python_repl_tool(
    code: Annotated[str, "The python code to execute to generate your chart."],
):
    """Use this to execute python code. If you want to see the output of a value,
    you should print it out with `print(...)`. This is visible to the user."""
    try:
        result = repl.run(code)
    except BaseException as e:
        return f"Failed to execute. Error: {repr(e)}"
    result_str = f"Successfully executed:\n```python\n{code}\n```\nStdout: {result}"
    return (
        result_str + "\n\nIf you have completed all tasks, respond with FINAL ANSWER."
    )

Thanks for the answer

@efriis efriis closed this as completed in 6a37899 Dec 19, 2024