🐛 Bug Report: OpenAI Requests Not Traced When Sent from a LangGraph Node #2271
Comments
Thanks for reporting @jemo21k! You wrote that you're willing to submit a PR — wait, does that mean you have a fix for this, or should we look into it?
Hi, @nirga
Hey 👋 I also bumped into the same issue. From a quick look at the code, the LangChain instrumentation suppresses further instrumentations by setting a context key. That key is then checked by the other instrumentations to decide whether or not to instrument the call. See the code here. Was there any special reason for disabling downstream instrumentations? 🤔
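The mechanism described above can be sketched in plain Python, using a `contextvars.ContextVar` as a stand-in for the OpenTelemetry context key (the names `SUPPRESS_LLM_INSTRUMENTATION`, `traced_llm_call`, and `langchain_style_wrapper` are illustrative, not the actual identifiers in the instrumentation):

```python
import contextvars

# Stand-in for the context key the LangChain instrumentation sets.
SUPPRESS_LLM_INSTRUMENTATION = contextvars.ContextVar(
    "suppress_llm_instrumentation", default=False
)

spans = []  # stands in for the span exporter


def traced_llm_call(prompt: str) -> str:
    # A downstream instrumentation (e.g. the OpenAI one) checks the key
    # and skips span creation when it is set.
    if not SUPPRESS_LLM_INSTRUMENTATION.get():
        spans.append({"name": "openai.chat", "prompt": prompt})
    return f"response to {prompt!r}"


def langchain_style_wrapper(prompt: str) -> str:
    # The LangChain instrumentation sets the key before invoking the chain,
    # so any LLM call made inside it produces no separate span.
    token = SUPPRESS_LLM_INSTRUMENTATION.set(True)
    try:
        return traced_llm_call(prompt)
    finally:
        SUPPRESS_LLM_INSTRUMENTATION.reset(token)


traced_llm_call("direct")           # traced: one span is recorded
langchain_style_wrapper("wrapped")  # suppressed: no span is recorded
print(len(spans))
```

This reproduces the observed behavior: the direct call is traced, while the same call made under the wrapper is silently dropped.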
@thisthat the reason we've done that is that these spans are (supposed to be?) already collected by the LangChain callbacks. Before, you would have gotten duplicate OpenAI spans (which results in counting tokens twice, for example). We should figure out why the callbacks are not producing the needed spans in this case.
@nirga thanks for the answer. I believe the LangChain sensor should only provide visibility into the pipeline and not trace additional LLM calls. Otherwise, we would need to implement every LLM sensor twice: once for its individual calls and once for LangChain.
Which component is this bug for?
Langchain Instrumentation
📜 Description
In my Langchain-based application using LangGraph, I noticed that OpenAI requests made with OpenAI's own client from within a LangGraph node are not traced. Specifically, when I call the OpenAI GPT-4o model from within a LangGraph node, I do not see a span for the OpenAI call in my exported trace log, nor do I see any associated LLM call metrics.
👟 Reproduction steps
Here is an example demonstrating the issue:
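The original snippet did not survive this export; what follows is a minimal sketch of the setup being described, assuming the `langgraph`, `openai`, and `traceloop-sdk` packages from the version list below and a valid `OPENAI_API_KEY` (node and app names are illustrative):

```python
from openai import OpenAI
from langgraph.graph import StateGraph, START, END
from typing_extensions import TypedDict
from traceloop.sdk import Traceloop

Traceloop.init(app_name="repro")
client = OpenAI()


class State(TypedDict):
    answer: str


def ask_gpt4o(state: State) -> State:
    # Direct OpenAI client call inside a LangGraph node — this is the
    # call that produces no span in the exported trace.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
    )
    return {"answer": resp.choices[0].message.content}


graph = StateGraph(State)
graph.add_node("ask", ask_gpt4o)
graph.add_edge(START, "ask")
graph.add_edge("ask", END)
graph.compile().invoke({"answer": ""})
```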
In the trace logs, there are no gen_ai attributes or metrics for the OpenAI call. However, if I replace the OpenAI client with Langchain’s own OpenAI client, a span with LLM metrics is generated as expected.
👍 Expected behavior
When an OpenAI call is made within a LangGraph node, I expect to see tracing data that includes gen_ai attributes and other metrics associated with the LLM call, such as:
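The attribute list itself did not survive this export. Spans emitted by the OpenAI instrumentation typically carry attributes along these lines; exact names vary by SDK and semantic-convention version, so treat this list as illustrative:

```
gen_ai.system
gen_ai.request.model
gen_ai.response.model
gen_ai.usage.prompt_tokens / gen_ai.usage.completion_tokens
```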
👎 Actual Behavior with Screenshots
When running the code from within a LangGraph node, no tracing data is recorded for the OpenAI calls. Below is the actual log output captured during the execution:
As shown, the logs lack the expected gen_ai and llm attributes or metrics related to the OpenAI call, which would normally be included when using Langchain's OpenAI client directly.
🤖 Python Version
Python 3.12.4
📃 Provide any additional context for the Bug.
langchain==0.2.16
langchain-cohere==0.1.9
langchain-community==0.2.17
langchain-core==0.2.41
langchain-experimental==0.0.65
langchain-openai==0.1.25
langchain-text-splitters==0.2.4
langgraph==0.2.23
langgraph-checkpoint==1.0.10
openai==1.47.0
traceloop-sdk==0.33.3
👀 Have you spent some time to check if this bug has been raised before?
Are you willing to submit a PR?
Yes I am willing to submit a PR!