Is your feature request related to a problem? Please describe.
I want to implement a mechanism for sending feedback from the front-end of a chat bot to a Python backend.
For LangChain, it is possible to add a link in the chain that fetches the current span_id with get_current_span from openinference.instrumentation.langchain, but this only returns the span_id of that link.
It would be much more natural to get the root span_id, which is the correct one to annotate with the user-sent feedback, but this is currently not easy to accomplish programmatically. See the discussion here: Arize-ai/phoenix#4800
Describe the solution you'd like
An easy-to-use mechanism for getting the root span_id of the currently running trace, ideally one that is agnostic to the instrumented LLM framework; framework-specific mechanisms would also be welcome.
Describe alternatives you've considered
I've considered creating the root span manually and then invoking the chain inside it with the automatically instrumented LangChain, but that is quite inelegant.