feat(llmobs): llmobs-specific context manager #10767
Open · Yun-Kim wants to merge 3 commits into main from yunkim/llmobs-context
+390 −277
Conversation
Summary
Public-facing changes:
- Added `LLMObs.current_span()` to the API; returns the current active LLMObs-generated (integration or SDK) span.
- Added `LLMObs.current_trace_context()` to the API; returns the current LLMObs context (which can represent either a span or a distributed span).

Private changes:
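To illustrate the shape of the two new public methods, here is a minimal stand-in sketch. This is not the real ddtrace implementation — the `Span`/`TraceContext` classes and the `contextvars` storage are hypothetical; only the method names `current_span()` and `current_trace_context()` come from this PR.

```python
# Hypothetical stand-in for the new public API surface. In a distributed
# case the active item is a TraceContext (no local span), so current_span()
# returns None while current_trace_context() still returns the parent context.
import contextvars
from dataclasses import dataclass
from typing import Optional, Union


@dataclass
class Span:
    span_id: int
    name: str


@dataclass
class TraceContext:
    """Represents a distributed parent extracted from request headers."""
    trace_id: int
    span_id: int


_active: contextvars.ContextVar = contextvars.ContextVar("llmobs_active", default=None)


class LLMObs:
    @classmethod
    def current_span(cls) -> Optional[Span]:
        """Return the current active LLMObs span, or None if the active
        item is a distributed TraceContext rather than a local span."""
        item = _active.get()
        return item if isinstance(item, Span) else None

    @classmethod
    def current_trace_context(cls) -> Optional[Union[Span, TraceContext]]:
        """Return the current LLMObs context: a local span or a
        distributed trace context."""
        return _active.get()


# Local case: the active item is a span, so both accessors return it.
_active.set(Span(span_id=1, name="workflow"))
assert LLMObs.current_span().name == "workflow"

# Distributed case: only the trace context is available.
_active.set(TraceContext(trace_id=7, span_id=2))
assert LLMObs.current_span() is None
assert LLMObs.current_trace_context().span_id == 2
```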
Previous behavior
LLMObs spans are based on APM spans, except that LLMObs span parenting involves only other LLMObs spans. Consider a trace containing a mixture of APM-specific and LLMObs spans (for example, an LLMObs root span with an APM-only child span B, which in turn has an LLMObs child span C): LLMObs only cares about the LLMObs spans, so span C's parent is the root span, even though in APM it would be span B. Combined with distributed tracing and multithreading, this makes it difficult to determine the "correct" (i.e. LLMObs) parenting tree for traces submitted to LLM Observability.
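The parenting rule above can be sketched as a walk up the APM parent chain that skips non-LLMObs spans. The `Span` class and `llmobs_parent` helper here are illustrative stand-ins, not ddtrace internals.

```python
# Sketch: an LLMObs span's parent is the nearest LLMObs-type ancestor in
# the APM span tree, skipping APM-only spans along the way.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Span:
    name: str
    is_llmobs: bool
    parent: Optional["Span"] = None


def llmobs_parent(span: Span) -> Optional[Span]:
    """Walk up the local parent chain until an LLMObs-type span is found."""
    node = span.parent
    while node is not None and not node.is_llmobs:
        node = node.parent
    return node


# Trace: root (LLMObs) -> B (APM-only) -> C (LLMObs)
root = Span("root", is_llmobs=True)
b = Span("B", is_llmobs=False, parent=root)
c = Span("C", is_llmobs=True, parent=b)

assert llmobs_parent(c) is root  # C's LLMObs parent is the root, not B
assert llmobs_parent(root) is None
```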
Problems with previous approach
Previously we worked around this by traversing the span's local parent tree to find the next LLM-type span on both span start and finish for non-distributed cases; for distributed cases, we attached the parent ID to the span context's meta field so it would be propagated in distributed request headers. However, attaching things to the span context meta was not suitable long-term, for a couple of reasons:
Example ugly workaround
Any meta field set on the context object gets propagated as span tags on all subsequent spans in the trace at span start time, except for spans in the first service of a trace, which get propagated at span finish time. Fixing this required overriding these span tags on span start and adding more checks on span finish.

Current approach
Instead of depending on a Context object that doesn't quite fit our use case and trying to make it fit, we simply keep track of our own active LLMObs span/context:
- `LLMObsContextProvider` keeps track of the current active LLMObs span via `active()` and `activate()`.
- `LLMObsContextProvider._activate_llmobs_span()` activates the LLMObs span and sets the LLMObs parent ID as a tag at span start time (called by `LLMObs._start_span()`, `BaseLLMIntegration.trace(submit_to_llmobs=True)`, and the bedrock integration).
- `LLMObs.inject_distributed_headers()` now uses the LLMObsContextProvider to inject the active LLMObs span's ID into request headers.
- `LLMObs.activate_distributed_headers()` now uses the LLMObsContextProvider to activate the extracted LLMObs context, continuing the trace in the distributed case.
- `trace_utils.activate_distributed_headers()` now includes automatic LLMObs context activation if LLMObs is enabled. I've config-gated this so that LLMObs is only imported for LLMObs users (same for `HTTPPropagator.inject()`).

By keeping track of our own active LLMObs spans, spans submitted to LLM Observability have an independent set of span and parent IDs, even if the span and trace IDs are shared with APM spans for now. This is the first step to decoupling from tracer internals.
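The whole flow — a dedicated context provider plus header inject/extract — can be sketched with `contextvars`, which is a natural fit for per-task active-span tracking. Everything below is illustrative: the class and function names echo the PR description, but the header key, the use of bare span IDs, and the implementation details are assumptions, not ddtrace's actual code.

```python
# Minimal sketch of keeping our own active LLMObs span/context instead of
# piggybacking on the tracer's Context meta. The header name is hypothetical.
import contextvars
from typing import Optional

PARENT_ID_HEADER = "x-llmobs-parent-id"  # assumed header key, for illustration


class LLMObsContextProvider:
    """Tracks the current active LLMObs span (here, just a span ID)."""

    def __init__(self) -> None:
        self._active: contextvars.ContextVar = contextvars.ContextVar(
            "llmobs_active_span", default=None
        )

    def activate(self, item) -> None:
        self._active.set(item)

    def active(self) -> Optional[int]:
        return self._active.get()


provider = LLMObsContextProvider()


def inject_distributed_headers(headers: dict) -> dict:
    """Attach the active LLMObs span's ID to outgoing request headers."""
    span_id = provider.active()
    if span_id is not None:
        headers[PARENT_ID_HEADER] = str(span_id)
    return headers


def activate_distributed_headers(headers: dict) -> None:
    """Continue an LLMObs trace from incoming request headers."""
    parent_id = headers.get(PARENT_ID_HEADER)
    if parent_id is not None:
        provider.activate(int(parent_id))


# Service A: an LLMObs span is active, so its ID is injected on the way out.
provider.activate(1234)
headers = inject_distributed_headers({})

# Service B: the extracted parent is activated so child spans parent correctly.
activate_distributed_headers(headers)
assert provider.active() == 1234
```

Because the provider owns its own `ContextVar`, the active LLMObs item is isolated per thread/task and never leaks into APM span tags, which is the core problem with the old meta-field approach.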
TODO: add span/trace ID field for APM correlation
Next steps
We can go further by generating LLMObs-specific span/trace IDs which are separate from APM. This will solve some edge cases with traces involving mixed APM/LLMObs spans.
Checklist
Reviewer Checklist