Log record mutations are visible in next registered processors #4067
Conversation
@open-telemetry/go-approvers @open-telemetry/cpp-approvers @open-telemetry/rust-approvers PTAL
We can leave the definition of shared logRecord in the doc, so future viewers don't need to check the PR to know the definition.
Co-authored-by: Sam Xie <sam@samxie.me>
Given all the discussions and options here, I think this is the right path forward with what we have today.
This PR was marked stale due to lack of activity. It will be closed in 7 days.
@pellared Can you resolve the conflict? This looks like it has enough approvals, so it should be good to merge.
Looks like he just showed up to fix the conflict :) @jmacd @arminru Tagging to request review, as this PR seems to be assigned to you both!
@open-telemetry/technical-committee Could you suggest next steps for moving this forward? There are approvals already, but no further movement, and the bot will mark this stale again soon.
Let's mention this in our Spec call today, and otherwise we should merge this in the next 1-2 days, as we have left it open for a little while and no concerns have been raised.
Thanks!
Just to be super clear: what's the final opinion from the @open-telemetry/cpp-maintainers on this topic?
@open-telemetry/technical-committee, can you please check whether this PR is good to be merged given #4067 (comment)? |
Current description
Closes #4065
After a few SIG meetings and many discussions, the Go SIG decided to follow the intention of the specification:
I propose clarifying the specification to avoid confusing future readers.
Previous description
Per #4065 (comment)
Allow chaining processors while keeping log record modifications local to each processor.
This is how the C++ (stable) and Go (beta) Logs SDKs are currently designed.
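For illustration only, such a design could look like the following sketch (the names below are hypothetical and are not the actual C++ or Go SDK API):

```go
package sketch

import "context"

// Record is a stand-in for an SDK log record (hypothetical, illustrative only).
type Record struct {
	Body string
	// ... timestamp, severity, attributes, etc.
}

// Processor receives the record by value, i.e. a shallow copy.
// Any mutation a processor makes to r is local to that processor and is
// not visible to the processors registered after it.
type Processor interface {
	OnEmit(ctx context.Context, r Record) error
}
```

Chaining is still possible by wrapping: a decorating processor can mutate its own copy and pass that copy to the processor it wraps, while independently registered processors never see each other's changes.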
For reference, the following SDKs implement shared mutations for log record processors: Java, .NET, Python, JavaScript, PHP.
For Go, it allows high performance by minimizing the number of heap allocations while keeping the design clean and idiomatic (similar to structured logging in the Go standard library).
open-telemetry/opentelemetry-go#5470 is a PoC of making log record mutations visible in the next registered processors, but it suffers from a very awkward design (and it is still not compliant with the way the specification is currently written).
open-telemetry/opentelemetry-go#5478 is another PoC of making log record mutations visible in the next registered processors, but it incurs more heap allocations and adds synchronization (in my opinion, an unacceptable performance overhead).
EDIT:
A shared log record means that consecutive processors receive the same instance of the log record.
Not shared means that each processor receives a copy of the log record.
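To make the distinction concrete, here is a minimal, hypothetical Go sketch (the `Record` type and the pipeline functions are illustrative stand-ins, not the SDK API):

```go
package main

import "fmt"

// Record is a hypothetical stand-in for an SDK log record.
type Record struct{ Body string }

// sharedPipeline passes the same instance to every processor,
// so a mutation made by one processor is visible to the next.
func sharedPipeline(r *Record, processors []func(*Record)) {
	for _, p := range processors {
		p(r)
	}
}

// copyingPipeline gives each processor its own shallow copy,
// so mutations stay local to the processor that made them.
func copyingPipeline(r Record, processors []func(*Record)) {
	for _, p := range processors {
		c := r // shallow copy
		p(&c)
	}
}

func main() {
	redact := func(r *Record) { r.Body = "[redacted]" }
	emit := func(r *Record) { fmt.Println(r.Body) }

	sharedPipeline(&Record{Body: "secret"}, []func(*Record){redact, emit})  // prints "[redacted]"
	copyingPipeline(Record{Body: "secret"}, []func(*Record){redact, emit}) // prints "secret"
}
```

With the copying pipeline, the `redact` mutation never reaches `emit`, which is exactly the "not shared" behavior described above.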
Some implementations already use a shared record (Java, .NET, Python, JavaScript, Rust). However, in Go (and probably C++ as well) we prefer to use shallow copies to reduce the number of heap allocations, which makes zero-allocation log processing possible. This can drastically improve performance: for example, trace and debug records (when the info log level is set) would not cause any heap allocation in about 95% of cases, reducing memory pressure and making garbage collection much more efficient.
Even if the other implementations would benefit from using copies as well, they may not want to change, as it would be a breaking change.
In short, stack-allocated log records can be more performant. Sharing log records causes Go's escape analysis to allocate on the heap more often.
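As a rough illustration of that claim (hypothetical types, not the SDK's actual code or benchmarks), the difference can be observed with a micro-benchmark; the exact results depend on the compiler version and can be inspected with `go build -gcflags=-m`:

```go
// alloc_test.go: hypothetical micro-benchmark, not the SDK's code.
package sketch

import "testing"

type record struct{ body string }

// Processors are registered behind an interface, so when a pointer is
// passed the compiler cannot see the callee and must assume it escapes.
type valueProcessor interface{ OnEmit(r record) }
type pointerProcessor interface{ OnEmit(r *record) }

type nopValue struct{}

func (nopValue) OnEmit(record) {}

type nopPointer struct{}

func (nopPointer) OnEmit(*record) {}

var (
	vp valueProcessor   = nopValue{}
	pp pointerProcessor = nopPointer{}
)

func BenchmarkRecordByValue(b *testing.B) {
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		r := record{body: "msg"}
		vp.OnEmit(r) // record can stay on the stack: 0 allocs/op expected
	}
}

func BenchmarkRecordByPointer(b *testing.B) {
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		r := &record{body: "msg"}
		pp.OnEmit(r) // pointer handed to an unknown callee: the record escapes to the heap
	}
}
```

On current compilers the by-value benchmark is expected to report 0 allocs/op, while the pointer passed through the interface escapes and costs one heap allocation per record.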