Extract Sent Cache to an interface for future expansion #561
Merged
Conversation
kentquirk added the type: enhancement (New feature or request) and version: bump patch (A PR with release-worthy changes and is backwards-compatible) labels on Nov 13, 2022
MikeGoldsmith approved these changes on Nov 14, 2022
Looks great @kentquirk 👍🏻
I've left a couple of suggestions for updating the new interface, but none are essential since it's only used internally.
kentquirk added a commit that referenced this pull request on Nov 22, 2022
## Which problem is this PR solving?

Before this change, Refinery used a circular LRU cache to retain a record for every trace; this cache is hardcoded to 5x the configured cache size, and does not change when the cache is resized. This is a relatively small number, and for long-lived traces, it might mean that late spans look like new traces, and therefore might get a different sampling decision -- which would result in missing spans in Honeycomb. #561 abstracted the SampleCache interface to prepare for other implementations. This PR uses it to provide a new cache design.

## Short description of the changes

This design implements a "cuckoo filter" cache for dropped traces, which can store the dropped-trace information much more efficiently (about 4 bytes per trace, compared to about 200 bytes for kept traces).

- Adds a CuckooTraceChecker type that implements a 2-stage cuckoo filter for tracking recently-used trace IDs over time (see the sketch below).
- Implements the SampleCache interface with a CuckooSentCache, which uses the existing LRU for kept traces and a CuckooTraceChecker for dropped traces.
- Implements a new configuration block for caches that lets users opt into the cuckoo cache and tune it for their needs, while remaining backwards compatible.
- Adds documentation to the config_complete file.
- Adds additional metrics for tracking the cuckoo cache size.
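For illustration, the 2-stage rotation described above might look roughly like the following sketch. This is not Refinery's actual implementation: everything beyond the CuckooTraceChecker name is hypothetical, and github.com/seiflotfy/cuckoofilter is used here only as a stand-in cuckoo filter package.

```go
package cache

import (
	"sync"

	cuckoo "github.com/seiflotfy/cuckoofilter"
)

// CuckooTraceChecker sketches a 2-stage cuckoo filter for dropped trace IDs:
// lookups consult both stages, inserts go to the current stage, and when the
// current stage fills up it becomes the previous stage and a fresh filter
// takes its place. This keeps recent drop decisions queryable while bounding
// memory to roughly two filters' worth of fingerprints.
type CuckooTraceChecker struct {
	mut      sync.RWMutex
	capacity uint
	current  *cuckoo.Filter
	previous *cuckoo.Filter
}

func NewCuckooTraceChecker(capacity uint) *CuckooTraceChecker {
	return &CuckooTraceChecker{
		capacity: capacity,
		current:  cuckoo.NewFilter(capacity),
		previous: cuckoo.NewFilter(capacity),
	}
}

// Add records a dropped trace ID, rotating the stages if the current filter
// refuses the insert (i.e., it is effectively full).
func (c *CuckooTraceChecker) Add(traceID string) {
	c.mut.Lock()
	defer c.mut.Unlock()
	if !c.current.Insert([]byte(traceID)) {
		c.previous = c.current
		c.current = cuckoo.NewFilter(c.capacity)
		c.current.Insert([]byte(traceID))
	}
}

// Check reports whether a trace ID was recently recorded as dropped. Like any
// approximate filter, it can return rare false positives, but it never returns
// false negatives for IDs still held in either stage.
func (c *CuckooTraceChecker) Check(traceID string) bool {
	c.mut.RLock()
	defer c.mut.RUnlock()
	b := []byte(traceID)
	return c.current.Lookup(b) || c.previous.Lookup(b)
}
```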
ghost pushed a commit to opsramp/tracing-proxy that referenced this pull request on Jul 5, 2024
## Which problem is this PR solving?

This is a prep PR for further work on the sent trace cache. For improved scalability, we want to be able to store trace decision records for a longer time. The best way to implement this in a backwards-compatible way is to pull the mechanisms for managing decision records into an interface, and then implement the interface with the legacy logic. That's what this PR does. There are no expected changes in behavior, and all tests still pass.

## Short description of the changes

- Define interfaces for a TraceSentCache and a TraceSentRecord (a possible shape is sketched below).
- Implement code for those interfaces that duplicates the existing legacy logic.
- Refactor the places the code is used to use the new interfaces.
- Tweak the span count data type so that it's no longer an int64.
- Rename SpanCount to DescendantCount because, now that we have links and events, they're not all spans anymore, and future PRs will add additional counting functions. I haven't renamed the corresponding Get and Add functions because that would be messy.
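A plausible shape for those interfaces is sketched below. Apart from DescendantCount, which the changelist names explicitly, the method sets and signatures here are assumptions for illustration, not Refinery's actual definitions.

```go
package cache

// TraceSentRecord captures the decision made for a single trace, so that
// late-arriving spans can be handled consistently with the original decision.
// (The method set is illustrative; only DescendantCount is named in the PR text.)
type TraceSentRecord interface {
	// Kept reports whether the trace was sampled (kept) or dropped.
	Kept() bool
	// Rate returns the sample rate that was applied to the trace.
	Rate() uint
	// DescendantCount returns how many spans, span links, and span events
	// have been seen for the trace (formerly SpanCount).
	DescendantCount() uint32
}

// TraceSentCache stores sent-trace records and answers whether a decision
// already exists for an incoming span's trace ID.
type TraceSentCache interface {
	// Record stores the decision for a trace after it has been sent or dropped.
	Record(traceID string, keep bool, rate uint)
	// Check looks up the decision for a trace ID; the second return value
	// reports whether a record was found.
	Check(traceID string) (TraceSentRecord, bool)
}
```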
ghost pushed a commit to opsramp/tracing-proxy that referenced this pull request on Jul 5, 2024
## Which problem is this PR solving?

Before this change, Refinery used a circular LRU cache to retain a record for every trace; this cache is hardcoded to 5x the configured cache size, and does not change when the cache is resized. This is a relatively small number, and for long-lived traces, it might mean that late spans look like new traces, and therefore might get a different sampling decision -- which would result in missing spans in Honeycomb. honeycombio#561 abstracted the SampleCache interface to prepare for other implementations. This PR uses it to provide a new cache design.

## Short description of the changes

This design implements a "cuckoo filter" cache for dropped traces, which can store the dropped-trace information much more efficiently (about 4 bytes per trace, compared to about 200 bytes for kept traces).

- Adds a CuckooTraceChecker type that implements a 2-stage cuckoo filter for tracking recently-used trace IDs over time.
- Implements the SampleCache interface with a CuckooSentCache, which uses the existing LRU for kept traces and a CuckooTraceChecker for dropped traces.
- Implements a new configuration block for caches that lets users opt into the cuckoo cache and tune it for their needs, while remaining backwards compatible (an illustrative sketch of such a block follows).
- Adds documentation to the config_complete file.
- Adds additional metrics for tracking the cuckoo cache size.
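The opt-in configuration block mentioned above could be modeled along these lines. All field names here are illustrative assumptions; config_complete in the repository is the authoritative reference for the real keys and defaults.

```go
package config

// SampleCacheConfig sketches the kind of settings the new cache block could
// expose. Field names are illustrative only, not the actual configuration keys.
type SampleCacheConfig struct {
	// Type selects the cache implementation; "legacy" preserves the old
	// behavior, while "cuckoo" opts into the cuckoo-filter-based cache.
	Type string `toml:"Type"`
	// KeptSize bounds the number of kept-trace records (LRU, ~200 bytes each).
	KeptSize uint `toml:"KeptSize"`
	// DroppedSize bounds the number of dropped-trace fingerprints
	// (cuckoo filter, ~4 bytes each).
	DroppedSize uint `toml:"DroppedSize"`
}
```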
Labels
type: enhancement (New feature or request)
version: bump patch (A PR with release-worthy changes and is backwards-compatible)
Which problem is this PR solving?
This is a prep PR for further work on the sent trace cache. For improved scalability, we want to be able to store trace decision records for a longer time. The best way to implement this in a backwards-compatible way is to pull the mechanisms for managing decision records into an interface, and then implement the interface with the legacy logic. That's what this does. There are no expected changes in behavior, and all tests still pass.
Short description of the changes