feat(experiment): make drop decision queue size configurable #1472
base: main
Conversation
force-pushed from f08fdf0 to 7bd123b
@@ -50,13 +50,16 @@ var cuckooTraceCheckerMetrics = []metrics.Metadata{
 	{Name: CurrentLoadFactor, Type: metrics.Gauge, Unit: metrics.Percent, Description: "the fraction of slots occupied in the current cuckoo filter"},
 }

-func NewCuckooTraceChecker(capacity uint, m metrics.Metrics) *CuckooTraceChecker {
+func NewCuckooTraceChecker(capacity uint, addQueueDepth uint, m metrics.Metrics) *CuckooTraceChecker {
The `addQueueDepth` param isn't used; should it be used on line 62?
Good catch 😮💨
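
For reference, a minimal sketch of what the reviewer is pointing at: the new addQueueDepth parameter would be expected to size the internal add channel instead of a hard-coded buffer. The placeholder Metrics type and struct fields below are assumptions for illustration, not the actual Refinery source.

```go
package cache

// Placeholder for Refinery's metrics.Metrics interface; illustrative only.
type Metrics interface{}

// Simplified stand-in for the real CuckooTraceChecker struct.
type CuckooTraceChecker struct {
	met   Metrics
	addch chan string
}

// NewCuckooTraceChecker sketches the reviewer's point: the buffered add
// channel (created around line 62 in the real file) should use the new
// addQueueDepth parameter rather than a fixed constant.
func NewCuckooTraceChecker(capacity uint, addQueueDepth uint, m Metrics) *CuckooTraceChecker {
	_ = capacity // the real constructor also sizes the cuckoo filter with this
	return &CuckooTraceChecker{
		met:   m,
		addch: make(chan string, addQueueDepth),
	}
}
```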
route/route.go (Outdated)
Are these changes related to the drop queue size, or to fixing that bug where we don't increment the right metrics for OTLP ingest during stress relief?
Sorry, this is my experimental PR. I'm not planning to merge it; I shouldn't have marked it for review.
Which problem is this PR solving?
The `cuckoo_addqueue_full` metric shows the queue is full when experimenting with HPA. I would like to increase the queue size, since each Refinery now holds all trace decisions in the cluster instead of just 1/Nth of them.

Benchmark with the queue size increased from 1000 to 10000.
Short description of the changes
- `DroppedQueueSize` config option (see the sketch below)
- `incoming/peer_router_otlp` metric to track OTLP traffic
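
A rough sketch, under assumed names, of how a DroppedQueueSize option could be carried in the config and defaulted to the previous hard-coded depth of 1000 when unset; the SampleCacheConfig shape and method name are assumptions, not the actual Refinery config code.

```go
package config

// SampleCacheConfig is a simplified stand-in; DroppedQueueSize is the
// option this experimental PR adds.
type SampleCacheConfig struct {
	DroppedSize      uint `yaml:"DroppedSize"`
	DroppedQueueSize uint `yaml:"DroppedQueueSize"`
}

// GetDroppedQueueSize falls back to the previous fixed depth (1000) when
// the option is absent from the config file.
func (c SampleCacheConfig) GetDroppedQueueSize() uint {
	if c.DroppedQueueSize == 0 {
		return 1000
	}
	return c.DroppedQueueSize
}
```

The returned value would then be passed as the addQueueDepth argument of NewCuckooTraceChecker shown in the diff above.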