Add filter field to prometheus encoder #147
Conversation
}

type PromMetricsItems []PromMetricsItem

type PromMetricsFilter struct {
Do we need the Prom prefix? Consider: s/PromMetricsFilter/MetricsFilter/
Maybe we do need it to make the API easier ... this is just a suggestion.
I added the Prom prefix to follow the convention in the file.
pkg/pipeline/encode/encode_prom.go
Outdated
 	input      string
 	labelNames []string
+	filterKey  string
consider putting key and value in sub-struct (I just think that will happen if we decide to change the parameters of the filter)
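One way that suggestion could look (a minimal sketch; the struct and field names below are illustrative, not the PR's actual code):

```go
package main

import "fmt"

// filter groups the key/value pair in a sub-struct, so that future
// filter parameters (e.g. a match mode) only touch this one type.
type filter struct {
	key   string
	value string
}

// metricInfo sketches the encoder's per-metric state with the filter
// fields folded into the sub-struct instead of flat filterKey/filterValue.
type metricInfo struct {
	input      string
	labelNames []string
	filter     filter
}

func main() {
	m := metricInfo{
		input:      "recent_op_value",
		labelNames: []string{"by", "aggregate"},
		filter:     filter{key: "name", value: "bandwidth_network_service"},
	}
	fmt.Println(m.filter.key, m.filter.value)
}
```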
pkg/pipeline/encode/encode_prom.go
Outdated
@@ -102,9 +104,15 @@ func (e *encodeProm) EncodeMetric(metric config.GenericMap) []config.GenericMap
 	// TODO: We may need different handling for histograms
 	out := make([]config.GenericMap, 0)
 	for metricName, mInfo := range e.metrics {
+		val, keyFound := metric[mInfo.filterKey]
+		shouldKeepRecord := keyFound && val == mInfo.filterValue
A very quick improvement you can make here: if the key exists and the configured value equals some N/A sentinel, filter only on the existence of the key and ignore the actual value. A trivial change that adds a lot of flexibility.
I see what you are suggesting and I agree it would add flexibility, but I think we should put more thought into it.
I don't like magic values that might collide with the user's values.
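The suggestion above could be sketched like this (the sentinel name and helper are hypothetical, and the thread's caveat stands: a sentinel can collide with real user values):

```go
package main

import "fmt"

// filterMatchAny is a hypothetical sentinel meaning "keep the record if
// the filter key exists, regardless of its value".
const filterMatchAny = "__any__"

// shouldKeepRecord layers the reviewer's presence-only mode on top of
// the PR's equality check.
func shouldKeepRecord(record map[string]interface{}, key, want string) bool {
	val, found := record[key]
	if !found {
		return false
	}
	if want == filterMatchAny {
		return true // key exists; ignore its value
	}
	return val == want
}

func main() {
	rec := map[string]interface{}{"name": "bandwidth_network_service"}
	fmt.Println(shouldKeepRecord(rec, "name", "bandwidth_network_service"))
	fmt.Println(shouldKeepRecord(rec, "name", filterMatchAny))
	fmt.Println(shouldKeepRecord(rec, "service", filterMatchAny))
}
```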
 		metricValue, ok := metric[mInfo.input]
 		if !ok {
-			log.Debugf("field %v is missing", metricName)
+			log.Errorf("field %v is missing", mInfo.input)
@KalmanMeth can you advise why we had originally Debug here?
I believe the reason for using Debug originally is that before the dedicated filter field was introduced, filtering was done based on the input field. So it wasn't an error if the expected input field didn't exist; it probably just indicated that the record didn't belong to the metric in context.
The input field basically had two roles:
- hold the value we want to measure
- filter records, i.e. associate a record with a metric
Because of this, prefixes were added to the input key.
This PR allows removing these prefixes and removes the second role of the input field.
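The role split described above can be sketched as follows (all names are illustrative, not the actual encoder code); note how a missing key is expected in the old scheme but a real error in the new one, which is why the log level changed from Debug to Error:

```go
package main

import "fmt"

// Before the PR (schematic): the input key doubled as the filter, so a
// record "matched" a metric only if it contained the prefixed key.
// A miss was routine, hence the original Debugf.
func matchByPrefixedInput(record map[string]interface{}, prefixedInput string) (interface{}, bool) {
	v, ok := record[prefixedInput]
	return v, ok
}

// After the PR (schematic): filtering and value lookup are separate.
// If the record matched the filter but the input key is absent, that is
// a genuine error, hence Errorf.
func matchByFilter(record map[string]interface{}, filterKey, filterValue, input string) (interface{}, bool, error) {
	if v, ok := record[filterKey]; !ok || v != filterValue {
		return nil, false, nil // record belongs to another metric; not an error
	}
	v, ok := record[input]
	if !ok {
		return nil, true, fmt.Errorf("field %v is missing", input)
	}
	return v, true, nil
}

func main() {
	rec := map[string]interface{}{"name": "svc_a", "recent_op_value": 7}
	v, matched, err := matchByFilter(rec, "name", "svc_a", "recent_op_value")
	fmt.Println(v, matched, err)
}
```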
pkg/confgen/encode.go
Outdated
 	}
 }
Any other cross-phase checks that you think we can and should add here?
Consider the following configuration of a prom metric:

- name: bandwidth_per_network_service
  type: counter
  filter:
    key: name
    value: bandwidth_network_service
  valuekey: recent_op_value
  labels:
    - by
    - aggregate
  buckets: []
If prom_encode follows extract_aggregate, I can think of the following validations:

1. valuekey is one of the keys of the following GenericMap (flowlogs-pipeline/pkg/pipeline/extract/aggregate/aggregate.go, lines 189 to 199 in c0f17db):

   "name":            aggregate.Definition.Name,
   "operation":       aggregate.Definition.Operation,
   "record_key":      aggregate.Definition.RecordKey,
   "by":              strings.Join(aggregate.Definition.By, ","),
   "aggregate":       string(group.normalizedValues),
   "total_value":     fmt.Sprintf("%f", group.value),
   "recentRawValues": group.RecentRawValues,
   "total_count":     fmt.Sprintf("%d", group.count),
   "recent_op_value": group.recentOpValue,
   "recent_count":    group.recentCount,
   strings.Join(aggregate.Definition.By, "_"): string(group.normalizedValues),

2. The same is true about labels and filter.key.
3. If filter.key == name, then we can validate that filter.value is defined.
4. Regarding type:
   4.1. if type == gauge, then valuekey is in [total_value, total_count]
   4.2. if type == counter, then valuekey is in [recent_op_value, recent_count]
   4.3. if type == histogram, then valuekey is recentRawValues
5. Histograms have at least one bucket.
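Checks 1, 4, and 5 from the list above could be sketched like this (illustrative only, not the confgen implementation; the struct and key set are taken from the snippet and config above):

```go
package main

import "fmt"

// aggregateKeys are the keys an extract_aggregate stage emits,
// per the GenericMap quoted from aggregate.go above.
var aggregateKeys = map[string]bool{
	"name": true, "operation": true, "record_key": true, "by": true,
	"aggregate": true, "total_value": true, "recentRawValues": true,
	"total_count": true, "recent_op_value": true, "recent_count": true,
}

// promMetric is a hypothetical view of one metric's config.
type promMetric struct {
	Name     string
	Type     string
	ValueKey string
	Buckets  []float64
}

// validate applies checks 1, 4, and 5: valuekey must be an aggregate
// output key, must fit the metric type, and histograms need buckets.
func validate(m promMetric) error {
	if !aggregateKeys[m.ValueKey] {
		return fmt.Errorf("%s: valuekey %q is not produced by extract_aggregate", m.Name, m.ValueKey)
	}
	allowed := map[string][]string{
		"gauge":     {"total_value", "total_count"},
		"counter":   {"recent_op_value", "recent_count"},
		"histogram": {"recentRawValues"},
	}
	ok := false
	for _, k := range allowed[m.Type] {
		if k == m.ValueKey {
			ok = true
		}
	}
	if !ok {
		return fmt.Errorf("%s: valuekey %q is not valid for type %s", m.Name, m.ValueKey, m.Type)
	}
	if m.Type == "histogram" && len(m.Buckets) == 0 {
		return fmt.Errorf("%s: histogram needs at least one bucket", m.Name)
	}
	return nil
}

func main() {
	m := promMetric{Name: "bandwidth_per_network_service", Type: "counter", ValueKey: "recent_op_value"}
	fmt.Println(validate(m))
}
```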
I find it confusing to have an argument variable named "metric" and a data member named "metrics"
df04ef1 to 9ad430c
Codecov Report
@@ Coverage Diff @@
## main #147 +/- ##
==========================================
+ Coverage 58.44% 58.46% +0.01%
==========================================
Files 51 51
Lines 2950 2949 -1
==========================================
Hits 1724 1724
Misses 1113 1113
+ Partials 113 112 -1
Follow-up for #146. Please review that one before this.