
Add filter field to prometheus encoder #147

Merged · 13 commits · Mar 22, 2022

Conversation

@ronensc (Collaborator) commented Mar 15, 2022

Follow-up for #146. Please review that one before this.

@ronensc ronensc self-assigned this Mar 15, 2022
}

type PromMetricsItems []PromMetricsItem

type PromMetricsFilter struct {
@eranra (Collaborator) commented Mar 16, 2022

Do we need the Prom prefix? Consider: s/PromMetricsFilter/MetricsFilter/
Maybe we need it to make the API easier ... this is just a suggestion.

@ronensc (Collaborator, Author)

I added the Prom prefix to follow the convention in the file.

input string
labelNames []string
input string
filterKey string
@eranra (Collaborator)

Consider putting key and value in a sub-struct (I just think that will help if we decide to change the parameters of the filter).

@@ -102,9 +104,15 @@ func (e *encodeProm) EncodeMetric(metric config.GenericMap) []config.GenericMap
// TODO: We may need different handling for histograms
out := make([]config.GenericMap, 0)
for metricName, mInfo := range e.metrics {
val, keyFound := metric[mInfo.filterKey]
shouldKeepRecord := keyFound && val == mInfo.filterValue
@eranra (Collaborator)

A very quick improvement you could make here: if the key exists and the configured value equals some N/A sentinel, filter only on the existence of the key and ignore its value. A trivial change that adds a lot of flexibility.

@ronensc (Collaborator, Author)

I see what you are suggesting and I agree that it would add flexibility. But I think we should put more thought into it: I don't like magic values that might collide with the user's values.

metricValue, ok := metric[mInfo.input]
if !ok {
log.Debugf("field %v is missing", metricName)
log.Errorf("field %v is missing", mInfo.input)
@eranra (Collaborator)

@KalmanMeth can you advise why we had originally Debug here?

@ronensc (Collaborator, Author)

I believe the reason for using Debug originally is that, before the dedicated filter field was introduced, filtering was done based on the input field. So it wasn't an error if the expected input field didn't exist; it probably just meant the record didn't belong to the metric in context.
The input field basically had two roles:

  1. hold the value we want to measure
  2. filter records, i.e. associate a record with a metric

Because of the second role, prefixes were added to the input key.
This PR allows removing those prefixes, eliminating the second role of the input field.
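For illustration, the separation of the two roles can be seen in a config fragment like the one discussed later in this thread (field names follow that example; the comments are mine):

```yaml
# With the new filter field, record selection (role 2) is explicit,
# so valuekey (role 1) no longer needs a metric-name prefix.
- name: bandwidth_per_network_service
  type: counter
  filter:                     # role 2: associate records with this metric
    key: name
    value: bandwidth_network_service
  valuekey: recent_op_value   # role 1: the value to measure
```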

}
}


@eranra (Collaborator)

Are there any other cross-phase checks that you think we can and should add here?

@ronensc (Collaborator, Author)

Consider the following configuration of a prom metric:

      - name: bandwidth_per_network_service
        type: counter
        filter:
          key: name
          value: bandwidth_network_service
        valuekey: recent_op_value
        labels:
        - by
        - aggregate
        buckets: []

If prom_encode follows extract_aggregate, I can think of the following validations:

  1. valuekey is one of the keys of the following GenericMap
    "name": aggregate.Definition.Name,
    "operation": aggregate.Definition.Operation,
    "record_key": aggregate.Definition.RecordKey,
    "by": strings.Join(aggregate.Definition.By, ","),
    "aggregate": string(group.normalizedValues),
    "total_value": fmt.Sprintf("%f", group.value),
    "recentRawValues": group.RecentRawValues,
    "total_count": fmt.Sprintf("%d", group.count),
    "recent_op_value": group.recentOpValue,
    "recent_count": group.recentCount,
    strings.Join(aggregate.Definition.By, "_"): string(group.normalizedValues),
  2. The same is true for labels and filter.key.
  3. If filter.key == name, we can validate that filter.value is defined.
  4. regarding type
    4.1. if type == gauge then valuekey is in [total_value, total_count]
    4.2. if type == counter then valuekey is in [recent_op_value, recent_count]
    4.3. if type == histogram then valuekey is recentRawValues
  5. histograms have at least one bucket

@codecov-commenter commented Mar 20, 2022

Codecov Report

Merging #147 (c97cb0c) into main (beb3c57) will increase coverage by 0.01%.
The diff coverage is 93.33%.

@@            Coverage Diff             @@
##             main     #147      +/-   ##
==========================================
+ Coverage   58.44%   58.46%   +0.01%     
==========================================
  Files          51       51              
  Lines        2950     2949       -1     
==========================================
  Hits         1724     1724              
  Misses       1113     1113              
+ Partials      113      112       -1     
Flag Coverage Δ
unittests 58.46% <93.33%> (+0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
pkg/api/encode_prom.go 100.00% <ø> (ø)
pkg/confgen/encode.go 50.00% <ø> (+13.63%) ⬆️
pkg/pipeline/encode/encode_prom.go 81.08% <87.50%> (-0.70%) ⬇️
pkg/pipeline/extract/aggregate/aggregate.go 94.87% <100.00%> (-0.09%) ⬇️
pkg/test/utils.go 82.41% <100.00%> (-0.20%) ⬇️
pkg/pipeline/ingest/ingest_collector.go 62.50% <0.00%> (-1.67%) ⬇️

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@ronensc ronensc merged commit 6668d30 into netobserv:main Mar 22, 2022