Change the processing type from int to float in kube_horizontalpodautoscaler_spec_target_metric #1685
Conversation
Welcome @whitebear009!

/assign @fpetkovski
```diff
 			}
 		default:
 			// Skip unsupported metric type
 			continue
 		}

-		for i := range ok {
+		for i := range v {
```
I assume modifying the slice does not change the behavior, because the lengths of `v` and `ok` are the same. Is that correct?
Yes. I removed `ok` during the coding process and forgot to change it back. I think it's better to use `ok` here, so it has been changed back now.
```diff
@@ -80,7 +80,7 @@ func TestHPAStore(t *testing.T) {
 				},
 				Target: autoscaling.MetricTarget{
 					Value:        resourcePtr(resource.MustParse("10")),
-					AverageValue: resourcePtr(resource.MustParse("12")),
+					AverageValue: resourcePtr(resource.MustParse("0.5")),
```
Could we also add a test case where `Value` is a float?
Test cases have been updated. A new commit has been submitted. Please check again, thanks. @fpetkovski
```diff
 			if m.Object.Target.AverageValue != nil {
-				v[average], ok[average] = m.Object.Target.AverageValue.AsInt64()
+				v[average], ok[average] = float64(m.Object.Target.AverageValue.MilliValue())/1000, true
```
Since we are now always storing `true` in `ok`, do we still need the `ok` variable? Can we try removing it to see if the tests break?
Good idea! The `ok` variable does seem useless. But removing it directly would break the unit test: since `v` has a fixed length, the entries that were never set would show up as extra zeros (the default value).
I could check whether `v[i]` is zero (an HPA target metric should be required to be greater than 0) to decide whether to display the corresponding metric, but I don't think that approach is very good.
Could I instead change `v` to a map, and only display the metrics that exist in the map?
I have pushed a new commit. What do you think?
I think a map could work, but it might also create a flaky test. Iterating over a map does not guarantee that the keys come back in the same order every time. Could this lead to metrics being serialized in a different order each time a test is run? Could you run the tests with `-count=100`?
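For context on the flakiness concern, a minimal sketch showing that Go deliberately randomizes map iteration order, so output produced by ranging over a map can differ from run to run (the keys and values are illustrative):

```go
package main

import "fmt"

func main() {
	targets := map[string]float64{"value": 10, "average": 0.5, "utilization": 75}
	// Go randomizes map iteration order, so these lines can print
	// in a different order on every execution of the program.
	for name, v := range targets {
		fmt.Println(name, v)
	}
}
```

Running the affected tests repeatedly, for example with `go test -count=100`, is one way to surface that kind of ordering flakiness.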
I tried running it with `-count=100` and the tests passed. Because the `compareOutput` function used in the unit test calls `sortByLine`, the metric lines are sorted before comparison, so the output order is deterministic:
```go
func compareOutput(expected, actual string) error {
	entities := []string{expected, actual}

	// Align wanted and actual
	for i := 0; i < len(entities); i++ {
		for _, f := range []func(string) string{removeUnusedWhitespace, sortLabels, sortByLine} {
			entities[i] = f(entities[i])
		}
	}

	if diff := cmp.Diff(entities[0], entities[1]); diff != "" {
		return errors.Errorf("(-want, +got):\n%s", diff)
	}

	return nil
}

func sortByLine(s string) string {
	split := strings.Split(s, "\n")
	sort.Strings(split)
	return strings.Join(split, "\n")
}
```
Perfect!
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fpetkovski, whitebear009. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
What this PR does / why we need it:
The kube_horizontalpodautoscaler_spec_target_metric metric currently uses the AsInt64() function when processing target values, so a float value between 0 and 1 cannot be converted, and the metric is filtered out and never displayed.
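A minimal standalone sketch of the `resource.Quantity` behavior behind this change (not part of the PR's own code): `AsInt64()` cannot represent a fractional quantity such as `0.5`, whereas `MilliValue()` always succeeds and can be divided by 1000 to recover the float value, which is the conversion the PR switches to.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	q := resource.MustParse("0.5") // e.g. an HPA target of 500m

	// AsInt64 cannot represent a fractional value, so ok is false and
	// the metric used to be skipped entirely.
	if _, ok := q.AsInt64(); !ok {
		fmt.Println("AsInt64 failed; the metric would previously have been dropped")
	}

	// MilliValue always succeeds; dividing by 1000 preserves the fraction.
	fmt.Println("as float:", float64(q.MilliValue())/1000) // 0.5
}
```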
How does this change affect the cardinality of KSM: (increases, decreases or does not change cardinality)
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged): Fixes #1594