
Change the processing type from int to float in kube_horizontalpodautoscaler_spec_target_metric #1685

Merged
merged 4 commits into from
Feb 18, 2022

Conversation

whitebear009
Contributor

What this PR does / why we need it:
The kube_horizontalpodautoscaler_spec_target_metric metric currently uses the AsInt64() function when processing values, so any float-typed metric value between 0 and 1 is filtered out and never displayed.
How does this change affect the cardinality of KSM: (increases, decreases or does not change cardinality)

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #1594

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Feb 16, 2022
@k8s-ci-robot
Contributor

Welcome @whitebear009!

It looks like this is your first PR to kubernetes/kube-state-metrics 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kube-state-metrics has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Feb 16, 2022
@whitebear009
Contributor Author

/assign @fpetkovski
Please review. Thanks.

 	}
 	default:
 		// Skip unsupported metric type
 		continue
 	}

-	for i := range ok {
+	for i := range v {
Contributor

I assume modifying the slice does not change the behavior because the length of both v and ok is the same. Is that correct?

Contributor Author

Yes. I removed ok while coding and forgot to change it back.

I agree it is better to use ok here; it has now been restored.

@@ -80,7 +80,7 @@ func TestHPAStore(t *testing.T) {
},
Target: autoscaling.MetricTarget{
Value: resourcePtr(resource.MustParse("10")),
-					AverageValue: resourcePtr(resource.MustParse("12")),
+					AverageValue: resourcePtr(resource.MustParse("0.5")),
Contributor

Could we also add a test case when Value is a float?

Contributor Author

Test cases have been updated.

@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Feb 16, 2022
@whitebear009
Contributor Author

A new commit has been submitted. Please check again, thanks. @fpetkovski

if m.Object.Target.AverageValue != nil {
-			v[average], ok[average] = m.Object.Target.AverageValue.AsInt64()
+			v[average], ok[average] = float64(m.Object.Target.AverageValue.MilliValue())/1000, true
Contributor

Since we are now always storing true in ok, do we still need the ok variable? Can we try removing it to see if the tests break?

Contributor Author

Good idea! The ok variable does seem unnecessary. However, removing it outright breaks the unit test: because v has a fixed, constant length, the unset entries would be emitted with their zero default values.

I could check whether v[i] is zero (an HPA target metric should be required to be greater than 0) to decide whether to emit the corresponding metric, but I don't think that approach is very good. Could I instead change v to a map and emit only the metrics that actually exist in the map?

Contributor Author

I have pushed a new commit. What do you think?

Contributor
@fpetkovski fpetkovski Feb 17, 2022

I think a map could work, but it might also create a flaky test. Iterating over a map does not guarantee that the order of the keys will always be the same. Can this lead to metrics being serialized in a different order each time a test is run? Could you run the tests with -count=100?

Contributor Author
@whitebear009 whitebear009 Feb 17, 2022

I tried running it with -count=100 and all runs passed.
Because the compareOutput function used in the unit test calls the sortByLine function, the metric set is sorted before comparison.

func compareOutput(expected, actual string) error {
	entities := []string{expected, actual}
	// Align wanted and actual
	for i := 0; i < len(entities); i++ {
		for _, f := range []func(string) string{removeUnusedWhitespace, sortLabels, sortByLine} {
			entities[i] = f(entities[i])
		}
	}
	if diff := cmp.Diff(entities[0], entities[1]); diff != "" {
		return errors.Errorf("(-want, +got):\n%s", diff)
	}
	return nil
}

func sortByLine(s string) string {
	split := strings.Split(s, "\n")
	sort.Strings(split)
	return strings.Join(split, "\n")
}

Contributor

Perfect!

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 18, 2022
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fpetkovski, whitebear009

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 18, 2022
@k8s-ci-robot k8s-ci-robot merged commit 929f4ac into kubernetes:master Feb 18, 2022
Successfully merging this pull request may close these issues.

HPA using a custom metric of type Object is not visible in kube_horizontalpodautoscaler_spec_target_metric