[VPA] Request to support spec.selector in VPA #6493

Closed
yunwang0911 opened this issue Feb 1, 2024 · 5 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@yunwang0911

Which component are you using?:
Vertical Pod Autoscaler

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:

VPA only has spec.targetRef, which supports controllers such as Deployment, StatefulSet, and so on. However, VPA can't handle orphan Pods (Pods whose metadata.ownerReferences is empty). Orphan Pods occur in several scenarios: for example, some systems use orphan Pods to run stateful applications and manage them through an external system, and Pods controlled by a CR that has no scale subresource can effectively be regarded as orphan Pods as well.
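
For reference, a VPA object today can only point at a controller object via spec.targetRef, not at a set of Pods directly. A minimal sketch (names are illustrative):

# A standard VPA today: spec.targetRef must reference a controller object.
# Names are illustrative.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"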

Besides, internally the VPA recommender already uses a label selector to determine which Pods are controlled by a VPA object:

// Vpa (Vertical Pod Autoscaler) object is responsible for vertical scaling of
// Pods matching a given label selector.
type Vpa struct {
	ID VpaID
	// Labels selector that determines which Pods are controlled by this VPA
	// object. Can be nil, in which case no Pod is matched.
	PodSelector labels.Selector
	// Map of the object annotations (key-value pairs).
	Annotations vpaAnnotationsMap
	// Map of the status conditions (keys are condition types).
	Conditions vpaConditionsMap
	// Most recently computed recommendation. Can be nil.
	Recommendation *vpa_types.RecommendedPodResources
	...
}

Describe the solution you'd like.:
Add an optional field to the VPA spec, spec.selector. Users could define a VPA with targetRef or, alternatively, define a selector to select a group of Pods, as sketched below.
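
A rough sketch of what the proposed field could look like. Note that spec.selector does not exist in the current VPA API; the field and the names below are purely illustrative:

# Hypothetical: spec.selector is the field proposed in this issue.
# It is NOT part of the current VPA API; names are illustrative.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: orphan-pods-vpa
spec:
  selector:                 # proposed alternative to targetRef
    matchLabels:
      app: legacy-stateful-app
  updatePolicy:
    updateMode: "Off"       # recommendation only; orphan Pods cannot be evicted and re-created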

Describe any alternative solutions you've considered.:

Additional context.:

@yunwang0911 yunwang0911 added the kind/feature Categorizes issue or PR as related to a new feature. label Feb 1, 2024
@voelzmo
Contributor

voelzmo commented Apr 2, 2024

In the beginning, VPA allowed specifying the labelSelector directly, but it switched away from this and decided to read the selector from the /scale subresource instead. This has two benefits:

  • UX/Safety: misconfigurations are much harder to produce. Previously it was very easy to end up with incorrect configurations, for example two VPAs selecting the same Pod and then fighting over the "correct" resources to set, resulting in an endless eviction loop. These scenarios are very hard to spot as a human operator, because there is no easy way to visualize which selectors overlap and for how many Pods.
  • Convenience: instead of specifying the Pods to put under autoscaling, you specify an object controlling those Pods in the targetRef. As VPA works by evicting Pods to apply resource recommendations, there is no use in making it also apply to orphaned Pods, as they will never get re-created. This is also somewhat related to the safety aspect above: it makes it harder to configure things that don't work as expected.

I think not exposing labelSelector directly is a conscious decision for the user's benefit. You can try to argue about the VPA contract requiring a /scale subresource, but this is the common contract for autoscaling; HPA also requires it. If you're using a custom resource that you want to autoscale, exposing /scale is how to achieve it.
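
For context, a custom resource can satisfy that contract by declaring the /scale subresource in its CRD, including a labelSelectorPath that autoscalers read the Pod selector from. A minimal sketch with illustrative names:

# Sketch of a CRD exposing the /scale subresource so that VPA/HPA can
# discover the label selector of the Pods the custom resource controls.
# Group, kind, and field paths are illustrative.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myapps.example.com
spec:
  group: example.com
  names:
    kind: MyApp
    plural: myapps
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
      subresources:
        scale:
          specReplicasPath: .spec.replicas
          statusReplicasPath: .status.replicas
          labelSelectorPath: .status.labelSelector

A VPA whose targetRef points at such a MyApp object would then obtain the Pod selector from .status.labelSelector via the /scale subresource.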

@yunwang0911
Author

Thank you for your detailed explanation.

@voelzmo
Contributor

voelzmo commented Apr 3, 2024

I'm closing this for now. If we find new, compelling use-cases which would need a manual selector definition, we can re-visit our reasoning.
/close

@k8s-ci-robot
Contributor

@voelzmo: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@yunwang0911
Author

Hi @voelzmo, regarding the UX/Safety concern: would it be acceptable to report an error directly instead, i.e. not calculate a recommendation and not update the Deployment when selectors conflict? And regarding the Convenience concern: once the in-place updater is ready, could that concern be relieved?
I'm not nitpicking; I just want to explore the possibilities as much as possible. I apologize if my question makes you uncomfortable.
