When I submit a ScaledObject that includes both a CPU utilization trigger and other resource triggers, the KEDA operator may update the HPA continuously and never stop.
Expected Behavior
Only one HPA update occurs
Actual Behavior
Continuous HPA updates
Steps to Reproduce the Problem
Create a ScaledObject with both CPU utilization triggers and other resource triggers.
Ensure the CPU utilization trigger is not the last one in the ScaledObject.
Use a Kubernetes cluster with a version below 1.27 (e.g., 1.26).
Observe the KEDA operator continuously logging "Found difference in the HPA spec according to ScaledObject".
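The steps above can be captured in a minimal ScaledObject sketch (the object and deployment names, and the threshold values, are hypothetical):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject      # hypothetical name
spec:
  scaleTargetRef:
    name: example-deployment      # hypothetical target
  triggers:
    - type: cpu                   # CPU utilization trigger, deliberately not last
      metricType: Utilization
      metadata:
        value: "60"
    - type: memory                # a second resource trigger after it
      metricType: Utilization
      metadata:
        value: "70"
```

Applying this on a cluster below 1.27 should reproduce the continuous HPA updates.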
This issue is fundamentally the same as the one encountered in kubernetes/kubernetes#74099. The root cause is that Kubernetes reorders spec.metrics: the HPA v1 conversion logic changes the position of the CPU utilization metric.
When creating or updating an HPA, the conversion logic in these segments of code, link1 and link2, converts the first CPU utilization metric into the HPA v1 field (if there are multiple CPU utilization triggers, the others are lost) and stores the remaining metrics in an annotation. When converting back from HPA v1 to HPA v2, it appends the CPU utilization metric at the end (link3).
As a result, if the ScaledObject has multiple resource triggers and one of them is a CPU utilization trigger, the stored HPA will always have the CPU utilization metric at the end. Additionally, if there are multiple CPU utilization triggers, only one survives (though multiple CPU utilization triggers in one HPA arguably have little practical value). The KEDA operator therefore continuously detects a difference against the order it derives from the ScaledObject and keeps updating the HPA.
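The round trip can be illustrated with a small simulation. This is a sketch of the conversion behavior described above using made-up metric names, not actual KEDA or Kubernetes code:

```python
# Simulation of the autoscaling/v1 round trip described above.
# Metric names here are hypothetical stand-ins for HPA v2 metric specs.

def v1_round_trip(metrics):
    """Simulate storing an HPA through autoscaling/v1 and reading it back."""
    # v1 only has targetCPUUtilizationPercentage, so the first CPU utilization
    # metric becomes that field (any further CPU utilization metrics are lost)...
    cpu = next((m for m in metrics if m == "cpu-utilization"), None)
    # ...and all non-CPU metrics are preserved in an annotation.
    others = [m for m in metrics if m != "cpu-utilization"]
    # Converting back to v2 appends the CPU metric at the end.
    return others + ([cpu] if cpu is not None else [])

desired = ["cpu-utilization", "memory-utilization"]  # CPU trigger listed first
stored = v1_round_trip(desired)
print(stored)             # ['memory-utilization', 'cpu-utilization']
print(stored != desired)  # True: the operator sees a diff and updates again
```

Note that the round trip itself is stable (running it a second time leaves the stored order unchanged), but the operator always compares against the order derived from the ScaledObject, so on clusters that default to v1 conversion the diff never goes away.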
In Kubernetes 1.27 and later, this issue is resolved because the autoscaling/v1 schema is deprioritized behind v2, so the API server no longer defaults to converting through HPA v1. The relevant change is shown below:
diff --git a/pkg/apis/autoscaling/install/install.go b/pkg/apis/autoscaling/install/install.go
index 3740aee3155..424fc5ce85d 100644
--- a/pkg/apis/autoscaling/install/install.go
+++ b/pkg/apis/autoscaling/install/install.go
@@ -40,6 +40,5 @@ func Install(scheme *runtime.Scheme) {
utilruntime.Must(v2.AddToScheme(scheme))
utilruntime.Must(v2beta1.AddToScheme(scheme))
utilruntime.Must(v1.AddToScheme(scheme))
- // TODO: move v2 to the front of the list in 1.24
- utilruntime.Must(scheme.SetVersionPriority(v1.SchemeGroupVersion, v2.SchemeGroupVersion, v2beta1.SchemeGroupVersion, v2beta2.SchemeGroupVersion))
+ utilruntime.Must(scheme.SetVersionPriority(v2.SchemeGroupVersion, v1.SchemeGroupVersion, v2beta1.SchemeGroupVersion, v2beta2.SchemeGroupVersion))
Hello,
Thanks for reporting this. Just to understand the issue: this affects k8s 1.26 or below, and 1.27 has already fixed it, right? We currently only support >= 1.27 officially. Personally I don't have trouble fixing this if it's easy, but I'd like to know @tomkerkhove and @zroubalik's thoughts
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
stalebot added the stale label (all issues that are marked as stale due to inactivity) on Jul 26, 2024
Logs from KEDA operator
No response
KEDA Version
2.12.1
Kubernetes Version
< 1.27
Scaler Details
Resource
Anything else?