Which component are you using?: cluster-autoscaler
What version of the component are you using?: cluster-autoscaler-1.23.0
Component version: cluster-autoscaler-1.23.0
What k8s version are you using (kubectl version)?:
$ kubectl version
What environment is this in?:
self-built Kubernetes cluster, not a cloud-provider cluster
What did you expect to happen?: The unneeded time of nodes should not be reset.
What happened instead?: The unneeded time was reset.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Why is unneededNodes cleaned when the number of nodes a node is using, or is used by, exceeds 50? What is the purpose? This makes scale-down in our cluster slow, because cleaning the map resets each node's unneeded time (see the sketches after the quoted code below).
cluster-autoscaler/core/scale_down.go
// Nothing super-bad should happen if the node is removed from tracker prematurely.
simulator.RemoveNodeFromTracker(sd.usageTracker, toRemove.Node.Name, sd.unneededNodes)
nodeDeletionStart := time.Now()
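For context, as I read scale_down.go, sd.unneededNodes maps a node name to the time the node was first marked unneeded, and a node is only deleted after it has stayed in that map for at least --scale-down-unneeded-time. Below is a minimal sketch of why dropping a key restarts that wait; the eligibleForScaleDown helper and the hard-coded 10-minute value are my own illustration, not autoscaler code:

package main

import (
    "fmt"
    "time"
)

// Stand-in for the --scale-down-unneeded-time setting (assumed value).
const scaleDownUnneededTime = 10 * time.Minute

// eligibleForScaleDown is a hypothetical helper: a node may be removed
// only after it has been continuously unneeded for the configured time.
func eligibleForScaleDown(unneededSince map[string]time.Time, node string, now time.Time) bool {
    since, ok := unneededSince[node]
    return ok && now.Sub(since) >= scaleDownUnneededTime
}

func main() {
    now := time.Now()
    unneeded := map[string]time.Time{
        "node-a": now.Add(-9 * time.Minute), // almost eligible for deletion
    }

    // If RemoveNodeFromTracker wipes "node-a" from the map (the
    // "too many" path in tracker.go), the next autoscaler loop
    // re-adds it with a fresh timestamp and the wait starts over.
    delete(unneeded, "node-a")
    unneeded["node-a"] = now // re-discovered as unneeded

    fmt.Println(eligibleForScaleDown(unneeded, "node-a", now)) // false: timer was reset
}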
cluster-autoscaler/simulator/tracker.go
// RemoveNodeFromTracker removes node from tracker and also cleans the passed utilization map.
func RemoveNodeFromTracker(tracker *UsageTracker, node string, utilization map[string]time.Time) {
    klog.V(4).Infof("Removing node %s from utilization map", node)
    keysToRemove := make([]string, 0)
    if mainRecord, found := tracker.Get(node); found {
        if mainRecord.usingTooMany {
            klog.V(4).Infof("Node %s is using too many nodes, removing all keys from utilization map", node)
            keysToRemove = getAllKeys(utilization)
        } else {
        usingloop:
            for usedNode := range mainRecord.using {
                if usedNodeRecord, found := tracker.Get(usedNode); found {
                    if usedNodeRecord.usedByTooMany {
                        klog.V(4).Infof("Node %s is used by too many nodes, removing all keys from utilization map", usedNode)
                        keysToRemove = getAllKeys(utilization)
                        break usingloop
                    } else {
                        for anotherNode := range usedNodeRecord.usedBy {
                            keysToRemove = append(keysToRemove, anotherNode)
                        }
                    }
                }
            }
        }
    }
    tracker.Unregister(node)
    delete(utilization, node)
    for _, key := range keysToRemove {
        delete(utilization, key)
    }
}
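To illustrate the fan-out, here is a trimmed-down model of the quoted logic (not the real UsageTracker API; the maxTracked name and threshold mirror the 50-node limit mentioned above but are assumptions here). Once usingTooMany is set on a record, removing that single node wipes every entry from the utilization map:

package main

import (
    "fmt"
    "time"
)

// Assumed threshold at which the tooMany flags flip, per the quoted code.
const maxTracked = 50

type record struct {
    using        map[string]bool
    usingTooMany bool
}

// markUsing records that node schedules pods onto usedNode and flips
// usingTooMany once more than maxTracked nodes are tracked, mirroring
// the tooMany bookkeeping in tracker.go.
func markUsing(records map[string]*record, node, usedNode string) {
    r, ok := records[node]
    if !ok {
        r = &record{using: map[string]bool{}}
        records[node] = r
    }
    r.using[usedNode] = true
    if len(r.using) > maxTracked {
        r.usingTooMany = true
    }
}

// removeNode models RemoveNodeFromTracker: on the usingTooMany path it
// deletes every key from the utilization map, not just the removed node.
func removeNode(records map[string]*record, node string, utilization map[string]time.Time) {
    if r, ok := records[node]; ok && r.usingTooMany {
        for k := range utilization {
            delete(utilization, k)
        }
    }
    delete(records, node)
    delete(utilization, node)
}

func main() {
    records := map[string]*record{}
    utilization := map[string]time.Time{}
    now := time.Now()

    // 60 unneeded nodes, all tracked as being used by "node-0".
    for i := 0; i < 60; i++ {
        name := fmt.Sprintf("node-%d", i)
        utilization[name] = now.Add(-9 * time.Minute)
        markUsing(records, "node-0", name)
    }

    removeNode(records, "node-0", utilization)
    fmt.Println(len(utilization)) // 0: unneeded time reset for every node
}

So deleting one node whose record crossed the 50-node limit resets the scale-down timer for all 60 nodes, which matches the slow scale-down we observe.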