Local PV prevents node from scaling down #6417
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
I also encountered this problem. Have you solved it? @jewelzqiu
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Which component are you using?:

cluster-autoscaler

What version of the component are you using?:

Component version: v1.26.3

What k8s version are you using (`kubectl version`)?:

kubectl version Output

What environment is this in?:

EKS
What did you expect to happen?:
Under-utilized nodes with local PV should be scaled down
What happened instead?:
Under-utilized nodes with local PV cannot be scaled down
How to reproduce it (as minimally and precisely as possible):

- If a pod has a local PV mounted (provisioned by local-volume-provisioner in our use case)
- CA refuses to evict the pod, because the local PV has NodeAffinity pinning it to a specific node (see the example PV below)
- There is no way to bypass this restriction and scale down those under-utilized nodes
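For context, here is a minimal sketch of the kind of PV involved. All names, the path, and the hostname are made-up examples; local-volume-provisioner creates PVs of roughly this shape. The required `nodeAffinity` is mandatory for local PVs, and it is what makes CA treat any pod bound to this volume as un-evictable:

```yaml
# Hypothetical local PV, similar to what local-volume-provisioner creates.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example            # example name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage   # example StorageClass
  local:
    path: /mnt/disks/vol1           # example host path
  nodeAffinity:                     # required for local PVs;
    required:                       # pins the volume to one node
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - ip-10-0-0-1.ec2.internal   # the pinned node
```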
Anything else we need to know?:

Specifying `safe-to-evict-local-volumes` and `safe-to-evict` does not help, because the pod eventually reaches the bound-PV check, where it fails anyway. I think we should provide an option to exclude some volumes before the bound-PV check. The annotations we tried are sketched below.
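For reference, these are the annotations we set on the pod (the volume name `scratch` is a made-up example); as described above, they do not get the pod past the bound-PV check:

```yaml
metadata:
  annotations:
    # Mark the pod as evictable despite local storage.
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
    # Comma-separated list of local volumes considered safe to evict.
    cluster-autoscaler.kubernetes.io/safe-to-evict-local-volumes: "scratch"
```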