[BUG] node lifecycle controller in yurt-manager can not update status of node #1934
Comments
@crazytaxii Thanks for raising the issue.
/assign @crazytaxii
It has been fixed in #1884.
The entire `system:controller:node-controller` ClusterRole for kube-controller-manager in a Kubernetes v1.27.2 cluster is:

```yaml
# ...
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - delete
  - get
  - list
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - pods/status
  verbs:
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - delete
  - list
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - create
  - get
  - list
  - update
- apiGroups:
  - ""
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - patch
  - update
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
```

Compare to the ClusterRole of yurt-manager (v1.4):
```yaml
# ...
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  # - delete # missing one
  - get
  - list
  - patch
  - update
  - watch # extra one
# - apiGroups: # missing one
#   - ""
#   resources:
#   - nodes/status
#   verbs:
#   - patch
#   - update
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - create # extra one
  - delete
  - get
  - list
  - patch # extra one
  - update # extra one
  - watch # extra one
- apiGroups:
  - ""
  resources:
  - pods/status
  verbs:
  # - patch # missing one
  - update
# - apiGroups: # missing one
#   - networking.k8s.io
#   resources:
#   - clustercidrs
#   verbs:
#   - create
#   - get
#   - list
#   - update
# - apiGroups: # missing one
#   - ""
#   - events.k8s.io
#   resources:
#   - events
#   verbs:
#   - create
#   - patch
#   - update
# ...
```

But the node lifecycle controller in yurt-manager definitely differs a lot from the one in kube-controller-manager v1.27.2.
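For anyone who wants to regenerate this comparison, a minimal sketch; the kube-controller-manager role name is standard, while `yurt-manager-role` is an assumed name for yurt-manager's ClusterRole, so adjust it to whatever your deployment actually installs:

```sh
# Dump the node-controller role shipped with kube-controller-manager
kubectl get clusterrole system:controller:node-controller -o yaml

# Dump yurt-manager's role; "yurt-manager-role" is an assumed name here,
# list candidates with: kubectl get clusterroles | grep yurt
kubectl get clusterrole yurt-manager-role -o yaml
```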
@crazytaxii Except
What happened:

The node always stays in `Ready` status after stopping the kubelet on it, even after shutting down the node itself. This bug prevents the Pods from being migrated to other nodes.

What you expected to happen:

The abnormal node should be updated to `NotReady` status.

How to reproduce it (as minimally and precisely as possible):
Stop the kubelet on a node.
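A minimal sketch of this reproduction, assuming a systemd-managed kubelet and a node named `node-1` (both assumptions; adjust for your environment):

```sh
# On the affected node: stop the kubelet
systemctl stop kubelet

# From a machine with cluster access: watch the node's status.
# With this bug present, the node never transitions to NotReady.
kubectl get node node-1 -w
```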
Anything else we need to know?:
Error log in yurt-manager's node lifecycle controller:
`nodes/status` is a subresource; it should also be added to the ClusterRole of yurt-manager.
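A sketch of the missing rule, with its shape copied from the kube-controller-manager role quoted above, that would need to be appended to the `rules` of yurt-manager's ClusterRole:

```yaml
# Grant access to the nodes/status subresource so the node lifecycle
# controller can mark an unreachable node NotReady
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
  - update
```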
Environment:

Kubernetes version (`kubectl version`): v1.27.2

/kind bug