What steps did you take and what happened:
Set up a node A with two labels: a: b and c: d
Set up another node B with one label: a: b
Set up a Sonobuoy plugin with the DaemonSet driver
We want to avoid the plugin running on node A
Run a Sonobuoy DaemonSet plugin with either of the PodSpec configurations below:
Case X: Set a nodeSelector that runs Pods only on nodes A and B, plus a nodeAffinity that avoids node A
Case Y: Set a nodeAffinity containing two match expressions in a single nodeSelectorTerms entry: one forces Pods to run only on nodes A and B, and the other prevents Pods from running on node A
podSpec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: a
            operator: In
            values:
            - b
          - key: c
            operator: NotIn
            values:
            - d
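For comparison, a podSpec for Case X might look like the sketch below (the label keys and values follow the steps above; the exact spec used in the report may differ):

```yaml
podSpec:
  # nodeSelector restricts Pods to nodes labeled a=b (nodes A and B)
  nodeSelector:
    a: b
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # NotIn excludes node A, which also carries the c=d label
          - key: c
            operator: NotIn
            values:
            - d
```

Under Kubernetes scheduler semantics, both the nodeSelector and the nodeAffinity must be satisfied, so only node B should qualify.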
Run a Sonobuoy DaemonSet plugin with the above plugin configuration
Sonobuoy counts both nodes A and B as available nodes
In our environment, node A is a Fargate node and node B is a normal node. DaemonSets cannot run on Fargate nodes, so the plugin always fails with a No pod was scheduled on node A error.
What did you expect to happen:
Sonobuoy should not count node A as an available node.
This issue is caused by an inconsistency between the Kubernetes scheduler and Sonobuoy in how nodeSelector and nodeAffinity are handled.
Case X: Kubernetes schedules a Pod on a node that satisfies both nodeSelector and nodeAffinity.
If you specify both nodeSelector and nodeAffinity, both must be satisfied for the Pod to be scheduled onto a node.
Currently, Sonobuoy effectively ORs nodeSelector and nodeAffinity: a node matching either one is counted as available.
Case Y: In Kubernetes, match expressions in a single matchExpressions field are ANDed.
If you specify multiple expressions in a single matchExpressions field associated with a term in nodeSelectorTerms, then the Pod can be scheduled onto a node only if all the expressions are satisfied (expressions are ANDed).
Currently, Sonobuoy counts a node as available if at least one expression in the term matches.
Environment:
Sonobuoy version: 0.57.1 (go1.21.4)
Kubernetes version: (use kubectl version): Confirmed with multiple versions, 1.25~1.27
Kubernetes installer & version: AWS EKS
Cloud provider or hardware configuration: AWS EKS
OS (e.g. from /etc/os-release): n/a
Sonobuoy tarball (which contains * below): Please request if needed
nonylene changed the title from "Node selector / Node affinity behavior is inconsistent with Kubernetes" to "Node selector / Node affinity behavior is inconsistent with Kubernetes Scheduler" on Feb 1, 2024.
nonylene added a commit to nonylene/sonobuoy that referenced this issue on Feb 5, 2024:
Align node filter behavior with Kubernetes scheduler
to avoid errors when both nodeSelector and nodeAffinity are set
in PodSpec.
> If you specify both nodeSelector and nodeAffinity, both must be satisfied for the Pod to be scheduled onto a node.
>
> https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
Issue: vmware-tanzu#1957
Signed-off-by: nonylene <nonylene@gmail.com>
nonylene added a commit to nonylene/sonobuoy that referenced this issue on Feb 5, 2024:
Align nodeAffinity matching behavior with Kubernetes scheduler.
> If you specify multiple expressions in a single matchExpressions field associated with a term in nodeSelectorTerms, then the Pod can be scheduled onto a node only if all the expressions are satisfied (expressions are ANDed).
>
> https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/

Close vmware-tanzu#1957
Signed-off-by: nonylene <nonylene@gmail.com>