Thanks for your great work, @Joseph-Irving.

I have a question about nidhogg, if you don't mind. When Kubernetes adds new nodes to the cluster (e.g. via the cluster-autoscaler), is it possible for a node to remain untainted for a short period of time, until the nidhogg controller applies its taints?

I'm asking because during that window a pod could be scheduled onto a node where the daemonsets are not ready yet.
It's definitely possible for that to happen.
However, most nodes start up with the node.kubernetes.io/not-ready taint applied to them, and Nidhogg adds its additional taints before that taint is removed. In practice we've never seen Nidhogg fail to taint a node before normal pods start scheduling on it.
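To make the sequencing concrete, here is a rough sketch (not taken from this repo's docs) of what a freshly registered node's taints might look like during that window. The key `nidhogg.uswitch.com/kube-system.kiam` is purely illustrative; the real key depends on which daemonsets you configure Nidhogg to watch.

```yaml
# Illustrative node snippet: taints present shortly after the node joins,
# before the kubelet reports Ready and before the required daemonset pod is up.
apiVersion: v1
kind: Node
metadata:
  name: example-node          # hypothetical node name
spec:
  taints:
    - key: node.kubernetes.io/not-ready            # applied by Kubernetes until the node is Ready
      effect: NoSchedule
    - key: nidhogg.uswitch.com/kube-system.kiam    # example Nidhogg taint; removed once the daemonset pod is ready
      effect: NoSchedule
```

As long as the Nidhogg taint is added before the not-ready taint is cleared, there is no gap in which ordinary pods can be scheduled onto the node.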
There is work in upstream Kubernetes to add a feature like this to Kube itself, but it looks like it won't be done for quite a while yet: kubernetes/enhancements#1003
Observed on GKE: the autoscaler considers pre-tainted nodes (taints set as part of the node pool definition) unfit unless the to-be-scheduled pods tolerate the taint, which is exactly what you don't want when you're interested in nidhogg.

This all makes sense, but it's good to know that you can't seed nodes with the nidhogg taint in an attempt to outsmart Kubernetes.
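For illustration, this is roughly what it would take for a pod to tolerate a pre-seeded nidhogg taint (again, the taint key is a made-up example). You would need something like this on every workload for the autoscaler to treat the tainted node pool as usable, which is precisely what you don't want for ordinary pods.

```yaml
# Hypothetical pod snippet: tolerating a pre-seeded nidhogg taint.
# Adding this to normal workloads defeats the purpose of the taint.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  tolerations:
    - key: nidhogg.uswitch.com/kube-system.kiam   # example key; yours will differ
      operator: Exists
      effect: NoSchedule
  containers:
    - name: app
      image: nginx   # placeholder image
```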