Multi-nic node ip must correspond to kubelet api listening address #407
Comments
IMO this is not an OpenStack-specific issue; it is the same issue in all cloud providers. So instead of solving it in the OpenStack cloud.conf, it should be solved at the Kubernetes node level.
With only two networks you could define PublicNetworkName with the network that you do not want to be internal. However, this fails again when you have 3 or more NICs. So the real solution for multiple NICs would be a configurable variable, e.g. "InternalNetworkName", used to define the internal network; all other networks would then be external. It seems that at least GCE, CloudStack and Azure just hardcode that the first interface is always the internal one. In AWS they seem to have concluded that there should also be only one InternalIP (kubernetes/kubernetes#61921).

@zetaab, I do not think this problem is solvable in generic cloud provider code, as there isn't enough information about which of the internal networks would be the "correct" one. And if you added some new information to the networks, all cloud providers would need to adapt to it. The easiest solution seems to be having only one InternalIP, which is how most of the providers already work today. In the OpenStack case we do not get the indexes of the networks, so we cannot just select the first one; we need a user-defined variable (InternalNetworkName) to select the correct network. An explicit way of selecting the network also allows more complex network configurations.
Providing a way to set the 'IP to be used inside the k8s cluster' seems to be a valid request, e.g.
Using network[0] would be nice if it were possible. However, we just get a list of IPs, which is then converted to a map and iterated, so there is no guarantee that the IPs will be in the same order for all nodes. I am not even sure whether the list we get from gophercloud is sorted. But most current users are probably running a single network, or a dual network with PublicNetworkName, so as long as those keep working as before, nothing should break.
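A cloud.conf carrying such a user-defined internal network selector might look like the following sketch. The key name follows the internal-network-name option discussed in this thread; the network name and auth-url are made-up placeholders:

```ini
[Global]
auth-url = https://keystone.example.com:5000/v3
# credentials omitted

[Networking]
# Addresses from this network are reported as InternalIPs;
# addresses on any other network are treated as external.
internal-network-name = control-net
```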
This will help in multi-NIC k8s node deployments, where the cloud provider was reporting an arbitrary node IP address instead of the kubelet listening address. Now you can specify the network the cloud provider must select IP addresses from. This does not affect the previous logic unless admins specify the internal-network-name option in the cloud-config file. Related to: kubernetes#407 Change-Id: Ifd576ded28f594f74ab45942a1bed11e223650c7
This will help in multi-NIC k8s node deployments. Previously, the cloud provider assigned all addresses in random order and k8s selected only one of them. But a multi-NIC scenario usually requires specifying which network is the "control" network, and admins want to bind the kubelet listening address only to that "control" net. This commit does not affect the previous logic unless internal-network-name is specified in the cloud-config file. Related to: kubernetes#407 Change-Id: I1e4076b853b12020c47529b0590f21523b9d26a8
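The selection logic described in these commit messages can be sketched roughly as below. This is an illustrative Go sketch, not the provider's actual code: the `Address` type, the "fixed"/"floating" labels, and `nodeAddresses` are assumed names.

```go
package main

import "fmt"

// Address mimics one per-network address entry as returned by the
// OpenStack API (illustrative shape, not the provider's real types).
type Address struct {
	IP   string
	Type string // "fixed" or "floating"
}

// nodeAddresses reports addresses on the configured internal network as
// internal IPs; floating IPs and addresses on any other network are
// external. An empty internalNet keeps the legacy behavior: every
// fixed IP is treated as internal.
func nodeAddresses(nets map[string][]Address, internalNet string) (internal, external []string) {
	for netName, addrs := range nets {
		for _, a := range addrs {
			switch {
			case a.Type == "floating":
				external = append(external, a.IP)
			case internalNet == "" || netName == internalNet:
				internal = append(internal, a.IP)
			default:
				external = append(external, a.IP)
			}
		}
	}
	return
}

func main() {
	nets := map[string][]Address{
		"deploy-net":  {{IP: "192.168.10.102", Type: "fixed"}},
		"control-net": {{IP: "172.16.10.95", Type: "fixed"}},
	}
	internal, external := nodeAddresses(nets, "control-net")
	fmt.Println("internal:", internal, "external:", external)
}
```

Because the networks arrive as a map, iteration order is random; filtering by name rather than position is what makes the result deterministic across nodes.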
What is the setting of external_openstack_network_public_networks:? The floating IP network?
I am not sure where is
/kind bug
What happened:
The OpenStack cloud provider replaces a node's internal IP with the OpenStack instance's addresses, but the node's kubelet API can be bound to only one of these addresses (e.g. for security purposes). Typical scenario: the first NIC is for deployment, the second NIC is for the k8s network, and the kubelet --address parameter equals the second NIC's IP. In this case the cluster can receive the wrong address for the node's kubelet API (because it usually selects the first NIC's IP as the cluster IP), and features like "kubectl logs" and "kubectl proxy" will not work.
What you expected to happen:
The possibility to declare a control network in the cloud config file,
or to declare kubelet listen addresses per node.
How to reproduce it (as minimally and precisely as possible):

- Use multiple NICs for Kubernetes nodes and set the kubelet --address parameter to one of them.
- Check that the wrong IP was assigned to the INTERNAL-IP field in kubectl get nodes -o wide output.
- Try to kubectl exec into a pod that was scheduled to a node with the wrong IP.

Anything else we need to know?:
from opentack-cloud-provider logs
Error patching node with cloud ip addresses = [failed to patch status "{\"status\":{\"$setElementOrder/addresses\":[{\"type\":\"InternalIP\"},{\"type\":\"ExternalIP\"},{\"type\":\"InternalIP\"}],\"addresses\":[{\"address\":\"172.16.10.95\",\"type\":\"InternalIP\"},{\"address\":\"192.168.10.102\",\"type\":\"InternalIP\"}]}}" for node "cmp1.****": Node "cmp1.****" is invalid: status.addresses[1]: Duplicate value: core.NodeAddress{Type:"InternalIP", Address:"192.168.10.102"}]
Environment:
uname -a: Linux ctl03 4.4.0-36-generic #55-Ubuntu SMP Thu Aug 11 18:01:55 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
nic1 - 192.168.10.0/24
nic2 - 172.16.10.0/24
kubectl get nodes -o wide