feature: Support differing network interface names when using kube-vip #128
Comments
Hi @chrismuzyn, you can try to set a different interface using the variable |
@MonolithProjects Correct me if I'm wrong, but I believe the kube-vip stuff is controlled via a DaemonSet, and that DaemonSet is instantiated from a single node. So only that single node's https://github.com/lablabs/ansible-role-rke2/blob/main/templates/kube-vip/kube-vip.yml.j2#L38 |
Hi @chrismuzyn, I had the same problem with different network interfaces per node (eth0, enp8s0, etc.), so one rke2_interface parameter for all nodes is not enough in HA mode.
Hope it helps. |
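(The snippet referenced above is not preserved here. One plausible shape for such a workaround, purely as a sketch and not necessarily what @michal-kania used, is overriding `rke2_interface` per host in the Ansible inventory; host and interface names below are illustrative:)

```yaml
# inventory/hosts.yml (sketch) — per-host interface override.
# This only solves the problem if the role's kube-vip template
# actually consumes the per-host value, which is what this
# issue is asking for.
all:
  children:
    masters:
      hosts:
        k6:
          rke2_interface: enp5s0d1
        k5:
          rke2_interface: eth0
        k8:
          rke2_interface: eth0
```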
Thanks @michal-kania for the tip; it seems to have at least halfway worked. It's been a while since I originally posted this, so yesterday I pulled the latest version of this role and tried again, trying to set
This makes k6 the kube-vip leader since it's first. It has interface enp5s0d1, and that seems to propagate correctly to the kube-vip-ds pod. But the other 3 hosts are all set to use
Since k5 and k8 don't have
Of course, the load-balancer IPs only work for pods on the nodes whose kube-vip-ds pod is working, so something is not quite right. I'll see if I can figure out the logic in Ansible, but if this is a kube-vip problem I'm unfortunately outside of my element. |
Summary
Currently, every node in the rke2 cluster must share the same primary interface name for the kube-vip DaemonSet to function. If a node has a different network card (and therefore a different interface name), we need to be able to specify that node's unique interface.
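(Because a DaemonSet is a single manifest rendered once, it cannot carry a different `vip_interface` value per node. One possible direction, sketched below as an assumption rather than a confirmed fix: make the env var in the role's kube-vip template conditional, since kube-vip can auto-detect the interface of the default route when `vip_interface` is left unset.)

```yaml
# Sketch of a change to templates/kube-vip/kube-vip.yml.j2 —
# not the actual template. Render vip_interface only when an
# interface is explicitly forced; otherwise omit it and rely
# on kube-vip's own interface auto-detection on each node.
env:
{% if rke2_interface is defined %}
  - name: vip_interface
    value: "{{ rke2_interface }}"
{% endif %}
```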
Issue Type
Feature Idea