Networking issue between nodes #6307
This indicates that VXLAN CNI traffic between nodes is being dropped. You should:
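The checklist itself is truncated in this capture, but a common first diagnostic step is to confirm whether VXLAN packets actually arrive on the peer node. A minimal sketch, assuming flannel's default VXLAN UDP port 8472 and a hypothetical physical interface name `ens192`:

```shell
# On the receiving node, watch for VXLAN traffic arriving from peers.
# 8472 is flannel's default VXLAN UDP port; "ens192" is a placeholder NIC name.
tcpdump -ni ens192 udp port 8472
# If packets arrive here but pods still cannot talk, bad checksums are a
# likely culprit; adding -v makes tcpdump flag "bad udp cksum" on affected frames.
```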
Great, it works. Thanks a lot. Will this be taken care of in a future version, without this workaround?
No. The workaround disables hardware checksum offload and imposes a significant performance hit. It should only be used on nodes with buggy NIC drivers that fail to correctly calculate checksums for VXLAN packets. Preferably, you would use a different virtual interface type, or ask VMware to update the driver to a version not affected by this bug.
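The workaround under discussion is the commonly cited one of disabling transmit checksum offload on the flannel VXLAN interface with `ethtool`. A sketch, assuming the interface carries flannel's default name `flannel.1`:

```shell
# Disable TX checksum offload on the VXLAN interface (workaround only;
# this shifts checksumming onto the CPU and costs throughput).
ethtool -K flannel.1 tx-checksum-ip-generic off
```

Note that this setting does not persist across reboots; a systemd unit or udev rule would be needed to reapply it on each boot.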
@brandond So this issue is related to VMware (and some other drivers)? I thought this problem was about a kernel bug.
That was one cause of it; IIRC there is another, more common bug in one of the virtual NIC drivers, which is why it's most commonly seen on VMware.
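To check which virtual NIC driver a node is using (on VMware guests this is typically `vmxnet3`), `ethtool -i` reports the driver name and version. A sketch with a hypothetical interface name:

```shell
# Identify the driver behind the uplink interface ("ens192" is a placeholder).
ethtool -i ens192
# Reported fields include: driver (e.g. vmxnet3), version, firmware-version.
```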
Environmental Info:
RKE2 Version:
v1.30.2+rke2r1
Node(s) CPU architecture, OS, and Version:
3 Linux nodes (RHEL 8), 2 Windows nodes (Server 2019)
All of the above machines are VMware VMs.
Cluster Configuration:
1 server (Linux)
4 agents (2 Linux and 2 Windows)
Flannel CNI
Describe the bug:
Except for the first node, all other nodes are unable to access the Kubernetes API and cannot resolve DNS.
Direct node-to-node connectivity works without any issues.
Expected behavior:
There should be no connectivity issues between nodes.
Actual behavior:
Lots of connectivity issues.