What happened:
We are trying to establish L2 connectivity between KubeVirt VMs. Macvtap seems like a promising option for this, as it eliminates the bridge in the virt-launcher pod. Once the VMs are up, they can ping each other without a problem, both when they run on the same node and when they run on different nodes.
Initially, however, the VMs do not see any LLDP neighbors, while the underlying hypervisor and network switch do see both VMs. This can be seen in the screenshot below, taken on the Proxmox host (sentinel) that hosts the Kubernetes nodes: it sees both the VM (vm-ubuntu-1) and the Kubernetes node (node1).
Screenshot: LLDP neighbors as seen from Proxmox (or the core switch)
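As an illustration of how the missing neighbors show up, LLDP visibility can be checked from inside a guest with lldpd's CLI (the use of lldpd in the guests is an assumption here; any LLDP agent behaves the same way):
# run inside one of the guest VMs; initially this lists no neighbors,
# even though Proxmox and the switch already see the VM
lldpcli show neighbors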
What you expected to happen:
Now comes the interesting part. To debug this behavior, tcpdump was started on the virt-launcher's net1 interface, from within the network namespace of that container. As soon as this tcpdump is running, the VM discovers the Proxmox host via LLDP, and the VMs discover each other as long as they run on the same node.
For both VMs to discover each other, two tcpdumps need to run, one on each VM's net1 interface.
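A minimal sketch of how such a capture can be started from the node (the cni-* namespace name is just an example, taken from the commands further below; it differs per node and per pod):
# list the CNI-created network namespaces on the Kubernetes node
ip netns list
# capture LLDP frames (EtherType 0x88cc) on the virt-launcher's net1 interface;
# tcpdump enables promiscuous mode by default, which is presumably why
# LLDP discovery starts working as soon as the capture runs
ip netns exec cni-a1896a46-f32e-1880-b2aa-c2e75d85a3ed tcpdump -eni net1 ether proto 0x88cc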
How to reproduce it (as minimally and precisely as possible):
1. Enable the macvtap feature gate (see the sketch after this list).
2. Install macvtap-cni.
3. Deploy test VMs.
4. Start tcpdump to enable L2 LLDP connectivity (as described above).
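A sketch of step 1, assuming the KubeVirt CR is named kubevirt in the kubevirt namespace and that the macvtap binding is gated behind the Macvtap feature gate (both names are assumptions; adjust them to your deployment). Steps 2 and 3 follow the macvtap-cni README, and step 4 is the tcpdump shown above.
# note: a JSON merge patch replaces the whole featureGates list,
# so include any feature gates that are already enabled
kubectl -n kubevirt patch kubevirt kubevirt --type=merge \
  -p '{"spec":{"configuration":{"developerConfiguration":{"featureGates":["Macvtap"]}}}}'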
Anything else we need to know?:
I have already posted this issue on KubeVirt but have not yet received a reply: kubevirt/kubevirt#9464
Instead of tcpdump, @fabiand mentioned that we can also just enable promiscuous mode on the net1 interfaces:
ip link show; ip -all netns exec ip link show
ip netns exec cni-a1896a46-f32e-1880-b2aa-c2e75d85a3ed ip link set net1 promisc on
ip netns exec cni-5858b1d6-2b3f-5e7f-86bd-16746c6e94ca ip link set net1 promisc on
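If the promiscuous-mode route is taken, the result can be verified in the same namespaces:
# the flags in the output should now include PROMISC,
# e.g. <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP>
ip netns exec cni-a1896a46-f32e-1880-b2aa-c2e75d85a3ed ip link show net1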