[WIP] Update EVPN-GW API Readme

Signed-off-by: Dimitrios Markou <dimitrios.markou@est.tech>

mardim91 committed May 4, 2023
1 parent 8cc5c42 commit 7152915

Showing 3 changed files with 61 additions and 3 deletions.
64 changes: 61 additions & 3 deletions network/evpn-gw/README.md
@@ -12,9 +12,48 @@ The EVPN Gateway API in the Kubernetes context is used for network provisioning

The API works on four object classes: VRF (Virtual Routing and Forwarding), SVI (Switched Virtual Interface), LogicalBridge, and BridgePort. Their relationships are illustrated in the API data model below.

```mermaid
erDiagram
LogicalBridge }|..o{ BridgePort : ""
LogicalBridge ||..o| SVI: ""
SVI }o..|| VRF: ""
LogicalBridge ||..o| L2-EVPN : ""
VRF ||..o| L3-EVPN : ""
LogicalBridge{
string name
uint vlan_id[key]
uint vni[optional]
}
BridgePort{
string name
uint vport_id[key]
string mac_address
PortType ptype
List vlan_id
}
VRF{
string name[key]
uint vni[optional]
uint routing_table
string loopback_ip
string vtep_ip[optional]
}
L3-EVPN{
uint rd
string rmac
uint route_target
}
L2-EVPN{
uint rd
uint route_target
}
SVI{
uint vrf[key]
uint vlan_id[key]
string mac_address
List gw_ip
}
```
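
For illustration, the data model above can also be sketched as plain Go types. The struct and field names below simply mirror the entities and attributes of the diagram; they are an assumption for readability, not the actual protobuf/gRPC definitions of the API.

```go
package main

import "fmt"

// PortType distinguishes access ports (single VLAN) from trunk ports
// (a list/range of VLANs).
type PortType string

const (
	PortTypeAccess PortType = "ACCESS"
	PortTypeTrunk  PortType = "TRUNK"
)

// LogicalBridge is a broadcast domain keyed by VLAN ID, optionally extended
// over VXLAN (L2-EVPN) via a VNI.
type LogicalBridge struct {
	Name   string
	VlanID uint32  // key
	VNI    *uint32 // optional
}

// BridgePort attaches a VF to one or more LogicalBridges.
type BridgePort struct {
	Name       string
	VportID    uint32 // key
	MacAddress string
	Ptype      PortType
	VlanIDs    []uint32
}

// SVI is the routed interface of a LogicalBridge inside a VRF.
type SVI struct {
	VrfID      uint32 // key
	VlanID     uint32 // key
	MacAddress string
	GwIPs      []string
}

// VRF is an isolated routing instance, optionally extended as L3-EVPN.
type VRF struct {
	Name         string  // key
	VNI          *uint32 // optional
	RoutingTable uint32
	LoopbackIP   string
	VtepIP       *string // optional
}

func main() {
	vni := uint32(100)
	bridge := LogicalBridge{Name: "blue", VlanID: 100, VNI: &vni}
	fmt.Printf("%+v\n", bridge)
}
```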

The following objects are managed through the xPU EVPN Gateway API

@@ -41,3 +80,22 @@ The following objects are managed through the xPU EVPN Gateway API
A VRF can optionally be associated with an L3-EVPN instance to provide L3 connectivity to external nodes. In that case, the specified VNI value is used as the import/export route target (RT) in EVPN BGP, as well as in the VXLAN encapsulation of the tunneled L3-VPN traffic. The VRF loopback IP address is used as the basis for the EVPN route distinguisher (RD).

The EVPN GW advertises the VRF loopback IP and the subnet prefixes of the connected SVI interfaces as VPN routes to attract traffic. VPN routes imported from BGP are reachable from locally connected BridgePorts.
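
As a hedged worked example of this mapping, assume an `<ASN>:<VNI>` route-target format and a `<loopback>:<index>` route-distinguisher format (both formats and the ASN value are assumptions for illustration, not taken from the API): a VRF with VNI 100 and loopback 10.0.0.1 would then yield RT 65000:100 and RD 10.0.0.1:1.

```go
package main

import "fmt"

// deriveEVPNIdentifiers sketches how an L3-EVPN route target and route
// distinguisher could be derived from a VRF's VNI and loopback IP.
// The "<ASN>:<VNI>" and "<loopback>:<index>" formats are illustrative
// assumptions, not a definition taken from the EVPN Gateway API.
func deriveEVPNIdentifiers(asn, vni uint32, loopbackIP string, index uint16) (rt, rd string) {
	rt = fmt.Sprintf("%d:%d", asn, vni)          // VNI reused as the import/export RT value
	rd = fmt.Sprintf("%s:%d", loopbackIP, index) // loopback IP used as the RD basis
	return rt, rd
}

func main() {
	rt, rd := deriveEVPNIdentifiers(65000, 100, "10.0.0.1", 1)
	fmt.Println("route target:", rt)        // 65000:100
	fmt.Println("route distinguisher:", rd) // 10.0.0.1:1
}
```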

## EVPN GW offload - Target Architecture

The image below depicts the target architecture.

At the top is a single server host running a standard Kubernetes system. This consists of the Kubernetes control plane with the kubelet, and standard primary networking realized by the Calico CNI and kube-proxy. Both the Calico CNI and kube-proxy rely on Linux routing and iptables to provide primary networking locally on that host. For external connectivity, the host is connected to the xPU through a primary virtual function (VF).

On top of that, in order to accelerate secondary networking, a secondary CNI (the xPU CNI) plumbs VFs into Kubernetes Pods as secondary network interfaces, where they are consumed as standard SR-IOV VFs. These VFs can be configured as "Access" type, which gives VLAN access to a single network, or as "Trunk" type, which gives VLAN access to a range of networks (a sketch of both flavours follows below). Finally, those VFs are connected to the physical ports of the xPU through a programmable hardware pipeline. This is the so-called fastpath where the packets flow.
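
A minimal, self-contained sketch of the two port flavours, mirroring the BridgePort entity of the data model (names and values are illustrative assumptions):

```go
package main

import "fmt"

// Illustrative only: mirrors the BridgePort entity of the API data model.
type PortType string

type BridgePort struct {
	Name    string
	VportID uint32
	Ptype   PortType
	VlanIDs []uint32
}

func main() {
	// "Access" port: VLAN access to a single network.
	access := BridgePort{Name: "pod-a-net1", VportID: 1, Ptype: "ACCESS", VlanIDs: []uint32{100}}
	// "Trunk" port: VLAN access to a range of networks.
	trunk := BridgePort{Name: "pod-b-net1", VportID: 2, Ptype: "TRUNK", VlanIDs: []uint32{200, 201, 202}}
	fmt.Printf("%+v\n%+v\n", access, trunk)
}
```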

To populate the programmable hardware pipeline with rules, a control plane is needed. It runs on the ARM core complex of the xPU, shown on the left-hand side of the image below. The control plane consists of several open-source components: the Linux bridging and routing control plane, which also represents the slowpath implementation of the xPU; the EVPN Gateway control plane, which is based on FRR and is used for BGP peering; and the IPsec control plane, which is based on strongSwan and is used for IPsec encryption.

The configuration of the control plane is handled by a component called the xPU infrastructure manager, which also runs on the ARM core complex. The basic functionality of this component is to initially configure the control plane and then translate the resulting state of the Linux system into forwarding information on the xPU pipeline. A large part of the xPU infrastructure manager is vendor agnostic; only a small, vendor-specific part is used for programming the rules on the xPU pipeline.
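
One way to picture that split is sketched below; the interface and type names are purely illustrative assumptions, not the actual implementation of the xPU infrastructure manager.

```go
package main

import "fmt"

// ForwardingRule is an abstract representation of one piece of forwarding
// state derived from the Linux control plane (bridge FDB, routes, neighbours).
type ForwardingRule struct {
	Table string // e.g. "fdb", "route", "nexthop" -- illustrative only
	Key   string
	Value string
}

// PipelineProgrammer is the small vendor-specific part: each xPU vendor
// provides its own implementation that writes rules into its hardware pipeline.
type PipelineProgrammer interface {
	Program(rule ForwardingRule) error
}

// InfraManager is the (mostly) vendor-agnostic part: it watches the state of
// the Linux system and translates it into ForwardingRules for the pipeline.
type InfraManager struct {
	programmer PipelineProgrammer
}

func (m *InfraManager) Reconcile(linuxState []ForwardingRule) error {
	for _, rule := range linuxState {
		if err := m.programmer.Program(rule); err != nil {
			return fmt.Errorf("programming %v: %w", rule, err)
		}
	}
	return nil
}

// loggingProgrammer stands in for a real vendor backend in this sketch.
type loggingProgrammer struct{}

func (loggingProgrammer) Program(rule ForwardingRule) error {
	fmt.Println("program pipeline:", rule)
	return nil
}

func main() {
	m := &InfraManager{programmer: loggingProgrammer{}}
	_ = m.Reconcile([]ForwardingRule{
		{Table: "fdb", Key: "52:54:00:00:00:01 vlan 100", Value: "vport 1"},
	})
}
```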

As a final step, the provisioning of networking takes place using the EVPN Gateway gRPC API. This gRPC API is leveraged by a simple CLI wrapper as well as by the xPU CNI, which programs the networking of the VFs that are injected as secondary interfaces into the Kubernetes Pods.
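
A rough sketch of that provisioning flow is shown below. The endpoint address, the helper names, and the placeholder client interface are assumptions; the real gRPC services and messages are defined by the EVPN Gateway API's protobuf files and are not reproduced here.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// bridgePortCreator is a placeholder for the generated EVPN Gateway gRPC
// client stub; the real service and message names come from the API's
// protobuf definitions.
type bridgePortCreator interface {
	CreateBridgePort(ctx context.Context, name string, vportID uint32, vlans []uint32) error
}

// attachSecondaryInterface sketches what the CLI wrapper or the xPU CNI
// would do for a Pod VF: create a BridgePort for the VF's VLANs via the API.
func attachSecondaryInterface(client bridgePortCreator, podVF string, vportID uint32, vlans []uint32) error {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	return client.CreateBridgePort(ctx, podVF, vportID, vlans)
}

func main() {
	// The endpoint address is an assumption for this sketch.
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("connecting to EVPN GW API: %v", err)
	}
	defer conn.Close()
	fmt.Println("connected:", conn.Target())
	// A real caller would now wrap conn in the generated client stub and
	// call attachSecondaryInterface(...) for each secondary interface.
}
```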

To summarize, the architecture described above allows the deployment of single-server solutions where the whole EVPN Gateway functionality is offloaded to a fully programmable xPU NIC.

![evpn gw offload - target architecture](./images/evpn-gw-offload-target-arch.png)
Binary file removed network/evpn-gw/images/data-model-evpn-gw-api.png
Binary file not shown.
