diff --git a/docs/docs-content/clusters/edge/edge.md b/docs/docs-content/clusters/edge/edge.md
index 8e09c89a44..f1d400a2d6 100644
--- a/docs/docs-content/clusters/edge/edge.md
+++ b/docs/docs-content/clusters/edge/edge.md
@@ -87,3 +87,6 @@ To start with Edge, review the [architecture](architecture.md) and the [lifecycl
 - [Site Deployment](site-deployment/site-deployment.md)
+
+- [Networking](networking/networking.md)
+
diff --git a/docs/docs-content/clusters/edge/networking/_category_.json b/docs/docs-content/clusters/edge/networking/_category_.json
new file mode 100644
index 0000000000..90b85e2d02
--- /dev/null
+++ b/docs/docs-content/clusters/edge/networking/_category_.json
@@ -0,0 +1,4 @@
+{
+  "position": 60
+}
+
\ No newline at end of file
diff --git a/docs/docs-content/clusters/edge/networking/kubevip.md b/docs/docs-content/clusters/edge/networking/kubevip.md
new file mode 100644
index 0000000000..34dcd0c078
--- /dev/null
+++ b/docs/docs-content/clusters/edge/networking/kubevip.md
@@ -0,0 +1,113 @@
+---
+sidebar_label: "Publish Cluster Services with Kube-vip"
+title: "Publish Cluster Services with Kube-vip"
+description: "Guide to publishing cluster services with kube-vip."
+hide_table_of_contents: false
+sidebar_position: 30
+tags: ["edge"]
+---
+
+You can use kube-vip to provide a virtual IP address for your cluster and use it to expose a service of type `LoadBalancer` on the external network. You can have kube-vip request IP addresses dynamically or use a static IP address.
+
+Kube-vip supports DHCP environments and can automatically request additional IP addresses from the DHCP server. Using kube-vip, you can expose services inside your cluster externally with a virtual IP address even if you do not have control over your host's network. Kube-vip can also act as a load balancer for both your control plane and Kubernetes services of type `LoadBalancer`.
+
+## Limitations
+
+Kube-vip has many environment variables you can use to customize its behavior. You can specify values for these environment variables with the `cluster.kubevipArgs` parameter. For a complete list of kube-vip environment variables, refer to the [kube-vip documentation](https://kube-vip.io/docs/installation/flags/?query=vip_interface#environment-variables).
+However, Palette has configured values for the following parameters, and they cannot be changed:
+
+| Environment Variable | Description | Example Value |
+|----------------------|-------------|---------------|
+| `vip_arp` | Enables ARP broadcasts from the leader. | `"true"` |
+| `port` | Specifies the port number that kube-vip will use. | `"6443"` |
+| `vip_cidr` | Sets the CIDR notation for the virtual IP. A value of `32` denotes a single IP address in IPv4. | `"32"` |
+| `cp_enable` | Enables kube-vip control plane functionality. | `"true"` |
+| `cp_namespace` | The namespace where the lease will reside. | `"kube-system"` |
+| `vip_ddns` | Enables Dynamic DNS support. | `"{{ .DDNS}}"` |
+| `vip_leaderelection` | Enables Kubernetes LeaderElection. | `"true"` |
+| `vip_leaseduration` | Sets the lease duration in seconds. | `"30"` |
+| `vip_renewdeadline` | Specifies the deadline in seconds for renewing the lease. | `"20"` |
+| `vip_retryperiod` | Sets the retry period in seconds for leader election. | `"4"` |
+| `address` | Template placeholder for the virtual IP address. | `"{{ .VIP}}"` |
+
+## Prerequisites
+
+- At least one Edge device with AMD64 (x86_64) processor architecture registered with your Palette account.
+
+## Enablement
+
+1. Log in to [Palette](https://console.spectrocloud.com/).
+
+2. From the left **Main Menu**, click **Clusters** and select **Add a New Cluster**.
+
+3. Choose **Edge Native** for the cluster type and click **Start Edge Native Configuration**.
+
+4. Give the cluster a name, description, and tags. Click on **Next**.
+
+5. Select the cluster profile that you plan to use to deploy your cluster. Click **Next**.
+
+6. In the **Parameters** step, click on the Kubernetes layer of your profile. In the YAML file for the Kubernetes layer of your cluster profile, add the following parameters.
+
+   ```yaml
+   cluster:
+     kubevipArgs:
+       vip_interface: "INTERFACE_NAME"
+       svc_enable: true
+       vip_servicesinterface: "INTERFACE_NAME"
+   ```
+
+   These are kube-vip environment variables that enable kube-vip to provide load balancing for Kubernetes services and specify which network interfaces kube-vip uses to handle traffic to the Kubernetes API server and to Kubernetes services. The following table provides guidance on how to choose the values for each parameter. A worked example with a concrete interface name follows these steps.
+
+   | **Parameter** | **Description** |
+   |-----------|-------------|
+   | `vip_interface` | Specifies the Network Interface Controller (NIC) that kube-vip will use for handling traffic to the Kubernetes API. If you do not specify `vip_servicesinterface`, kube-vip will also use this interface for handling traffic to LoadBalancer-type services. |
+   | `svc_enable` | Enables kube-vip to handle traffic for services of type LoadBalancer. |
+   | `vip_servicesinterface` | Specifies the NIC that kube-vip will use for handling traffic to LoadBalancer-type services. If your cluster has the network overlay enabled, or if your host has multiple NICs and you want to publish services on a different NIC than the one used by Kubernetes, specify the name of that NIC as the value of this parameter. If this parameter is not specified and you have set `svc_enable` to `true`, kube-vip will use the NIC you specified in `vip_interface` to handle traffic to LoadBalancer-type services. |
+
+7. Next, in the layer of your cluster profile that contains the service you want to expose, add the two parameters `loadBalancerIP: IP_ADDRESS` and `loadBalancerClass: kube-vip.io/kube-vip-class` to the service spec.
+
+   If you are deploying in a DHCP environment, use `0.0.0.0` as the value for the `loadBalancerIP` parameter. If you want kube-vip to use a static IP, specify the IP address and make sure it is not used by other hosts in the network. The following example manifest displays the usage of these two parameters.
+
+   ```yaml {7-8}
+   apiVersion: v1
+   kind: Service
+   metadata:
+     name: http-app-svc
+     namespace: myapp
+   spec:
+     loadBalancerIP: 0.0.0.0
+     loadBalancerClass: kube-vip.io/kube-vip-class
+     ports:
+     - port: 80
+       protocol: TCP
+       targetPort: http
+     selector:
+       app.kubernetes.io/name: http-app
+     type: LoadBalancer
+   ```
+
+8. Click **Next** and finish the rest of the configurations. For more information, refer to [Create Cluster Definition](../site-deployment/site-installation/cluster-deployment.md).
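+
+The following is a minimal sketch of the Kubernetes layer values from step 6, assuming a hypothetical Edge host whose NIC is named `eno1`. Interface names vary between devices, so confirm the actual name on your host, for example by issuing `ip link` on the device console, before you apply the configuration.
+
+```yaml
+cluster:
+  kubevipArgs:
+    # NIC that kube-vip uses for Kubernetes API traffic.
+    vip_interface: "eno1"
+    # Allow kube-vip to handle LoadBalancer-type services.
+    svc_enable: true
+    # NIC that kube-vip uses for LoadBalancer service traffic.
+    # Here it is the same NIC that serves the API traffic.
+    vip_servicesinterface: "eno1"
+```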
+
+When the cluster finishes deploying, kube-vip adds an annotation named `kube-vip.io/requestedIP` to the Service resource to record which IP address it received from the external network. Whenever kube-vip restarts, it attempts to re-request the same IP address for that service. You can remove the annotation to make kube-vip request a fresh address with the following command. Replace `SERVICE_NAME` with the name of your service, and make sure to include the minus symbol `-` at the end of the annotation.
+
+```shell
+kubectl annotate service SERVICE_NAME kube-vip.io/requestedIP-
+```
+
+## Validation
+
+Use the following steps to validate that kube-vip has been set up correctly and is performing load balancing services for your cluster.
+
+1. Access the cluster with the kubectl CLI. For more information, refer to [Access Cluster with CLI](../../cluster-management/palette-webctl.md).
+2. Issue the command `kubectl get service SERVICE_NAME` and replace `SERVICE_NAME` with the name of the service you configured with kube-vip. The output of the command displays the external IP address that kube-vip received from the external network or the IP address you specified in the `loadBalancerIP` parameter.
+
+   ```shell
+   kubectl get service http-app-svc
+   ```
+
+   ```hideClipboard
+   NAME           TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
+   http-app-svc   LoadBalancer   10.100.200.10   10.10.1.100   80:30720/TCP   5m
+   ```
+
+   In the above example, the external IP `10.10.1.100` is the IP address that kube-vip received from the DHCP server or the IP address you specified in the `loadBalancerIP` parameter.
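+
+To further confirm that traffic reaches your workload, you can send a request to the published address from another machine on the same network. The following is a quick check using the example values above; substitute the external IP and port of your own service.
+
+```shell
+curl http://10.10.1.100:80
+```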
diff --git a/docs/docs-content/clusters/edge/networking/networking.md b/docs/docs-content/clusters/edge/networking/networking.md
new file mode 100644
index 0000000000..3892f0b95f
--- /dev/null
+++ b/docs/docs-content/clusters/edge/networking/networking.md
@@ -0,0 +1,17 @@
+---
+sidebar_label: "Networking"
+title: "Networking"
+description: "Learn about solutions Palette offers for various network environments during Edge deployment."
+hide_table_of_contents: false
+sidebar_position: 50
+tags: ["edge"]
+---
+
+Edge environments are inherently diverse, especially when it comes to networking conditions. Often, the networks your Edge devices are deployed in are outside of your control, and you do not have on-site technical staff to troubleshoot when things go wrong. Therefore, it is important to ensure that your Edge deployments can accommodate various networking conditions and are resilient to accidents and outages.
+
+This section describes the solutions Palette Edge offers for navigating different network environments at the edge so that you can keep your Edge clusters and their services operational and available.
+
+## Resources
+
+- [Publish Cluster Services with Kube-vip](kubevip.md)
+- [Enable Overlay Network](vxlan-overlay.md)
\ No newline at end of file
diff --git a/docs/docs-content/clusters/edge/networking/vxlan-overlay.md b/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
new file mode 100644
index 0000000000..df136571da
--- /dev/null
+++ b/docs/docs-content/clusters/edge/networking/vxlan-overlay.md
@@ -0,0 +1,141 @@
+---
+sidebar_label: "Enable Overlay Network"
+title: "Enable Overlay Network"
+description: "Learn how to enable a virtual overlay network you can control on top of an often unpredictable physical network."
+hide_table_of_contents: false
+sidebar_position: 30
+tags: ["edge"]
+---
+
+Edge clusters are often deployed in locations where the network environment is not managed by the teams that maintain the Edge deployments. However, several Kubernetes control plane components require stable IP addresses. In the case of an extended network outage, your cluster components could lose their original IP addresses when the cluster expects them to remain stable, causing the cluster to experience degraded performance or become non-operational.
+
+Palette allows you to create a virtual overlay network on top of the physical network, with the virtual IP addresses of all cluster components managed by Palette. Inside the cluster, the different components use the virtual IP addresses to communicate with each other instead of the underlying IP addresses that could change due to external factors. If a cluster with the overlay network enabled experiences an outage, components inside the cluster retain their virtual IP addresses in the overlay network, even if their IP addresses in the underlying physical network have changed, protecting the cluster from the outage.
+
+![VxLAN Overlay Architecture](/clusters_edge_site-installation_vxlan-overlay_architecture.png)
+
+## When Should You Consider Enabling Overlay Network?
+
+If your Edge clusters are deployed in network environments that fit the following descriptions, you should consider enabling an overlay network for your cluster:
+
+- Network environments with dynamic IP address management, such as a DHCP network.
+- Unstable network environments or environments that are out of your control. For example, you are deploying an Edge host in a restaurant located in a commercial building, where the network is managed by the building and cannot be easily altered by your staff.
+- Environments where you expect your Edge hosts to move from one physical location to another.
+
+### Example Scenario
+
+The Analytics team of a manufacturing company is deploying an Edge host to their assembly line to collect metrics from the manufacturing process. The building in which the Edge host is deployed has a network that is managed by a DHCP server. The region experiences a bad weather event that causes a sustained outage.
+
+| Without Overlay Network | With Overlay Network |
+|---------------------|-----------------------|
+| Upon recovery, each Kubernetes component inside the Edge host requests an IP address from the DHCP server and receives a different IP address than it had before the outage. Since Kubernetes expects several components in the control plane to have stable IP addresses, the cluster becomes non-operational, and the assembly line is unable to resume operations. | Each Kubernetes component inside the Edge host has a virtual IP address in the overlay network. Upon recovery, the components' IP addresses in the overlay network remain the same despite their IP addresses changing in the underlying DHCP network. The Edge host is able to resume its workload, and the assembly line resumes operations. |
+
+## Prerequisites
+
+* At least one Edge host with AMD64 (x86_64) processor architecture registered with your Palette account.
+* All Edge hosts must be on the same network. You may provision your own virtual network to connect Edge hosts that are on different physical networks, but all Edge hosts to be included in the cluster must be on the same network before cluster creation.
+
+## Enable Overlay Network
+
+You can enable an overlay network for your cluster during cluster creation.
+
+:::caution
+You will not be able to change the network overlay configuration after the cluster has been created.
+:::
+
+1. Log in to [Palette](https://console.spectrocloud.com).
+
+2. Navigate to the left **Main Menu** and select **Clusters**.
+
+3. Click on **Add New Cluster**.
+
+4. Choose **Edge Native** for the cluster type and click **Start Edge Native Configuration**.
+
+5. Give the cluster a name, description, and tags. Click on **Next**.
+
+6. Select a cluster profile. If you don't have a cluster profile for Edge Native, refer to the [Create Edge Native Cluster Profile](../site-deployment/model-profile.md) guide. Click on **Next** after you have selected a cluster profile.
+
+7. In the network layer of your cluster profile, specify `scbr-100` as the name of the Network Interface Controller (NIC) for your Container Network Interface (CNI) pack to use. This is the name of the interface Palette creates on your Edge hosts to establish the overlay network.
+
+   The following are the sections of the packs you need to change, depending on which CNI pack you are using:
+
+   In the Calico pack YAML file default template, uncomment `manifests.calico.env.calicoNode.IP_AUTODETECTION_METHOD` and set its value to `interface=scbr-100`.
+
+   ```yaml {11}
+   manifests:
+     calico:
+       ...
+       env:
+         # Additional env variables for calico-node
+         calicoNode:
+           #IPV6: "autodetect"
+           #FELIX_IPV6SUPPORT: "true"
+           #CALICO_IPV6POOL_NAT_OUTGOING: "true"
+           #CALICO_IPV4POOL_CIDR: "192.168.0.0/16"
+           IP_AUTODETECTION_METHOD: "interface=scbr-100"
+   ```
+
+   In the Flannel pack YAML file, add the line `- "--iface=scbr-100"` in the default template under `charts.flannel.args`.
+
+   ```yaml {8}
+   charts:
+     flannel:
+       ...
+       # flannel command arguments
+       args:
+         - "--ip-masq"
+         - "--kube-subnet-mgr"
+         - "--iface=scbr-100"
+   ```
+
+   You do not need to make any adjustments to the Cilium pack.
+
+   If you are using another CNI, refer to the documentation of your selected CNI and configure it to make sure that it uses the NIC named `scbr-100` on your Edge host.
+
+8. Review the rest of your cluster profile values and make changes as needed. Click on **Next**.
+
+9. In the **Cluster Config** stage, toggle on **Enable Overlay Network**. This will prompt you to provide additional configuration for your virtual overlay network.
+
+10. In the **Overlay CIDR Range** field, provide a private IP range for your cluster to use. Ensure that this range is not used by other devices in the same network environment. When you toggle on **Enable Overlay Network**, Palette provides a default, commonly unused range. We suggest you keep the default range unless you have a specific IP range you want to use.
+
+    :::caution
+    The overlay CIDR range cannot be changed after cluster creation.
+    :::
+
+    After you have provided the overlay CIDR, the **VIP** field at the top of the page will be grayed out, and the first IP address in the overlay CIDR range will be used as the overlay Virtual IP (VIP). This VIP is internal to the cluster.
+
+11. Finish the rest of the cluster configuration and click **Finish Configuration** to deploy the cluster. For more information, refer to [Create Cluster Definition](../site-deployment/site-installation/cluster-deployment.md).
+
+## Validate
+
+1. Log in to [Palette](https://console.spectrocloud.com).
+
+2. Navigate to the left **Main Menu** and select **Clusters**.
+
+3. Select the host cluster you created to view its details page.
+
+4. Select the **Nodes** tab. In the **Overlay IP Address** column, each host has an overlay IP address within the CIDR range you provided during cluster configuration.
+
+:::tip
+To view the external IP addresses of the Edge hosts, from the **Main Menu**, go to **Clusters** and click the **Edge Hosts** tab. The IP address displayed in the table is the external IP address.
+:::
+
+## Access Cluster with Overlay Network Enabled
+
+You can access a cluster with the overlay network enabled in the following ways:
+
+- Access the cluster with the kubectl CLI. For more information, refer to [Access Cluster with CLI](../../cluster-management/palette-webctl.md).
+- Access LoadBalancer services. You can provision LoadBalancer services in your Kubernetes cluster and expose them to external traffic. For an example, refer to [Publish Cluster Services with Kube-vip](kubevip.md).
+- Access a node by IP address. You can use the node's external IP address to access the node directly. The overlay IP addresses are internal to the cluster itself and cannot be accessed from outside the cluster.
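+
+If you have terminal access to an Edge host, you can also inspect the overlay interface directly from the host. The following is a quick check that assumes the default `scbr-100` interface name that Palette creates; the address reported for the interface should fall within the overlay CIDR range you configured.
+
+```shell
+ip addr show scbr-100
+```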
diff --git a/static/assets/docs/images/clusters_edge_site-installation_vxlan-overlay_architecture.png b/static/assets/docs/images/clusters_edge_site-installation_vxlan-overlay_architecture.png
new file mode 100644
index 0000000000..9ecdb3d560
Binary files /dev/null and b/static/assets/docs/images/clusters_edge_site-installation_vxlan-overlay_architecture.png differ