Kubernetes Services with Exoscale Network Load Balancers

This guide explains how to use the Exoscale Cloud Controller Manager (CCM) to create Kubernetes Services of type LoadBalancer.

Note: it is assumed that you have a functional Kubernetes cluster running the Exoscale CCM.

When you create a Kubernetes Service of type LoadBalancer, the Exoscale CCM provisions an Exoscale Network Load Balancer (NLB) instance, on which it creates one NLB service for every ServicePort entry declared in the Kubernetes Service manifest.

Prerequisites

The Exoscale CCM service controller only supports managing load balancing to Kubernetes Pods running on Nodes managed by Exoscale Instance Pools. We strongly recommend that you build a custom Compute instance template that is usable by an Instance Pool, for example to automatically have the new members join your Kubernetes cluster as Nodes.

Configuration

When the Exoscale Cloud Controller Manager is deployed and configured in a Kubernetes cluster, creating a Service of type LoadBalancer automatically creates an Exoscale Network Load Balancer (NLB) instance configured with a service listening on every port defined in the Kubernetes Service ports spec.

The following manifest illustrates the minimal configuration for exposing a Kubernetes Service via an Exoscale NLB:

kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginxdemos/hello:latest
        ports:
        - containerPort: 80

The Exoscale CCM will create an Exoscale NLB instance containing a service that forwards network traffic received on port 80 to port 80 of the Pods matching the app: nginx selector.

Annotations

In addition to the standard Kubernetes Service object specifications, the behavior of the Exoscale CCM service controller is configurable by adding annotations to the Kubernetes Service object's annotations map. The following annotations are supported:

service.beta.kubernetes.io/exoscale-loadbalancer-id

The ID of the Exoscale NLB corresponding to the Kubernetes Service. This annotation is set automatically by the Exoscale CCM after having created the NLB instance if one was not specified (see section Using an externally managed NLB instance with the Exoscale CCM).

service.beta.kubernetes.io/exoscale-loadbalancer-name

The name of the Exoscale NLB. Defaults to <Kubernetes Service UID>.

service.beta.kubernetes.io/exoscale-loadbalancer-description

The description of the Exoscale NLB.
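For example, the name and description annotations can be combined to give the managed NLB instance a human-readable identity (the values below are illustrative):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/exoscale-loadbalancer-name: "nginx-nlb"
    service.beta.kubernetes.io/exoscale-loadbalancer-description: "NLB fronting the nginx Service"
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - port: 80
```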

service.beta.kubernetes.io/exoscale-loadbalancer-external

If set to true, the Exoscale CCM will consider the NLB as externally managed and will not attempt to create/update/delete the NLB instance whose ID is specified in the K8s Service annotations.

service.beta.kubernetes.io/exoscale-loadbalancer-service-name

The name of Exoscale NLB service corresponding to the Kubernetes Service port. Defaults to <Kubernetes Service UID>-<Service port>.

Note: this annotation is only honored if a single port is defined in the Kubernetes Service, and is set to the default value otherwise.

service.beta.kubernetes.io/exoscale-loadbalancer-service-description

The description of the Exoscale NLB service corresponding to the Kubernetes Service.

Note: this annotation is only honored if a single port is defined in the Kubernetes Service.

service.beta.kubernetes.io/exoscale-loadbalancer-service-instancepool-id

The ID of the Exoscale Instance Pool to forward ingress traffic to. Defaults to the Instance Pool ID of the cluster Nodes; this information must be specified if your Service targets Pods that are subject to custom Node scheduling.

Note: the Instance Pool cannot be changed after the NLB service is created – the K8s Service has to be deleted and re-created with the updated annotation.
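For example, a Service whose Pods are scheduled onto a dedicated Instance Pool could target that pool explicitly (the Instance Pool ID below is a hypothetical placeholder):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/exoscale-loadbalancer-service-instancepool-id: "e8b8e8f0-0000-0000-0000-000000000000"
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - port: 80
```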

service.beta.kubernetes.io/exoscale-loadbalancer-service-strategy

The Exoscale NLB Service strategy to use.

Supported values: round-robin (default), source-hash.

Note: because Exoscale Network Load Balancers dispatch network traffic across the Compute instances of the specified Instance Pool (i.e. Kubernetes Nodes), and kube-proxy additionally performs Node-local load balancing across Pods belonging to the same Deployment, running multiple Pod replicas spread over several Nodes can result in less evenly distributed load across the individual containers. Similarly, the source-hash strategy is not guaranteed to always forward traffic from a given client source IP address/port/protocol tuple to the same container.
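For instance, to favor client-to-backend affinity, the strategy can be set via the annotation (a sketch; the selector and port mirror the minimal example above):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/exoscale-loadbalancer-service-strategy: "source-hash"
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - port: 80
```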

service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-mode

The Exoscale NLB service health checking mode.

Supported values: tcp (default), http.

service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-uri

The Exoscale NLB service health check HTTP request URI (in http mode only).

service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-interval

The Exoscale NLB service health checking interval in seconds. Defaults to 10s.

service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-timeout

The Exoscale NLB service health checking timeout in seconds. Defaults to 5s.

service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-retries

The Exoscale NLB service health checking retries before considering a target down. Defaults to 1.
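Putting the health check annotations together, a Service probing its backends over HTTP might look like the following sketch (the /healthz URI is an assumption about the application; adjust it to an endpoint your Pods actually serve):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-mode: "http"
    service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-uri: "/healthz"
    service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-interval: "10"
    service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-timeout: "5"
    service.beta.kubernetes.io/exoscale-loadbalancer-service-healthcheck-retries: "2"
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - port: 80
```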

Using a Kubernetes Ingress Controller behind an Exoscale NLB

If you wish to expose a Kubernetes Ingress Controller (such as the popular ingress-nginx), or any other Service behind an Exoscale NLB, please note that by default the traffic is forwarded to all [healthy] Nodes in the destination Instance Pool, whether they actually host Pods targeted by the Service or not – which may result in additional hops inside the Kubernetes cluster, as well as losing the source IP address (source NAT).

According to the Kubernetes documentation, it is possible to set the value of the Service spec.externalTrafficPolicy to Local, which preserves the client source IP and avoids a second hop, but risks potentially imbalanced traffic spreading. In this configuration, the Exoscale CCM will configure managed NLB services to use the Service spec.healthCheckNodePort value for the NLB service healthcheck port, which will result in having the ingress traffic forwarded only to Nodes running the target Pods. With spec.externalTrafficPolicy=Cluster (the default), the CCM uses spec.ports[].nodePort.
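As a sketch, a Service fronting an ingress controller and preserving client source IPs could be declared as follows (the ingress-nginx name and app label are illustrative, not prescribed by the CCM):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
spec:
  selector:
    app: ingress-nginx
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
```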

Using an externally managed NLB instance with the Exoscale CCM

If you prefer to manage the NLB instance yourself using different tools (e.g. Terraform), you can specify the ID of the NLB instance to use in the K8s Service annotations as well as an annotation instructing the Exoscale CCM not to create/update/delete the specified NLB instance:

kind: Service
apiVersion: v1
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/exoscale-loadbalancer-id: "09191de9-513b-4270-a44c-5aad8354bb47"
    service.beta.kubernetes.io/exoscale-loadbalancer-external: "true"
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
  - port: 80

Notes:

  • The NLB instance referenced in the annotations must exist before the K8s Service is created.
  • When deploying a K8s Service to an external NLB, be careful not to use a Service port already used by another Service attached to the same external NLB, as it will overwrite the existing NLB Service with the new K8s Service port.

⚠️ Important Notes

  • Currently, the Exoscale CCM doesn't support UDP service load balancing due to a technical limitation in Kubernetes.
  • As the NodePorts created by K8s Services are picked randomly within a defined range (30000-32767 by default), don't forget to configure the Security Groups used by your Compute Instance Pools to accept ingress traffic in this range; otherwise the Exoscale Network Load Balancers won't be able to forward traffic to your Pods.