In this example, we will set up a basic HTTP Envoy load balancer that receives its configuration from Yggdrasil via gRPC. To do this, we will configure two Docker containers: one running an Envoy node and the other running Yggdrasil. This example assumes that you have a working Kubernetes cluster, so that Yggdrasil can communicate with the Kubernetes API.
Note:
This specific example runs on GCP, but the steps are cloud-agnostic; there is no reason why this wouldn't also work with a local Docker daemon and Kubernetes cluster (e.g., minikube).
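The two-container setup described above could be sketched with Docker Compose. This is only an illustrative sketch: the image tags, mounted file paths, and port numbers below are assumptions, not values prescribed by Yggdrasil, and the Envoy/Yggdrasil configuration files referenced here are written in later steps.

```yaml
# Hypothetical docker-compose sketch of the two containers.
# Images, ports, and paths are illustrative assumptions.
version: "3"
services:
  yggdrasil:
    image: quay.io/uswitch/yggdrasil:latest     # assumed image location
    volumes:
      - ./kubeconfig:/etc/kubeconfig            # credentials so Yggdrasil can reach the Kubernetes API
    ports:
      - "8080:8080"                             # illustrative gRPC xDS port Envoy will connect to
  envoy:
    image: envoyproxy/envoy:v1.18-latest
    volumes:
      - ./envoy-config.yaml:/etc/envoy/envoy.yaml   # bootstrap config pointing at Yggdrasil
    ports:
      - "80:10000"                              # illustrative HTTP listener port
```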
For this example to work, we need a service running in Kubernetes with a valid corresponding ingress resource. In this example, we will use an nginx ingress controller.
Note:
If deploying an ingress controller using Helm on GCP, you will likely need to set the `--set controller.publishService.enabled=true` flag, so that the created ingress uses the ingress controller's IP address/hostname. The ingress IP address should match the ingress controller's, as this is the IP address that Yggdrasil will use to generate config for Envoy.
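As a concrete example, installing the controller from the community ingress-nginx chart with the flag above might look like the following. The release name is an arbitrary choice for this example.

```shell
# Add the community ingress-nginx chart repo and install the controller,
# publishing the controller's service IP onto the ingresses it manages.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.publishService.enabled=true
```

You can confirm the effect afterwards with `kubectl get ingress`: the ADDRESS column of each ingress should show the controller's IP address/hostname.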
Assuming we have a simple HTTP web service called 'hello-world', we can apply the following 'hello-world' ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"