This chapter explains how to deploy both the Nginx and the AWS ALB Ingress controllers to a Kubernetes cluster. These controllers allow you to set rules that control how external traffic is routed to the services in your cluster. The role of an Ingress controller is to watch the Kubernetes API server for Ingress events and to act on them. For example, creating an Ingress resource is an event; the Ingress controller acts on it by provisioning whatever is needed to satisfy the Ingress, which in this chapter means an Nginx configuration or an AWS ALB.
Kubernetes Ingress has two parts:
- Ingress resource: the Ingress definition, defined in a YAML file (a minimal example is shown after this list)
- Ingress controller: deployed in the cluster (typically as a Deployment or DaemonSet), the controller implements the Ingress rules in your cluster
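To make the first part concrete, here is a minimal sketch of an Ingress resource; the name, host, service name, and port are placeholders, and the apiVersion depends on your Kubernetes version:

```yaml
apiVersion: extensions/v1beta1   # newer clusters use networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress           # placeholder name
spec:
  rules:
  - host: app.example.com         # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: webapp-service   # placeholder backend service
          servicePort: 80
```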
Kubernetes does not ship with a default Ingress controller; one may be provided by your platform, or you must install it yourself. Since an Ingress controller is not provided on AWS by default, we will install one.
This chapter uses a cluster with 3 master nodes and 5 worker nodes, as described here: multi-master, multi-node gossip-based cluster.
Perform the following steps to install an Nginx Ingress controller for your Kubernetes cluster. The controller will be exposed using an AWS Elastic Load Balancer (ELB).
First, apply the initial setup commands:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml | kubectl apply -f -
RBAC, or Role-Based Access Control, is a method for controlling access to resources based on the roles and permissions of individual users. RBAC must be enabled on the API server with the flag --authorization-mode=RBAC. A Role can be used to grant access within a single namespace. Alternatively, a ClusterRole can grant access permissions in a particular namespace or across all namespaces. A RoleBinding binds the permissions granted by a Role to a single user or set of users, while a ClusterRoleBinding grants those permissions cluster-wide, across all namespaces.
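To illustrate how a Role and RoleBinding fit together, here is a minimal sketch; the names, namespace, rules, and service account are illustrative assumptions, not the contents of the upstream rbac.yaml:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role        # assumed name
  namespace: ingress-nginx        # assumed namespace
rules:
- apiGroups: [""]
  resources: ["configmaps", "endpoints", "secrets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: nginx-ingress             # assumed service account
  namespace: ingress-nginx
```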
To configure the Nginx Ingress controller without RBAC roles, use the following command:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml | kubectl apply -f -
To configure with RBAC:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml | kubectl apply -f -
The AWS ELB allows you to use either the L4 or the L7 network protocol for ingress behind a Service of Type=LoadBalancer. Layer 4 uses TCP as the listener protocol for ports 80 and 443. Layer 7 uses HTTP for port 80 and terminates TLS at the ELB.
To configure Layer 4:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml
To configure Layer 7:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml > service-l7.yaml
Edit the certificate line of the file service-l7.yaml to replace the dummy id with a valid ARN of the form "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX".
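The line in question is the ssl-cert annotation on the Service. A sketch of the relevant fragment is shown below; the annotation names follow the common AWS load balancer Service annotations, and the ARN value is a placeholder:

```yaml
metadata:
  annotations:
    # Replace the placeholder with your ACM certificate ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
```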
kubectl apply -f service-l7.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l7.yaml
If RBAC is enabled, apply the following:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/patch-service-with-rbac.yaml
If not, apply this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/patch-service-without-rbac.yaml
CoreOS has developed an Ingress controller that uses AWS Application Load Balancer (ALB) to route traffic to Kubernetes services. We’ll deploy this Ingress controller and use it to route traffic to our Pods.
You should already have an IAM role assigned to your Kubernetes worker nodes. The role ARN will be similar to arn:aws:iam::<account-id>:role/nodes.cluster.k8s.local. Check the role in the IAM Console and make sure it contains the following for the nodes.cluster.k8s.local inline policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:*"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:*"
],
"Resource": "*"
},
...
Before starting the exercise, there are some small edits to make:

- Edit templates/alb-ingress-resource.yaml and change the list of subnets in the alb.ingress.kubernetes.io/subnets annotation to match your own (a sketch of the relevant annotations follows this list).
- Edit templates/alb-ingress-controller.yaml and change the AWS_REGION and CLUSTER_NAME to match your own. There is no need to enter an access key.
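For reference, a minimal sketch of what an ALB Ingress resource with these annotations might look like is shown here; the subnet IDs, backend service, and name are placeholder assumptions, so check the template in the repository for the exact contents:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webapp-alb-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    # Replace with your own subnet IDs (at least two, in different AZs)
    alb.ingress.kubernetes.io/subnets: subnet-aaaaaaaa,subnet-bbbbbbbb
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: webapp-service   # placeholder backend service
          servicePort: 80
```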
As mentioned earlier, deploying an Ingress resource has no effect unless there is an Ingress controller that implements the resource. This involves two steps:

- deploying the default-http-backend resource that all Ingress controllers depend upon
- deploying the Ingress controller itself
First, deploy the default-http-backend resource:
$ kubectl create -f https://raw.githubusercontent.com/coreos/alb-ingress-controller/master/examples/default-backend.yaml
Then deploy the Ingress controller:
$ kubectl create -f templates/alb-ingress-controller.yaml
We'll deploy a sample application that we'll expose via an Ingress. We will use the same greeter application as in the microservices section, with one small change: we'll expose the webapp service using a NodePort instead of a LoadBalancer. The difference is that NodePort maps the container port to a port on each node hosting the container, and the same port is used on every node. LoadBalancer, on the other hand, creates an AWS ELB that balances traffic across the pods running on the worker nodes. In this example, we'll use an Ingress to create an ALB to balance traffic across those pods; a sketch of the NodePort change follows.
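As an illustration of the change, a NodePort Service definition looks roughly like this; the selector and ports are assumptions based on the output shown later, not the exact contents of templates/app.yml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort            # was LoadBalancer in the original microservices example
  selector:
    app: webapp             # placeholder label; must match the webapp pods
  ports:
  - port: 80                # service port
    targetPort: 8080        # container port (placeholder)
    # nodePort is assigned automatically (e.g. 32202) unless set explicitly
```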
- Deploy the application:

  $ kubectl create -f templates/app.yml

- Get the list of services:

  $ kubectl get svc
  NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
  greeter-service   ClusterIP   100.71.100.49    <none>        8080/TCP       57m
  kubernetes        ClusterIP   100.64.0.1       <none>        443/TCP        27d
  name-service      ClusterIP   100.71.205.66    <none>        8080/TCP       57m
  webapp-service    NodePort    100.70.135.114   <none>        80:32202/TCP   57m
Deploy the Ingress resource. This will create an AWS ALB and route traffic to the pods in the service using ALB target groups:
$ kubectl create -f templates/alb-ingress-resource.yaml
It will take a couple of minutes to create the ALB associated with your Ingress. Check the status as follows:
$ kubectl describe ing webapp-alb-ingress
Name: webapp-alb-ingress
Namespace: default
Address: clusterk8sl-default-webapp-9895-1236164836.us-east-1.elb.amazonaws.com
Default backend: default-http-backend:80 (100.96.7.26:8080)
Rules:
Host Path Backends
---- ---- --------
*
/ webapp-service:80 (<none>)
Annotations:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 32m ingress-controller clusterk8sl-default-webapp-9895 created
Normal CREATE 32m ingress-controller clusterk8sl-32202-HTTP-5a4bb0d target group created
Normal CREATE 32m ingress-controller 80 listener created
Normal CREATE 32m ingress-controller 1 rule created
Normal CREATE 3m (x3 over 32m) ingress-controller Ingress default/webapp-alb-ingress
Normal UPDATE 3m (x6 over 32m) ingress-controller Ingress default/webapp-alb-ingress
This shows your Ingress is listening on port 80. Now check the status of your service:
$ kubectl get svc webapp-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
webapp-service NodePort 100.70.135.114 <none> 80:32202/TCP 1h
This shows your service is listening on port 32202 (your port may differ). We expect an ALB to be created with a listener on port 80, and a target group routing traffic to port 32202 on each of the nodes. Port 32202 is the NodePort that maps to the container port.
Use the EC2 Console to check the status of your ALB. Check the Target Groups and verify that they route traffic to port 32202 (this should be evident in the Description and Targets tabs; the health checks should also use this port). Check the Load Balancer listener: it should be listening on port 80.
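If you prefer the CLI, you can inspect the same information with the AWS CLI; the load balancer name below is a placeholder taken from the Ingress events above, so substitute the name shown for your own ALB:

```sh
# List target groups and the ports they forward to (expect the NodePort, e.g. 32202)
aws elbv2 describe-target-groups \
  --query 'TargetGroups[].{Name:TargetGroupName,Port:Port,HealthCheckPort:HealthCheckPort}'

# Look up the ALB ARN by name, then list its listeners (expect port 80)
ALB_ARN=$(aws elbv2 describe-load-balancers --names clusterk8sl-default-webapp-9895 \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)
aws elbv2 describe-listeners --load-balancer-arn $ALB_ARN --query 'Listeners[].Port'
```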
Once the ALB has a status of 'active' in the EC2 Console, you can curl your Ingress endpoint using the Address of the Ingress resource:
$ curl clusterk8sl-default-webapp-9895-1236164836.us-east-1.elb.amazonaws.com
Hello Arunc
To clean up, delete the resources you created:
$ kubectl delete -f templates/alb-ingress-resource.yaml
$ kubectl delete -f templates/app.yml
$ kubectl delete -f templates/alb-ingress-controller.yaml
$ kubectl delete -f https://raw.githubusercontent.com/coreos/alb-ingress-controller/master/examples/default-backend.yaml
Check in the EC2 console to ensure your ALB has been deleted.
The Kube AWS Ingress Controller creates an AWS Application Load Balancer (ALB) that is used to terminate TLS connections, using AWS Certificate Manager (ACM) or AWS Identity and Access Management (IAM) certificates. The ALB routes traffic to an Ingress HTTP router, for example Skipper, which routes traffic to Kubernetes services and implements advanced features such as blue-green deployments, feature toggles, rate limits, circuit breakers, shadow traffic, and A/B tests.
In short, the major differences from the CoreOS ALB Ingress Controller are:

- it uses CloudFormation instead of direct API calls
- it does not have the AWS route limitations
- it automatically finds the best matching ACM or IAM certificate for your Ingress
- you are free to use an HTTP router implementation of your choice, which can implement more features such as blue-green deployments
This tutorial assumes you have GNU sed installed. If not, read the commands that use sed and modify the files by hand according to the sed command being run. If you are running BSD or macOS, you can use gsed.
Cloud labels are required to make the Kube AWS Ingress Controller work, because it finds the AWS Application Load Balancers it manages by AWS tags, which are called cloud labels in Kops.
If not already set, you have to set some environment variables to choose the availability zones to deploy to, the S3 bucket name for the Kops configuration, and your Kops cluster name:
export AWS_AVAILABILITY_ZONES=eu-central-1b,eu-central-1c
export S3_BUCKET=kops-aws-workshop-<your-name>
export KOPS_CLUSTER_NAME=example.cluster.k8s.local
Then you create the Kops cluster and validate that everything is set up properly.
export KOPS_STATE_STORE=s3://${S3_BUCKET}
kops create cluster --name $KOPS_CLUSTER_NAME --zones $AWS_AVAILABILITY_ZONES --cloud-labels kubernetes.io/cluster/$KOPS_CLUSTER_NAME=owned --yes
kops validate cluster
This is the effective policy that your EC2 nodes need for the kube-aws-ingress-controller, which we will use:
{
"Effect": "Allow",
"Action": [
"acm:ListCertificates",
"acm:DescribeCertificate",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:AttachLoadBalancers",
"autoscaling:DetachLoadBalancers",
"autoscaling:DetachLoadBalancerTargetGroups",
"autoscaling:AttachLoadBalancerTargetGroups",
"cloudformation:*",
"elasticloadbalancing:*",
"elasticloadbalancingv2:*",
"ec2:DescribeInstances",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeRouteTables",
"ec2:DescribeVpcs",
"iam:GetServerCertificate",
"iam:ListServerCertificates"
],
"Resource": [
"*"
]
}
To apply the mentioned policy, you have to add additionalPolicies to your cluster configuration with Kops, so edit your cluster:
kops edit cluster $KOPS_CLUSTER_NAME
and add this to your node policy:
additionalPolicies:
node: |
[
{
"Effect": "Allow",
"Action": [
"acm:ListCertificates",
"acm:DescribeCertificate",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:AttachLoadBalancers",
"autoscaling:DetachLoadBalancers",
"autoscaling:DetachLoadBalancerTargetGroups",
"autoscaling:AttachLoadBalancerTargetGroups",
"cloudformation:*",
"elasticloadbalancing:*",
"elasticloadbalancingv2:*",
"ec2:DescribeInstances",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeRouteTables",
"ec2:DescribeVpcs",
"iam:GetServerCertificate",
"iam:ListServerCertificates"
],
"Resource": ["*"]
}
]
After that, apply the change to your cluster and roll the nodes:
kops update cluster $KOPS_CLUSTER_NAME --yes
kops rolling-update cluster
To be able to route traffic from the ALB to your nodes, you need to create an Amazon EC2 security group with Kubernetes tags that allows ingress on ports 80 and 443 from the internet, and allows everything from the ALBs to your nodes. Tags are used by Kubernetes components to find the AWS resources owned by the cluster. We will do this with the AWS CLI:
aws ec2 create-security-group --description ingress.$KOPS_CLUSTER_NAME --group-name ingress.$KOPS_CLUSTER_NAME
aws ec2 describe-security-groups --group-names ingress.$KOPS_CLUSTER_NAME
sgidingress=$(aws ec2 describe-security-groups --filters Name=group-name,Values=ingress.$KOPS_CLUSTER_NAME | jq '.["SecurityGroups"][0]["GroupId"]' -r)
sgidnode=$(aws ec2 describe-security-groups --filters Name=group-name,Values=nodes.$KOPS_CLUSTER_NAME | jq '.["SecurityGroups"][0]["GroupId"]' -r)
aws ec2 authorize-security-group-ingress --group-id $sgidingress --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $sgidingress --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id $sgidnode --protocol all --port -1 --source-group $sgidingress
aws ec2 create-tags --resources $sgidingress --tags "kubernetes.io/cluster/id=owned" "kubernetes:application=kube-ingress-aws-controller"
To have TLS termination you can use AWS-managed certificates. If you are unsure whether you have at least one certificate provisioned, use the following command to list your ACM certificates:
aws acm list-certificates
If you have one, you can move on to the next section.
To create an ACM certificate, you have to request a certificate for a domain name that you own in Route 53, for example example.org. Here we will request one wildcard certificate for *.example.org:
aws acm request-certificate --domain-name *.example.org
You will have to complete a validation challenge to prove ownership of the given domain. In most cases you have to click on a link in an e-mail sent by certificates.amazon.com; the e-mail subject will be Certificate approval for <example.org>.
If you did the challenge successfully, you can now check the status of your certificate. Find the ARN of the new certificate:
aws acm list-certificates
Describe the certificate and check the Status value:
aws acm describe-certificate --certificate-arn arn:aws:acm:<snip> | jq '.["Certificate"]["Status"]'
If this is not "ISSUED", your certificate is not yet valid and you have to fix it. To resend the CSR validation e-mail, you can use
aws acm resend-validation-email
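The command needs the certificate ARN and the domains to resend for; a sketch of the full invocation, where the ARN and domain are placeholders for your own values:

```sh
aws acm resend-validation-email \
  --certificate-arn arn:aws:acm:<region>:<account-id>:certificate/<id> \
  --domain example.org \
  --validation-domain example.org
```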
The kube-aws-ingress-controller will be deployed as a Deployment with one replica, which is OK for production because it only configures ALBs.
REGION=${AWS_AVAILABILITY_ZONES#*,}
REGION=${REGION:0:-1}
sed -i "s/<REGION>/$REGION/" templates/kube-aws-ingress-controller-deployment.yaml
kubectl create -f templates/kube-aws-ingress-controller-deployment.yaml
Skipper will be deployed as a DaemonSet:
kubectl create -f templates/skipper-ingress-daemonset.yaml
Check if the installation was successful:
kops validate cluster
If not, and you are sure all the previous steps were done correctly, check the logs of the pod that is not in the Running state:
kubectl -n kube-system get pods -l component=ingress
kubectl -n kube-system logs <podname>
Deploy one sample application, changing the hostname to match your Route 53 domain and ACM certificate:
kubectl create -f templates/sample-app-v1.yaml
kubectl create -f templates/sample-svc-v1.yaml
sed -i "s/<HOSTNAME>/demo-app.example.org/" templates/sample-ing-v1.yaml
kubectl create -f templates/sample-ing-v1.yaml
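For orientation, the Ingress in sample-ing-v1.yaml looks roughly like this; the backend service name is an assumption, and <HOSTNAME> is the placeholder that the sed command above replaces:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-app-v1
  labels:
    application: demo
spec:
  rules:
  - host: <HOSTNAME>                  # replaced by sed, e.g. demo-app.example.org
    http:
      paths:
      - backend:
          serviceName: sample-app-v1  # placeholder; must match the deployed service
          servicePort: 80
```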
Check if your deployment was successful:
kubectl get pods,svc -l application=demo
To check if your Ingress created an ALB, check the ADDRESS column:
kubectl get ing -l application=demo -o wide
NAME          HOSTS               ADDRESS                                                               PORTS   AGE
demo-app-v1   myapp.example.org   example-lb-19tamgwi3atjf-1066321195.us-central-1.elb.amazonaws.com   80      1m
Once it is provisioned, you can check with curl; the HTTP to HTTPS redirect is created automatically by Skipper:
curl -L -H"Host: myapp.example.org" example-lb-19tamgwi3atjf-1066321195.us-central-1.elb.amazonaws.com <body style='color: green; background-color: white;'><h1>Hello!</h1>
Check if Kops dns-controller created a DNS record:
curl -L myapp.example.org
<body style='color: green; background-color: white;'><h1>Hello!</h1>
We assume you have all components running that were applied in Base features.
Deploy a second Ingress with a feature toggle and a rate limit to protect your backend (a sketch of the relevant annotations is shown after the commands):
sed -i "s/<HOSTNAME>/demo-app.example.org/" templates/sample-ing-v2.yaml kubectl create -f templates/sample-ing-v2.yaml
Deploy a second sample application:
kubectl create -f templates/sample-app-v2.yaml
kubectl create -f templates/sample-svc-v2.yaml
Now, you can test the feature toggle to access the new v2 application:
curl "https://myapp.example.org/?version=v2" <body style='color: white; background-color: green;'><h1>Hello AWS!</h1>
If you run this more often, you can easily trigger the rate limit, which stops your calls from being proxied to the backend:
for i in {0..9}; do curl -v "https://myapp.example.org/?version=v2"; done
You should see output similar to:
*   Trying 52.222.161.4...
-------- a lot of TLS output --------
> GET /?version=v2 HTTP/1.1
> Host: myapp.example.org
> User-Agent: curl/7.49.0
> Accept: */*
>
< HTTP/1.1 429 Too Many Requests
< Content-Type: text/plain; charset=utf-8
< Server: Skipper
< X-Content-Type-Options: nosniff
< X-Rate-Limit: 60
< Date: Mon, 27 Nov 2017 18:19:26 GMT
< Content-Length: 18
<
Too Many Requests
* Connection #0 to host myapp.example.org left intact
Your endpoint is now protected.
Next we will show traffic switching. Deploy an Ingress with traffic switching: 80% of the traffic goes to v1 and 20% to v2. Change the hostname to match your Route 53 domain and ACM certificate, as before:
sed -i "s/<HOSTNAME>/demo-app.example.org/" templates/sample-ing-tf.yaml kubectl create -f templates/sample-ing-traffic.yaml
Remove the old Ingresses, which would interfere with the newly created one:
kubectl delete -f templates/sample-ing-v1.yaml
kubectl delete -f templates/sample-ing-v2.yaml
Check the deployments and services (there should be two of each):
kubectl get pods,svc -l application=demo
To check if your Ingress has an ALB, check the ADDRESS column:
kubectl get ing -l application=demo -o wide
NAME                     HOSTS               ADDRESS                                                               PORTS   AGE
demo-traffic-switching   myapp.example.org   example-lb-19tamgwi3atjf-1066321195.us-central-1.elb.amazonaws.com   80      1m
Once it is provisioned, you can check with curl; the HTTP to HTTPS redirect is created automatically by Skipper:
curl -L -H"Host: myapp.example.org" example-lb-19tamgwi3atjf-1066321195.us-central-1.elb.amazonaws.com <body style='color: green; background-color: white;'><h1>Hello!</h1>
Check if Kops dns-controller created a DNS record:
curl -L myapp.example.org
<body style='color: green; background-color: white;'><h1>Hello!</h1>
You can now open your browser at https://myapp.example.org (depending on your hostname) and reload it maybe 5 times to see the switching from a white background to a green background. If you modify the zalando.org/backend-weights annotation, you can control the chance that you will hit the v1 or the v2 application. Use kubectl annotate to change this:
kubectl annotate ingress demo-traffic-switching zalando.org/backend-weights='{"demo-app-v1": 20, "demo-app-v2": 80}'
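For reference, the annotation maps backend service names to relative weights. In the traffic-switching Ingress it might look roughly like this fragment; the service names are taken from the annotate command above, while the surrounding manifest is assumed:

```yaml
metadata:
  name: demo-traffic-switching
  annotations:
    # 80% of requests go to v1, 20% to v2; re-annotate to shift traffic gradually
    zalando.org/backend-weights: '{"demo-app-v1": 80, "demo-app-v2": 20}'
```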