This repository contains the FLUID controller, a fork of the official NGINX ingress controller that uses a ConfigMap to store the NGINX configuration.
This fork does not need to reload NGINX when a server or backend directive changes. Thanks to Lua libraries, configuration changes are picked up on the fly, which avoids reload churn when a cluster contains a huge number of servers or backends that change frequently.
Learn more about using Ingress on k8s.io
Configuring a webserver or loadbalancer is harder than it should be. Most webserver configuration files are very similar. There are some applications that have weird little quirks that tend to throw a wrench in things, but for the most part you can apply the same logic to them and achieve a desired result.
The Ingress resource embodies this idea, and an Ingress controller is meant to handle all the quirks associated with a specific "class" of Ingress.
An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's `/ingresses` endpoint for updates to the Ingress resource. Its job is to satisfy requests for Ingresses.
The table below summarizes the key differences between the NCCloud/fluid, kubernetes/ingress-nginx, and nginxinc/kubernetes-ingress Ingress controllers. Note that the table has two columns for the nginxinc/kubernetes-ingress Ingress controller, as it can be used both with NGINX and NGINX Plus.
Aspect or Feature | NCCloud/fluid | kubernetes/ingress-nginx | nginxinc/kubernetes-ingress with NGINX | nginxinc/kubernetes-ingress with NGINX Plus |
---|---|---|---|---|
**Fundamental** | | | | |
Authors | Namecheap community | Kubernetes community | NGINX Inc and community | NGINX Inc and community |
Server | OpenResty build based on Alpine image | Custom NGINX build that includes several third-party modules | NGINX official mainline build | NGINX Plus |
Commercial support | N/A | N/A | N/A | Included |
**Load balancing configuration** | | | | |
Merging Ingress rules with the same host | Supported | Supported | Under consideration | Under consideration |
HTTP load balancing extensions - Annotations | See the supported annotations | See the supported annotations | See the supported annotations | See the supported annotations
HTTP load balancing extensions - ConfigMap | See the supported ConfigMap keys | See the supported ConfigMap keys | See the supported ConfigMap keys | See the supported ConfigMap keys
TCP/UDP | Supported via a ConfigMap | Supported via a ConfigMap | Not supported | Not supported |
Websocket | Supported | Supported | Supported via an annotation | Supported via an annotation |
TCP SSL Passthrough | Supported via a ConfigMap | Supported via a ConfigMap | Not supported | Not supported |
JWT validation | Not supported | Not supported | Not supported | Supported |
Session persistence | Supported via a third-party module | Supported via a third-party module | Not supported | Supported |
Configuration templates *1 | See the template | See the template | See the templates | See the templates |
**Deployment** | | | | |
Command-line arguments *2 | See the arguments | See the arguments | See the arguments | See the arguments |
TLS certificate and key for the default server | Required as a command-line argument / auto-generated | Required as a command-line argument / auto-generated | Required as a command-line argument | Required as a command-line argument
Helm chart | Supported | Supported | Coming soon | Coming soon |
**Operational** | | | | |
Reporting the IP address(es) of the Ingress controller into Ingress resources | Supported | Supported | Coming soon | Coming soon |
Extended Status | Supported via a third-party module | Supported via a third-party module | Not supported | Supported |
Prometheus Integration | Supported | Supported | Not supported | Supported |
Dynamic reconfiguration of endpoints (no configuration reloading) | Supported | Not supported | Not supported | Supported |
Dynamic reconfiguration of virtualhosts/servernames (no configuration reloading) | Supported | Not supported | Not supported | Not supported |
- Conventions
- Requirements
- Deployment
- Command line arguments
- Contribute
- TLS
- Annotation ingress.class
- Customizing NGINX
- Source IP address
- Exposing TCP and UDP Services
- Proxy Protocol
- ModSecurity Web Application Firewall
- OpenTracing
- VTS and Prometheus metrics
- Custom errors
- NGINX status page
- Running multiple ingress controllers
- Disabling NGINX ingress controller
- Retries in non-idempotent methods
- Log format
- Websockets
- Optimizing TLS Time To First Byte (TTTFB)
- Debug & Troubleshooting
- Limitations
- Why endpoints and not services?
- External Articles
Anytime we reference a TLS secret, we mean a PEM-encoded X.509 certificate and its matching RSA 2048 key. You can generate such a certificate and key with:

```console
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}"
```

and create the secret via:

```console
kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}
```
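The resulting secret can then be referenced from the `tls` section of an Ingress. A minimal sketch, assuming the host `foo.example.com` and a backend service `foo-svc` (both illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - foo.example.com
    secretName: foo-tls        # the secret created with kubectl above
  rules:
  - host: foo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: foo-svc
          servicePort: 80
```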
The default backend is a service which handles all URL paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress). Basically a default backend exposes two URLs:

- `/healthz` that returns 200
- `/` that returns 404
The sub-directory /images/404-server provides a service which satisfies the requirements for a default backend. The sub-directory /images/custom-error-pages provides an additional service for the purpose of customizing the error pages served via the default backend.
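A minimal sketch of a default backend deployment, wired to the controller through the `--default-backend-service` flag shown in the deployment example later in this document. The names, namespace and image are assumptions; any image that answers 200 on `/healthz` and 404 on `/` (such as one built from /images/404-server) works:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-default-backend
  namespace: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-backend
  template:
    metadata:
      labels:
        app: default-backend
    spec:
      containers:
      - name: default-backend
        # Assumption: a stock default backend image; swap in your own build.
        image: gcr.io/google_containers/defaultbackend:1.4
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz   # must return 200 for the pod to stay healthy
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-default-backend
  namespace: ingress
spec:
  selector:
    app: default-backend
  ports:
  - port: 80
    targetPort: 8080
```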
If you have multiple Ingress controllers in a single cluster, you can pick one by specifying the `ingress.class` annotation, e.g. creating an Ingress with an annotation like
```yaml
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "gce"
```
will target the GCE controller, forcing the nginx controller to ignore it, while an annotation like
```yaml
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
```
will target the nginx controller, forcing the GCE controller to ignore it.
Note: Deploying multiple ingress controllers and not specifying the annotation will result in both controllers fighting to satisfy the Ingress.
There are three ways to customize NGINX:

- ConfigMap: use a ConfigMap to set global configurations in NGINX (see the sketch after this list).
- Annotations: use these if you want a specific configuration for a particular Ingress rule.
- Custom template: use this when more specific settings are required, like open_file_cache, adjusting listen options such as rcvbuf, or when it is not possible to change the configuration through the ConfigMap.
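A minimal sketch of the ConfigMap approach, reusing the namespace and name passed to `--configmap` in the deployment example later in this document; the two keys are just common examples of documented settings:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-internal-controller
  namespace: ingress
data:
  # All values must be strings; each key corresponds to a documented setting.
  proxy-body-size: "10m"     # maximum allowed size of the client request body
  server-tokens: "false"     # hide the NGINX version in error pages and headers
```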
By default NGINX uses the content of the header `X-Forwarded-For` as the source of truth to get information about the client IP address. This works without issues in L7 if we configure the setting `proxy-real-ip-cidr` with the IP/network address of the trusted external load balancer. If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR.
Another option is to enable proxy protocol using `use-proxy-protocol: "true"`. In this mode NGINX does not use the content of the header to get the source IP address of the connection.
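Both settings live in the controller's ConfigMap. A sketch, assuming an AWS VPC CIDR of 10.0.0.0/16 (replace with your own):

```yaml
data:
  # L7 case: trust X-Forwarded-For only when set by the external load balancer.
  proxy-real-ip-cidr: "10.0.0.0/16"
  # L4 case: take the client address from the proxy protocol header instead.
  use-proxy-protocol: "true"
```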
If you are using an L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you can use the Proxy Protocol to forward traffic: it sends the connection details before forwarding the actual TCP connection itself. Amongst others, ELBs in AWS and HAProxy support Proxy Protocol.
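On AWS, for example, the Proxy Protocol can be requested on the ELB that fronts a `type=LoadBalancer` Service through a standard cloud-provider annotation (Service name, namespace and selector below are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress
  annotations:
    # Enable the proxy protocol on every ELB backend port.
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
```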
If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress, you need to specify the annotation `kubernetes.io/ingress.class: "nginx"` in all ingresses that you would like this controller to claim. This mechanism also provides users the ability to run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves "internal" traffic). When utilizing this functionality the option `--ingress-class` should be changed to a value unique for the cluster within the definition of the replication controller. Here is a partial example:
```yaml
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-internal-controller
        args:
        - /nginx-ingress-controller
        - '--default-backend-service=ingress/nginx-ingress-default-backend'
        - '--election-id=ingress-controller-leader-internal'
        - '--ingress-class=nginx-internal'
        - '--configmap=ingress/nginx-ingress-internal-controller'
```
Not specifying the annotation will lead to multiple ingress controllers claiming the same ingress. Specifying a value which does not match the class of any existing ingress controllers will result in all ingress controllers ignoring the ingress.
The use of multiple ingress controllers in a single cluster is supported in Kubernetes versions >= 1.3.
Support for websockets is provided by NGINX out of the box. No special configuration is required. The only requirement to avoid connections being closed is to increase the values of `proxy-read-timeout` and `proxy-send-timeout`. The default value of these settings is `60` seconds. A more adequate value to support websockets is a value higher than one hour (`3600`).
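As a sketch, the timeouts can be raised for a single websocket-serving Ingress via annotations (the annotation prefix shown matches recent kubernetes/ingress-nginx releases and may differ in older versions):

```yaml
metadata:
  name: websocket-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Keep idle websocket connections open for up to an hour.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```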
Important: If the NGINX ingress controller is exposed with a service `type=LoadBalancer`, make sure the protocol between the load balancer and NGINX is TCP.
NGINX provides the configuration option `ssl_buffer_size` to allow the optimization of the TLS record size. This improves the TLS Time To First Byte (TTTFB). The default value in the Ingress controller is `4k` (the NGINX default is `16k`).
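It can be tuned through the controller's ConfigMap; a sketch, assuming the `ssl-buffer-size` key behaves as in kubernetes/ingress-nginx:

```yaml
data:
  # Smaller TLS records reach the client sooner (better TTTFB);
  # larger records reduce framing overhead for bulk transfers.
  ssl-buffer-size: "4k"
```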
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored using `retry-non-idempotent=true` in the configuration ConfigMap.
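For instance, added to the controller's ConfigMap from the earlier sketches:

```yaml
data:
  # Restore the pre-1.9.13 behavior of retrying POST, LOCK and PATCH on error.
  retry-non-idempotent: "true"
```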
Setting the annotation `kubernetes.io/ingress.class` to any value which does not match a valid ingress class will force the NGINX Ingress controller to ignore your Ingress. If you are only running a single NGINX ingress controller, this can be achieved by setting the annotation to any value except "nginx" or an empty string. Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.
- Ingress rules for TLS require the definition of the field `host`
The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.