The new v1.0.0 IngressClass handling logic makes a zero-downtime Ingress controller upgrade hard for users #7502
Comments
@janosi: This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label.
So, my proposal is:
So with the first proposal, you can for example:
Does this work? @strongjz @tao12345666333 WDYT?
I am afraid prioritization of the annotation would not help :( The annotation has been deprecated since 1.18, and for that reason many users have started using the IngressClassName field instead. Those Ingresses do not have the annotation. That is, we would end up requesting a mass Ingress update from the users, and that update is not a long-term solution either, as the annotation is deprecated. I also wonder about that huge effort: I am not 100% familiar with the code. My understanding so far was that the controller watches for IngressClass resources based on the spec.Controller field, but after that everything works based on the metadata.Name field.
Another alternative I can think of is:
Doing this worked for me. Superficially, it fixed all broken old & new Ingresses on my dev system before too many people noticed; YMMV.
The process in the container is:
Great! As @benjamin-tucker mentioned here, I used "controller.extraArgs" and it works for me too.
Thank you
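For illustration only, a minimal values.yaml sketch of how controller.extraArgs can pass an --ingress-class argument through the ingress-nginx Helm chart; the class name nginx-old is hypothetical and may differ from the exact arguments the commenters used:

```yaml
# values.yaml sketch for the ingress-nginx Helm chart (illustrative, not the
# commenters' exact values): pass --ingress-class to the controller container.
# "nginx-old" is a hypothetical IngressClass name.
controller:
  extraArgs:
    ingress-class: nginx-old
```

With the chart, each key/value pair under controller.extraArgs is rendered as an extra --key=value argument on the controller container.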
NGINX Ingress controller version: v1.0.0 vs 0.4x
Kubernetes version (use `kubectl version`): 1.19, 1.20, 1.21, 1.22
Environment:
- Cloud provider or hardware configuration: Not relevant, the problem is generic for all users
- OS (e.g. from /etc/os-release): Not relevant
- Kernel (e.g. `uname -a`): Not relevant
- Install tools: Not relevant
- Basic cluster related info: `kubectl version`: 1.20, any kubectl that supports v1 Ingress
- How was the ingress-nginx-controller installed: Not relevant
What happened:
The 0.4x version of ingress-nginx-controller requires that `ingressClass.spec.controller` has the fixed value `k8s.io/ingress-nginx`. In order to shard Ingresses between two ingress-nginx-controller deployments there must be 2 IngressClasses in the following form: both carry that fixed controller value and differ only in `metadata.name`.
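A minimal sketch of two such IngressClasses; the names class-c and class-d are hypothetical stand-ins for the IngressClasses C and D described below:

```yaml
# Two IngressClasses for the 0.4x sharding setup (names are illustrative).
# Both use the fixed controller value required by 0.4x; only the name differs.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: class-c
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: class-d
spec:
  controller: k8s.io/ingress-nginx
```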
The 0.4x version of ingress-nginx-controller uses the `metadata.name` field of the IngressClass to identify which Ingresses it should process. There were two ingress-nginx-controller deployments on the cluster with version 0.48.1. Controller A was configured to watch IngressClass C. Controller B was configured to watch IngressClass D.
An Ingress resource refers to an IngressClass (and thus to a processing ingress-nginx-controller instance) via its `ingress.spec.ingressClassName` field. Ingress E had `ingress.spec.ingressClassName=C` and Ingress F had `ingress.spec.ingressClassName=D` on the cluster. As a result, Controller A processed Ingress E, and Controller B processed Ingress F. Neither controller processed the other one's Ingress. This was OK; the Ingresses were sharded between the controllers as expected. This setup has been in use since 1.18.
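For illustration, an Ingress pinned to one shard via its `spec.ingressClassName`; the Ingress name, class name, host, and backend below are all hypothetical:

```yaml
# Sketch of an Ingress that selects one shard through spec.ingressClassName.
# All names, the host, and the backend service are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-e
spec:
  ingressClassName: class-c
  rules:
    - host: example.test
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc
                port:
                  number: 80
```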
A new v1.0.0-beta2 ingress-nginx-controller instance was deployed on the same cluster to replace Controller A in the long run. I wanted to run the old and new controllers in parallel to test and verify that the new controller works OK. But the new v1.0.0-beta2 ingress-nginx-controller processed both Ingress E and Ingress F immediately.
That is, the new v1.0.0 controller cannot differentiate Ingresses based on the existing IngressClasses and the Ingresses that refer to those IngressClasses.
What you expected to happen:
As a user I would like to re-use at least my existing Ingresses during an upgrade to v1.0.0. I have had v1 Ingresses since 1.19, so the restriction that the new ingress controller supports only v1 Ingresses does not affect me. For this reason I expect that my existing Ingresses remain OK.
But it is not possible with the new IngressClass handling logic in v1.0.0. I have to create new IngressClasses for v1.0.0 on my cluster, and if I want to run the old (e.g. v0.48.1) and new (v1.0.0) controllers in parallel on the same cluster I also have to duplicate my Ingresses and configure the new ones to refer to the new IngressClasses.
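A minimal sketch of what that duplication could look like, assuming the v1.0.0 controller accepts a --controller-class argument to change the controller value it claims; the names and the controller value below are hypothetical:

```yaml
# Hypothetical IngressClass dedicated to the v1.0.0 controller: a distinct
# spec.controller value keeps it from colliding with the old IngressClasses.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: class-c-v1
spec:
  controller: example.com/ingress-nginx-v1
```

The v1.0.0 controller deployment would then be started with `--controller-class=example.com/ingress-nginx-v1`, and the duplicated Ingresses would set `spec.ingressClassName: class-c-v1`.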
The root cause is that the v1.0.0 Ingress controller uses the `ingressClass.spec.controller` field to identify the IngressClasses that it owns. And because the old IngressClasses must have the value `k8s.io/ingress-nginx` in that field, the v1.0.0 controller will process all old IngressClasses on the cluster with that controller value.

How to reproduce it:
1. Create 2 IngressClasses whose `.spec.controller` field has the value `k8s.io/ingress-nginx`.
2. Deploy two 0.4x ingress-nginx-controller deployments, each with a different `ingress-class` parameter. One deployment should refer to the first IngressClass like `--ingress-class=<ingressclass_1.metadata.name>`; the other deployment should refer to the second IngressClass like `--ingress-class=<ingressclass_2.metadata.name>` (the arguments are sketched after this list).
3. Create 2 Ingresses. One Ingress shall refer to one of the IngressClasses in its `.spec.ingressClassName`; the other Ingress shall refer to the other IngressClass.
4. Deploy a v1.0.0 ingress-nginx-controller on the same cluster and observe that it processes both Ingresses, regardless of which IngressClass they refer to.
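A rough sketch of the argument difference between the two 0.4x deployments in step 2; everything besides the --ingress-class flag (names, image tag, class values) is illustrative:

```yaml
# Illustrative pod-template excerpt for Controller A (0.48.1); Controller B is
# identical except that it uses --ingress-class=class-d. Names are hypothetical.
spec:
  containers:
    - name: controller
      image: k8s.gcr.io/ingress-nginx/controller:v0.48.1
      args:
        - /nginx-ingress-controller
        - --ingress-class=class-c
```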
Anything else we need to know:
Slack discussion: https://kubernetes.slack.com/archives/CANQGM8BA/p1629105520296900
/kind bug