Helm Chart should support setting annotations-risk-level in configmap #12618
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I have …

On any ingress resource with a …
Same here - very unwelcome.
This broke configurations using ModSecurity snippets for our deployments; ones that had already been validated were stuck in a "sync" state. We had to downgrade back to 1.11.
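For context, a sketch of the kind of Ingress this affects; the name, backend, and ModSecurity rule are illustrative, not taken from the report above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app                     # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/enable-modsecurity: "true"
    # Snippet annotations like this one are classified as Critical risk in
    # controller v1.12, so the admission webhook rejects the Ingress while
    # annotations-risk-level is at its new default (High).
    nginx.ingress.kubernetes.io/modsecurity-snippet: |
      SecRuleEngine On
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: example-app                 # hypothetical service
      port:
        number: 80
```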
Hit this as well. The way it is handled is also very unfortunate: nothing seems to be logged about the ingress being rejected because of the now-failing validation. Ingress configurations that already existed on the cluster prior to installing this new version should have been exempt, at least for some number of releases, so that people could find this issue when the admission webhook caught it.
Minor Release 4.12.0 Introduced Breaking Changes, Impacting Multiple Clusters

Considering that this was a minor release, it has caused significant breaking changes for many users. In my case, it brought down six of my clusters because ArgoCD automatically updated the Helm chart after detecting the new minor release. Since these updates are automated for minor versions, this unexpected behavior caused major disruptions. To restore functionality, I had to pin the chart version to 4.11.3. Here's the change I had to make to resolve the issue until suitable changes can be added:

```diff
- targetRevision: "~4.x"
+ targetRevision: "4.11.3"
```

Could we please be more considerate when introducing "potential" breaking changes in minor releases? This would help avoid such widespread issues for users relying on automated updates.
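For context, a minimal sketch of where that pin lives in an Argo CD Application manifest; the Application name and namespaces are assumptions, only the chart repo and `targetRevision` matter here:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ingress-nginx                # illustrative name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://kubernetes.github.io/ingress-nginx
    chart: ingress-nginx
    targetRevision: "4.11.3"         # pinned instead of "~4.x" to avoid the 4.12 upgrade
  destination:
    server: https://kubernetes.default.svc
    namespace: ingress-nginx
  syncPolicy:
    automated: {}
```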
We had a production cluster affected by this change as well, with service downtime until we were able to track down what was happening and mitigate the issue.
wth man
You can set it via `controller.config` in the chart's values.
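A minimal values sketch; the key path is confirmed by the `values.yaml` and ConfigMap dumps later in this thread:

```yaml
controller:
  config:
    # keys under controller.config are rendered into the controller ConfigMap
    annotations-risk-level: Critical
```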
Thank you, @chasestech, for pointing this out! I can confirm that your suggested changes work as expected. Here's the updated ConfigMap:

```console
❯ k get cm ingress-nginx-controller -o yaml
apiVersion: v1
data:
  allow-snippet-annotations: "true"
  annotations-risk-level: Critical
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"allow-snippet-annotations":"true","annotations-risk-level":"Critical"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx","app.kubernetes.io/version":"1.11.3","argocd.argoproj.io/instance":"ingress-nginx","helm.sh/chart":"ingress-nginx-4.11.3"},"name":"ingress-nginx-controller","namespace":"kube-system"}}
  creationTimestamp: "2024-07-04T19:38:38Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.11.3
    argocd.argoproj.io/instance: ingress-nginx
    helm.sh/chart: ingress-nginx-4.11.3
  name: ingress-nginx-controller
  namespace: kube-system
  resourceVersion: "74298954"
  uid: 63f58db5-4a95-4a9c-97d2-579f69765b25
```

Additionally, here's my `values.yaml`:

```console
❯ cat values.yaml
tcp: {}
udp: {}
controller:
  replicaCount: 3
  allowSnippetAnnotations: true
  compute-full-forwarded-for: "true"
  forwarded-for-header: "X-Forwarded-For"
  use-forwarded-headers: "true"
  service:
    type: LoadBalancer
    loadBalancerIP: "10.29.200.1"
    externalTrafficPolicy: "Local"
    annotations:
      metallb.universe.tf/allow-shared-ip: "ingress"
  watchIngressWithoutClass: true
  ingressClassResource:
    default: true
  extraArgs:
    default-ssl-certificate: "external-secrets/example-io-tls"
  config:
    annotations-risk-level: Critical
```

After applying your changes, I was able to successfully upgrade the Helm chart to 4.12.0:

```console
❯ k get cm ingress-nginx-controller -o yaml
apiVersion: v1
data:
  allow-snippet-annotations: "true"
  annotations-risk-level: Critical
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"allow-snippet-annotations":"true","annotations-risk-level":"Critical"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx","app.kubernetes.io/version":"1.12.0","argocd.argoproj.io/instance":"ingress-nginx","helm.sh/chart":"ingress-nginx-4.12.0"},"name":"ingress-nginx-controller","namespace":"kube-system"}}
  creationTimestamp: "2024-07-04T19:38:38Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.12.0
    argocd.argoproj.io/instance: ingress-nginx
    helm.sh/chart: ingress-nginx-4.12.0
  name: ingress-nginx-controller
  namespace: kube-system
  resourceVersion: "74300968"
  uid: 63f58db5-4a95-4a9c-97d2-579f69765b25
```

Thanks again for highlighting this! I also reviewed the default …
Thanks @chasestech. Works great!
As stated before, the option can be set via the chart's `controller.config` value.

Also, these changes were announced a long time ago: in the first public beta release of v1.12, in our talk at KubeCon NA, and in the release notes of the official and stable v1.12 release.

We as maintainers have introduced such changes in minor releases in the past, as we follow the same versioning concept as the Kubernetes project. There's no Kubernetes v2 yet, but minor releases can still contain feature deprecations, removals, or breaking changes. This is also why you wouldn't blindly upgrade your Kubernetes cluster without reading the release notes.
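For anyone applying this without editing a values file, a sketch of an equivalent `helm upgrade` invocation; the release name, repo alias, and namespace are assumptions, adjust to your install:

```console
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set controller.config.annotations-risk-level=Critical
```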
Thanks for your hard work on this project; we're all grateful for your efforts! May I recommend an improvement for the release notes? Perhaps put the `annotations-risk-level` change at the top. I've been using the project for years but had never heard of (let alone set) the `annotations-risk-level` option.

Another possibility (which Kubernetes also does) is to provide deprecation warnings for a time, so that people have a chance to adjust their configuration before the breaking change is shipped. Something like this would be helpful: …

Last, the error that I get when trying to deploy an ingress that's invalid could mention either (a) the annotation that is risky, and/or (b) the setting that controls the annotation risk level. Thanks again!
Thank you for your response and for clarifying the reasoning behind the changes. However, I would like to respectfully point out a concern regarding the adherence to semantic versioning (SemVer) as stated in your changelog: "The release numbering uses semantic versioning." Semantic versioning, as defined, is quite clear:

> Given a version number MAJOR.MINOR.PATCH, increment the MAJOR version when you make incompatible API changes, the MINOR version when you add functionality in a backward compatible manner, and the PATCH version when you make backward compatible bug fixes.

Introducing breaking changes in a minor release, regardless of prior announcements or alignment with Kubernetes practices, does not align with the principles of SemVer. While I understand the rationale behind following Kubernetes' approach, it creates confusion when the project explicitly claims to follow SemVer but then deviates from its core principles.

If the project intends to follow a different versioning philosophy, such as the one Kubernetes uses, I would suggest updating the documentation and changelog to reflect this explicitly. This would help set clearer expectations for users and avoid misunderstandings about the nature of versioning and potential breaking changes in minor releases.

I appreciate the work you and the team put into maintaining this project, but I believe clarity and consistency in versioning are critical for the community relying on it. Thank you for considering this feedback.
Sorry to jump in, but on the actual topic: I think the SemVer-or-not part should be moved into a different issue/thread, no matter who is right. It is a topic of its own, and this issue may then be used as an example for whatever point you want to make.

To the topic: before 4.12, setting `controller.allowSnippetAnnotations: true` was enough for our snippet-based ingresses to pass validation. With 4.12, that no longer suffices. I do not mind if the option has now been moved to

```
controller.config.annotations-risk-level=Critical
```

So to sum it up in some simple questions: …

Thanks!
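As a quick empirical check that the value actually lands in the controller ConfigMap; the ConfigMap name and namespace are taken from the dumps earlier in this thread, adjust to your install:

```console
kubectl get configmap ingress-nginx-controller -n kube-system \
  -o jsonpath='{.data.annotations-risk-level}'
# prints "Critical" once the chart value has been applied
```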
@EugenMayer Previously, the `annotations-risk-level` defaulted to Critical. As a breaking change, they lowered the default to High and buried the change in the release notes for the beta. (And now the devs are redefining SemVer because they want the cool label without following the requirements.)
With controller 1.12 / chart 4.12, annotations seem to be validated differently, breaking our setup, because we use `nginx.ingress.kubernetes.io/configuration-snippet` annotations, which are rated critical. So the validation now fails. However, there seems to be no way to set the `annotations-risk-level` to critical in the ConfigMap via the Helm chart. There is no option for it (or I haven't found it). I'm unaware of another issue.
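For illustration, a minimal example of such an Ingress; the name, host, service, and snippet body are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-snippet                  # hypothetical name
  annotations:
    # configuration-snippet is rated Critical risk in controller v1.12;
    # with the new default annotations-risk-level of High, admission fails.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Frame-Options: DENY";
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc        # hypothetical service
                port:
                  number: 80
```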