kustomize-controller reports same object "configured" on every reconciliation #1934
Comments
I first noted this problem in the "flux" channel of the CNCF Slack workspace.
This is due to the fact that KongClusterPlugin does not conform to the conventional Kubernetes CRD structure: Flux assumes all custom resources have a top-level "spec" field.
I see the examples in the referenced document, but those are hewing to a convention not enforced by the API machinery. Kubernetes accepts and handles Kong's CRDs well enough, not complaining about the lack of a top-level "spec" or "status" field.
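For reference, a KongClusterPlugin carries its configuration at the top level of the object, with no "spec" at all (the plugin and values below are only an example):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: example-rate-limiting
  annotations:
    kubernetes.io/ingress.class: kong
# Configuration sits at the top level, not under a "spec" field.
plugin: rate-limiting
config:
  minute: 5
  policy: local
```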
Would it be feasible to introduce something like the following as a heuristic?
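A sketch of such a heuristic in Go, against apimachinery's unstructured type (the function names and enclosing package here are hypothetical, not existing Flux code):

```go
// Package ssa is a hypothetical home for the drift-detection heuristic.
package ssa

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// hasSpec reports whether the object follows the usual spec/status convention.
func hasSpec(obj *unstructured.Unstructured) bool {
	_, found := obj.Object["spec"]
	return found
}

// driftFields picks the fields worth comparing for drift detection: the
// "spec" when one exists, otherwise the whole object minus the fields
// that the API server manages on its own.
func driftFields(obj *unstructured.Unstructured) map[string]interface{} {
	if hasSpec(obj) {
		return map[string]interface{}{"spec": obj.Object["spec"]}
	}
	fields := make(map[string]interface{}, len(obj.Object))
	for k, v := range obj.Object {
		if k == "metadata" || k == "status" {
			continue
		}
		fields[k] = v
	}
	return fields
}
```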
Not wanting to trust just my interpretation of the CRD convention, I asked in the "sig-api-machinery" channel of the "Kubernetes" Slack workspace. @howardjohn confirmed that the top-level "spec" field is a matter of convention, not a requirement.
Describe the bug
Every time Flux's kustomize-controller reconciles one of my Kustomization objects, it reports one object as "configured," while all the other objects covered by the same Kustomization remain "unchanged," per the container log and Flux notifications. The object is of API group "configuration.konghq.com," version "v1," and kind KongClusterPlugin, per the CustomResourceDefinition from the Kong ingress controller project. The Git repository that Flux is watching does not change between these reconciliation attempts.
I fetched this KongClusterPlugin object via kubectl get before and after one of these reconciliation attempts, formatted as YAML. The only field that changed was "metadata.resourceVersion."
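The comparison amounts to something like the following, using placeholder object and Kustomization names:

```console
$ kubectl get kongclusterplugin example-rate-limiting -o yaml > before.yaml
$ flux reconcile kustomization apps
$ kubectl get kongclusterplugin example-rate-limiting -o yaml > after.yaml
$ diff before.yaml after.yaml
```

The only line the diff reports is the one carrying "metadata.resourceVersion."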
Steps to reproduce
Expected behavior
Flux should update the Kubernetes object if it differs materially from the manifests read from the GitRepository source, but once the two are sufficiently similar, Flux should not find any reason to patch or update the object again.
Screenshots and recordings
The Flux notification for each reconciliation attempt reports the KongClusterPlugin object as "configured" and every other object covered by the Kustomization as "unchanged."
OS / Distro
Container-Optimized OS from Google (version "5.4.89+")
Flux version
0.18.2
Flux check
► checking prerequisites
✔ Kubernetes 1.19.13-gke.1200 >=1.19.0-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.12.0
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.15.4
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.17.0
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.16.0
✔ all checks passed
Git provider
GitHub
Container Registry provider
DockerHub
Additional context
We never saw this problem with the same Kubernetes manifests when using versions of Flux up through the 0.17 minor version.