kustomize-controller reports same object "configured" on every reconciliation #1934

Closed
seh opened this issue Oct 12, 2021 · 5 comments · Fixed by fluxcd/kustomize-controller#459


seh commented Oct 12, 2021

Describe the bug

Every time Flux's kustomize-controller reconciles one of my Kustomization objects, it reports one object as "configured" while all the other objects covered by the same Kustomization remain "unchanged," both in the container log and in Flux notifications. The object is of API group "configuration.konghq.com," version "v1," and kind KongClusterPlugin, per the CustomResourceDefinition from the Kong ingress controller project. The Git repository that Flux is watching does not change between these reconciliation attempts.

I fetched this KongClusterPlugin object via kubectl get before and after one of these reconciliation attempts, formatted as YAML. The only field that changed was "metadata.resourceVersion."

Steps to reproduce

  1. Install Flux version 0.18.2.
  2. Install the KongClusterPlugin CRD.
  3. Create a KongClusterPlugin manifest such as the following, and have a Flux Kustomization include it for reconciliation.
  4. Observe that every time Flux reconciles this object, it notes that it "configured" it, changing its resource version.
apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"
  name: prometheus-global
plugin: prometheus

Expected behavior

Flux should update the Kubernetes object if it differs materially from the manifests read from the GitRepository source, but once the two are sufficiently similar, Flux should not find any reason to patch or update the object again.

Screenshots and recordings

The Flux notification for each reconciliation attempt reports:

kustomization/redacted.flux-system
KongClusterPlugin/prometheus-global configured

OS / Distro

Container-Optimized OS from Google (version "5.4.89+")

Flux version

0.18.2

Flux check

► checking prerequisites
✔ Kubernetes 1.19.13-gke.1200 >=1.19.0-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.12.0
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.15.4
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.17.0
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.16.0
✔ all checks passed

Git provider

GitHub

Container Registry provider

DockerHub

Additional context

We never saw this problem with the same Kubernetes manifests when using versions of Flux up through the 0.17 minor version.

Code of Conduct

  • I agree to follow this project's Code of Conduct

seh commented Oct 12, 2021

I first noted this problem in the "flux" channel of the CNCF Slack workspace.


stefanprodan commented Oct 12, 2021

This is because KongClusterPlugin does not follow the Kubernetes CRD structural convention. Flux assumes that every custom resource has a spec field at the root of the object and uses that field to detect drift.
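
As a rough illustration only (not code from kustomize-controller), a spec-only comparison over unstructured objects might look like the sketch below; names such as specDrift are hypothetical. For a KongClusterPlugin, which has no top-level spec, such a comparison never sees the fields that actually carry the plugin's configuration.

package drift

import (
    "reflect"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// specDrift compares only the top-level "spec" of the desired and live
// objects. A KongClusterPlugin has no "spec" at all (its payload sits in
// root fields such as "plugin" and "config"), so this comparison cannot
// reflect the object's real content.
func specDrift(desired, live *unstructured.Unstructured) bool {
    desiredSpec, _, _ := unstructured.NestedMap(desired.Object, "spec")
    liveSpec, _, _ := unstructured.NestedMap(live.Object, "spec")
    return !reflect.DeepEqual(desiredSpec, liveSpec)
}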


seh commented Oct 12, 2021

I see the examples in the referenced document, but those hew to a convention that the API machinery does not enforce. Kubernetes accepts and handles Kong's CRDs well enough, without complaining about the lack of a top-level "spec" or "status" field.


seh commented Oct 12, 2021

Would it be feasible to introduce something like the following as a heuristic?

  • If the object has a "spec" field, use that.
  • Otherwise, use all top-level fields outside of "metadata" and "status."
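
A minimal sketch of that heuristic, assuming the objects are handled as unstructured.Unstructured (an illustration only, not code from the kustomize-controller code base; the helper name fieldsToCompare is made up):

package drift

import (
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// fieldsToCompare returns the portion of an object that would be used for
// drift detection under the proposed heuristic: prefer the conventional
// "spec" field, and otherwise fall back to every top-level field except
// "metadata" and "status", which covers CRDs such as KongClusterPlugin
// that keep their payload (e.g. "plugin", "config") at the root.
func fieldsToCompare(obj *unstructured.Unstructured) map[string]interface{} {
    if spec, found, _ := unstructured.NestedMap(obj.Object, "spec"); found {
        return spec
    }
    fields := map[string]interface{}{}
    for name, value := range obj.Object {
        if name != "metadata" && name != "status" {
            fields[name] = value
        }
    }
    return fields
}

Two objects would then count as in sync when their fieldsToCompare results are deeply equal.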


seh commented Oct 12, 2021

Not wanting to trust just my interpretation of the CRD convention, I asked in the "sig-api-machinery" channel of the "Kubernetes" Slack workspace. @howardjohn confirmed that the top-level "spec" field is a matter of convention, not a requirement.
