
Error reconciling ClusterChannelProvisioner when dispatcher service is updated #649

Closed

neosab opened this issue Nov 30, 2018 · 3 comments

neosab (Contributor) commented Nov 30, 2018

Expected Behavior

Channel controllers (Kafka & in-memory) reconcile ClusterChannelProvisioner when the dispatcher service (owned by them) is updated.

Actual Behavior

They run into the error "Could not find ClusterChannelProvisioner" during reconciliation.

in-memory-controller logs:

{"level":"info","ts":"2018-11-30T00:50:33.704Z","logger":"provisioner","caller":"clusterchannelprovisioner/reconcile.go:70","msg":"Could not find ClusterChannelProvisioner","eventing.knative.dev/clusterChannelProvisioner":"in-memory-channel","eventing.knative.dev/clusterChannelProvisionerComponent":"Controller","controller":"in-memory-channel-controller","request":"knative-eventing/in-memory-channel","error":"ClusterChannelProvisioner.eventing.knative.dev \"in-memory-channel\" not found"}

kafka controller logs:

{"level":"info","ts":"2018-11-29T19:06:41.687Z","logger":"provisioner","caller":"controller/reconcile.go:44","msg":"reconciling ClusterChannelProvisioner","request":"knative-eventing/kafka"}
{"level":"info","ts":"2018-11-29T19:06:41.687Z","logger":"provisioner","caller":"controller/reconcile.go:49","msg":"could not find ClusterChannelProvisioner","request":"knative-eventing/kafka"}

Steps to Reproduce the Problem

  1. Start the in-memory or kafka controllers.
  2. Manually update the dispatcher svc spec, e.g. kafka-dispatcher or in-memory-channel-dispatcher.
  3. Check the controller logs. The reconciliation will have failed.

Note: You can also notice this error when you just start the controller, as we receive two reconciliation requests. The first one is for the watched CCP object, and that succeeds. The second one is triggered because of the owned dispatcher svc, and that one fails.

Additional Info

The root cause is the behavior of controller-runtime's EnqueueRequestForOwner. The reconciliation request triggered by a change to the owned object includes the namespace of the owned object, i.e. the namespace of the dispatcher service. So we get "request":"knative-eventing/kafka" instead of "request":"/kafka". The same happens for the in-memory channel controller.
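For illustration, here is a minimal Go sketch of how the namespaced request arises and one possible workaround (clearing the namespace before the Get, since the CCP is cluster-scoped). The import paths, controller name, and reconciler shape are assumptions for this sketch based on the controller-runtime API of that era, not the actual eventing code:

```go
package provisioner

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"

	// Assumed import path for the CCP type; adjust to the real package.
	eventingv1alpha1 "github.com/knative/eventing/pkg/apis/eventing/v1alpha1"
)

type reconciler struct {
	client client.Client
}

func add(mgr manager.Manager, r reconcile.Reconciler) error {
	c, err := controller.New("kafka-provisioner-controller", mgr, controller.Options{Reconciler: r})
	if err != nil {
		return err
	}
	// Requests for the CCP itself arrive with an empty namespace, e.g. "/kafka".
	if err := c.Watch(&source.Kind{Type: &eventingv1alpha1.ClusterChannelProvisioner{}},
		&handler.EnqueueRequestForObject{}); err != nil {
		return err
	}
	// Requests enqueued for the owned dispatcher Service carry the Service's
	// namespace, e.g. "knative-eventing/kafka", even though the owner is cluster-scoped.
	return c.Watch(&source.Kind{Type: &corev1.Service{}},
		&handler.EnqueueRequestForOwner{
			OwnerType:    &eventingv1alpha1.ClusterChannelProvisioner{},
			IsController: true,
		})
}

func (r *reconciler) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	// Possible workaround until the controller-runtime bug is fixed:
	// drop the namespace before looking up the cluster-scoped CCP.
	request.NamespacedName.Namespace = ""
	ccp := &eventingv1alpha1.ClusterChannelProvisioner{}
	if err := r.client.Get(context.TODO(), request.NamespacedName, ccp); err != nil {
		if errors.IsNotFound(err) {
			return reconcile.Result{}, nil
		}
		return reconcile.Result{}, err
	}
	// ... reconcile the dispatcher service, etc.
	return reconcile.Result{}, nil
}
```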

grantr (Contributor) commented Nov 30, 2018

Thanks for debugging this @neosab! Seems like a bug in controller-runtime.

grantr (Contributor) commented Nov 30, 2018

Here's an issue describing the CR bug: kubernetes-sigs/controller-runtime#214

neosab (Contributor, Author) commented Nov 30, 2018

Ah, I did see that issue in controller-runtime but I somehow thought it was closed as a non-issue. Good to know.
