external-dns v0.13.5 trying to create CNAME records after upgrading leading to crashloopbackoff #3714
Comments
Please share all args used to start external-dns and the resources that lead external-dns to create these records. We also need the Ingress status, as it contains the target, and we need to know whether there are two resources that want different targets and what kind of source you use.
Args used to start external-dns:
Some example resources:
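The exact args and manifests weren't preserved in this thread. For illustration only, a typical external-dns Deployment for an AWS setup with a TXT registry (all names and the owner id are hypothetical) might look something like:

```yaml
# Hypothetical example; the reporter's actual args were not preserved.
# These are standard external-dns flags for an AWS provider with a TXT registry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.13.5
          args:
            - --source=service
            - --source=ingress
            - --provider=aws
            - --registry=txt
            - --txt-owner-id=my-cluster   # hypothetical owner id
            - --policy=upsert-only
```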
We've seen it fail when trying to create records for both Ingress and Service type objects, without us making any changes other than upgrading the external-dns version.
I don't see
@johngmyers I see similar behavior. In my case, I create a KIND cluster with a Service that has an annotation (see the sketch below). But because external-dns did not have a chance to delete the entry it previously created, it goes into a CrashLoopBackOff state. If I delete the Service first, let external-dns delete the entry, and then destroy and recreate the cluster, it works as expected.
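The annotation name was cut off above; a minimal sketch of such a Service, assuming the standard external-dns hostname annotation (the hostname and names are hypothetical):

```yaml
# Hypothetical Service; the annotation is external-dns's standard hostname hint.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```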
@amold1 Please supply a reproducible test case, complete with server arguments, Kubernetes resources, any other initial conditions, actual behavior, and expected behavior.
@johngmyers I was also affected by this on v0.13.5; here are the steps to reproduce:
EKS: 1.23
The two TXT records it was trying to CREATE were exactly the same (I tested using a custom image with additional logic), so maybe there is some issue with the deduplication logic? By the way, I tried master (commit 92824f4) and that didn't result in this behavior.
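This is not the actual external-dns plan code, just a minimal Go sketch of the kind of deduplication the comment suspects is missing: collapsing CREATE changes with identical name, type, and targets before they reach the provider.

```go
// Illustrative only: collapse duplicate record changes by a canonical key.
package main

import (
	"fmt"
	"sort"
	"strings"
)

// change is a simplified stand-in for an external-dns endpoint change.
type change struct {
	DNSName    string
	RecordType string
	Targets    []string
}

// key builds a canonical identity so two CREATEs for the same record
// compare equal regardless of target ordering.
func (c change) key() string {
	targets := append([]string(nil), c.Targets...)
	sort.Strings(targets)
	return c.DNSName + "|" + c.RecordType + "|" + strings.Join(targets, ",")
}

// dedupe drops changes whose identity has already been seen, preserving order.
func dedupe(changes []change) []change {
	seen := make(map[string]bool)
	var out []change
	for _, c := range changes {
		if k := c.key(); !seen[k] {
			seen[k] = true
			out = append(out, c)
		}
	}
	return out
}

func main() {
	creates := []change{
		{DNSName: "a-app.example.com", RecordType: "TXT", Targets: []string{`"heritage=external-dns"`}},
		{DNSName: "a-app.example.com", RecordType: "TXT", Targets: []string{`"heritage=external-dns"`}},
	}
	fmt.Println(len(dedupe(creates))) // prints 1: the duplicate CREATE is dropped
}
```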
If this isn't reproducing on master, there's little reason to investigate.
Did a little more digging; it seems commit 1bd3834 fixed the issue for me.
Hey @johngmyers, sorry about that. Here's the Loki Ingress resource we're using (a representative sketch is included below).
In our case we're running multiple clusters with workloads provisioned via Argo CD, and we have seen the same error occur, but with different resources mentioned depending on what external-dns tries to reconcile first.
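The original manifest is not preserved in this thread; a representative Ingress of the shape described (hostname, class, and names are hypothetical; 3100 is Loki's default HTTP port) might look like:

```yaml
# Hypothetical stand-in for the Loki Ingress described above.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: loki
spec:
  ingressClassName: nginx        # hypothetical class
  rules:
    - host: loki.example.com     # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: loki
                port:
                  number: 3100
```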
Same issue with the Google provider, also on v0.13.5. Downgrading to v0.13.4 helped.
@joaocc That's not a CNAME record, as reported in the initial description. That's a TXT record and is expected behavior.
@johngmyers You are correct. I will remove my comment to avoid future confusion. Sorry for the misunderstanding. Thanks.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened: After upgrading external-dns from 0.13.4 to 0.13.5, it began trying to create CNAME records instead of A records as it had previously. The external-dns pod then went into CrashLoopBackOff due to a "Modification Conflict" error.
What you expected to happen: External-dns would continue to create A records after an upgrade and not crash.
How to reproduce it (as minimally and precisely as possible): Have multiple
Anything else we need to know?:
Environment:
- Kubernetes version: v1.26
- External-DNS version (use `external-dns --version`): 0.13.5

Logs: