One invalid record in ChangeBatch stops all others from updating #1517
Comments
Also a similar problem: #731
I think as a workaround you could try batch size 1 or 2.
Thanks @szuecs - adding
It's not really meant to fix your issue.
Starting to work on this - https://kubernetes.slack.com/archives/C771MKDKQ/p1592295222475600
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Can someone confirm this is still an issue with the latest release (v0.7.3)?
/remove-lifecycle stale
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
I can confirm this is still an issue on 0.7.6.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules: after 90d of inactivity the issue is marked stale, after 30 more days it is marked rotten, and after another 30 days it is closed. You can mark this issue as fresh with /remove-lifecycle stale or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules: after 90d of inactivity the issue is marked stale, after 30 more days it is marked rotten, and after another 30 days it is closed. You can mark this issue as fresh with /remove-lifecycle stale or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
Do we have any updates on this issue?
I am working on it; I will fix it when I have time. 😀
This should have been resolved by the contribution @knackaron and I (then knackjeff) made back in 2021 in #2127, which was included in 0.9.0. Changing the batch size back to 1 reverts to the old-style behavior and simply submits the DNS change requests one at a time rather than by "number of bytes sent", as @szuecs noted above. We have tested this in production environments and it solved this issue for us.
@jegeland I don't think it's a great solution, but it works for us as well. I meant that reducing the batch size is not an optimal fix, because the user has to work out the maximum byte size herself and judge how big the average DNS record might be. So it's not great for the user, and it does not reduce API calls to the cloud provider. Having had several incidents caused by too many API calls to AWS in the past, I want to fix this properly when I have a bit more time to invest in coding and testing.
@jegeland how is that different from the workaround suggested in #1517 (comment)? I must stress that this is a workaround and does not resolve the issue.
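For readers landing here, the batch-size workaround discussed above boils down to submitting changes in small fixed-size chunks so that an invalid record can only fail its own chunk. Below is a minimal, illustrative sketch in Go using the AWS SDK for Go v1 types; the function name and structure are not external-dns internals.

```go
package batching

import "github.com/aws/aws-sdk-go/service/route53"

// chunkChanges splits a flat list of Route 53 changes into batches of at
// most batchSize entries. With batchSize == 1, every record gets its own
// ChangeBatch, so a single invalid record can only fail itself instead of
// the whole submission (at the cost of many more API calls).
func chunkChanges(changes []*route53.Change, batchSize int) [][]*route53.Change {
	if batchSize <= 0 {
		batchSize = 1
	}
	var batches [][]*route53.Change
	for start := 0; start < len(changes); start += batchSize {
		end := start + batchSize
		if end > len(changes) {
			end = len(changes)
		}
		batches = append(batches, changes[start:end])
	}
	return batches
}
```

Each chunk would then be submitted as its own ChangeResourceRecordSets call, which limits the blast radius of one bad record but, as noted above, multiplies the number of calls made to the provider.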
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules: after 90d of inactivity the issue is marked stale, after 30 more days it is marked rotten, and after another 30 days it is closed. You can mark this issue as fresh with /remove-lifecycle stale or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. You can mark this issue as fresh with /remove-lifecycle stale or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to its lifecycle rules. You can mark this issue as fresh with /remove-lifecycle rotten or close it with /close. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to its lifecycle rules. You can reopen this issue with /reopen or mark it as fresh with /remove-lifecycle rotten. Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@szuecs: Reopened this issue.
Isn't this resolved in later versions now? Failed records are lumped into their own batch and retried.
@jukie yeah, that's true, but not completely. IIRC it will split the batch into two chunks: one will be applied and the other will not, and in the next iteration it will do the same, so it fixes itself after some time. Maybe we can do better than that, or maybe it's fine.
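To make the behaviour described above concrete, here is a hedged sketch of the split-and-retry idea: try the whole batch, and on failure split it in half and retry each half until the invalid change is isolated and skipped. This illustrates the approach only; it is not the actual external-dns code, and submitWithSplit is a hypothetical helper assuming the AWS SDK for Go v1.

```go
package batching

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/route53"
)

// submitWithSplit tries the whole batch first. If Route 53 rejects it
// (e.g. InvalidChangeBatch because one record already exists), the batch
// is split in half and each half is retried, recursively, until the
// offending change is isolated and skipped. Valid changes are therefore
// still applied, at the cost of extra API calls on failure.
func submitWithSplit(svc *route53.Route53, zoneID string, changes []*route53.Change) {
	if len(changes) == 0 {
		return
	}
	_, err := svc.ChangeResourceRecordSets(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: aws.String(zoneID),
		ChangeBatch:  &route53.ChangeBatch{Changes: changes},
	})
	if err == nil {
		return
	}
	if len(changes) == 1 {
		// A single change failed on its own: skip it and move on.
		log.Printf("skipping invalid change for %s: %v",
			aws.StringValue(changes[0].ResourceRecordSet.Name), err)
		return
	}
	mid := len(changes) / 2
	submitWithSplit(svc, zoneID, changes[:mid])
	submitWithSplit(svc, zoneID, changes[mid:])
}
```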
@szuecs: Closing this issue.
What happened:
What you expected to happen:
Ignore the invalid record, process the others
The use case here is that the record demo.example.io is created outside of the K8s cluster, but the K8s ingress still needs to be able to handle traffic for this host, since the CNAME is set up to fail over between two K8s clusters.
In previous versions of external-dns (<= v0.5.17) everything worked, since it simply ignored any records that already exist. Now it batches changes and fails everything, even when only one of the records is "invalid".
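To illustrate why one pre-existing record breaks everything: Route 53 applies a ChangeBatch atomically, so a single rejected change discards every other change in the same call. A minimal sketch using the AWS SDK for Go v1; the hosted zone ID and record names below are placeholders, not values from this report.

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/route53"
)

func main() {
	svc := route53.New(session.Must(session.NewSession()))

	// One batch containing a CNAME that already exists in the zone (it was
	// created outside the cluster) plus a perfectly valid new record. The
	// CREATE of the existing record makes Route 53 return an InvalidChangeBatch
	// error, and the valid record in the same batch is rejected along with it.
	_, err := svc.ChangeResourceRecordSets(&route53.ChangeResourceRecordSetsInput{
		HostedZoneId: aws.String("HOSTED-ZONE-ID"), // placeholder
		ChangeBatch: &route53.ChangeBatch{
			Changes: []*route53.Change{
				{
					Action: aws.String(route53.ChangeActionCreate), // fails: record already exists
					ResourceRecordSet: &route53.ResourceRecordSet{
						Name: aws.String("demo.example.io."),
						Type: aws.String(route53.RRTypeCname),
						TTL:  aws.Int64(300),
						ResourceRecords: []*route53.ResourceRecord{
							{Value: aws.String("cluster-a-ingress.example.io.")},
						},
					},
				},
				{
					Action: aws.String(route53.ChangeActionCreate), // valid on its own, still rejected
					ResourceRecordSet: &route53.ResourceRecordSet{
						Name: aws.String("new-app.example.io."),
						Type: aws.String(route53.RRTypeCname),
						TTL:  aws.Int64(300),
						ResourceRecords: []*route53.ResourceRecord{
							{Value: aws.String("cluster-a-ingress.example.io.")},
						},
					},
				},
			},
		},
	})
	if err != nil {
		fmt.Println("whole batch rejected:", err)
	}
}
```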
Perhaps we need an "ignore" configuration option that would tell external-dns to continue past the failure of N records, instead of attempting bulk, atomic submissions?
Environment:
external-dns: v0.7.1
K8s v1.16.2