feat: randomise startup #2417
Conversation
Welcome @elblivion!
Force-pushed from 2776bbc to c7321f5
/assign @njuettner
What about providing documentation on how to do this with an init container? It would have the same effect and we wouldn't need to modify the code of the project.
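For reference, a minimal sketch of what such an init container could look like. This is an illustrative assumption, not the snippet that ended up in the docs: the delay range, the `bash:5` image, the container names, and the external-dns version and args are all placeholders.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      initContainers:
        # Hypothetical init container: sleep a random 0-59 seconds so that
        # external-dns instances in different clusters (same AWS account)
        # do not all hit the Route53 API at the same moment after a rollout.
        - name: startup-jitter
          image: bash:5
          command: ["bash", "-c", "sleep $((RANDOM % 60))"]
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.13.5
          args:
            - --source=service
            - --provider=aws
```

Because the delay only runs in the init container, it adds a one-off startup jitter without touching the normal reconciliation loop once the pod is running.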
Force-pushed from c7321f5 to 7eba400
Force-pushed from 7eba400 to f27ff74
Force-pushed from f27ff74 to 61e3948
Hi @Raffo! I updated the change so it adds to the docs instead.
/approve
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: elblivion, Raffo. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@elblivion what is the benefit of running multiple pods of external-dns?
@dudicoco this is not for multiple instances in the same Deployment, but rather for multiple clusters in the same AWS account. I see now this isn't clearly explained in the update. :-/
@elblivion what is the advantage of running multiple pods in one deployment? I don't think that is best practice per my previous comment.
I'm a colleague of @elblivion and I'll try to explain it in my own words 😄: We are running multiple k8s clusters in the same AWS account, but only one pod per k8s cluster. Nevertheless, operations like an external-dns version update will roll all of those pods at the same time, causing us to hit the rate limit, so this change mitigates that issue. Does that make sense?
Description
We run multiple Kubernetes clusters in our production AWS account; when a configuration change is applied to external-dns in all of them at the same time, we see a lot of throttling from the AWS Route53 API.
This change mitigates the issue by introducing a randomised wait on startup. It mitigates rather than solves the underlying AWS Route53 rate limits, but it should alleviate the pain when running multiple external-dns instances in the same account, as we do.
Fixes #ISSUE
Checklist