Cache ListRecords result #178
In general 👍 But how does this differ from setting the `--interval` flag?
Setting
sgtm 👍
@ideahitme what would you use as default TTL for the cache? I think 1 hour is far too long as it would mean that "manually" changing/deleting records would only be "restored" after one hour: IMHO the system should strive for correctness, i.e. Kubernetes state should reflect real DNS state. I guess something in the range of minutes is good enough, e.g. we could reduce the default interval to 30s and have a cache TTL of 300s (5 minutes):
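As a rough illustration of those numbers, here is a small Go sketch (not external-dns code) that wires up the two values as flags; `--interval` mirrors an existing external-dns flag, while `--cache-ttl` is a hypothetical name for the proposed cache lease, assumed only for this example.

```go
// Hypothetical flag wiring for the values discussed above; only --interval
// mirrors a real external-dns flag, --cache-ttl is an assumed name.
package main

import (
	"flag"
	"fmt"
	"time"
)

func main() {
	interval := flag.Duration("interval", 30*time.Second, "how often the sync loop runs")
	cacheTTL := flag.Duration("cache-ttl", 5*time.Minute, "how long the ListRecords result is served from memory")
	flag.Parse()

	// With these defaults the provider would be listed roughly once per cache
	// TTL instead of once per interval, i.e. about 10x fewer list calls.
	fmt.Printf("sync every %s, refresh records at most every %s\n", *interval, *cacheTTL)
}
```

With a 30s interval and a 300s TTL, roughly nine out of every ten sync iterations would be served from memory.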
@hjacobs yes, 1 hour is just an example :D but even setting a TTL of 5min would give a huge win: minimising the number of potential clashes (after manual changes) and significantly reducing the number of AWS API requests.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
The feature was never implemented, so it was inappropriate for this issue to be closed. ExternalDNS still badly needs a cache; with a large number of records in a hosted zone, it will max out Route 53 rate limits every time the sync loop runs.
/reopen does that work @raxod502-plaid?
@tehlers320: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Sorry, would you mind clarifying what you're asking?
Currently each iteration of the synchronisation loop requires external-dns to fetch the list of all records from the DNS provider. In the general case this can be avoided by caching the records in memory. We can define a cache with a lease period, which will be refreshed in two scenarios:

- the lease expires
- a `create`/change API call fails

This should greatly help with reducing API rates, especially in cases where the `create` API rarely fails (failures never happen if the DNS provider is used solely by a single instance of external-dns). For a stable and moderately active cluster (with not many ingresses/services being created or modified), external-dns will be able to reduce its interaction with the DNS provider to a bare minimum: on average one request per lease period. This behaviour can be made pluggable via a command-line flag, e.g. `--enable-cache`.
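To make the proposal concrete, below is a minimal Go sketch of such a lease-based cache. It is illustrative only: the names `Record`, `recordLister`, `cachedLister` and `Invalidate` are assumptions for this sketch, not external-dns's actual provider types. The `Invalidate` hook corresponds to the second scenario above: a failed create/change drops the cached copy so the next iteration re-reads the real DNS state.

```go
// A minimal sketch of the proposed in-memory cache with a lease period.
// The types below (Record, recordLister, cachedLister) are illustrative and
// are not external-dns's actual provider interfaces.
package main

import (
	"fmt"
	"sync"
	"time"
)

// Record is a simplified stand-in for a DNS record returned by the provider.
type Record struct {
	Name, Type, Target string
}

// recordLister is the single provider capability the cache wraps: listing all
// records in the hosted zone (the expensive call this issue wants to avoid).
type recordLister interface {
	ListRecords() ([]Record, error)
}

// cachedLister serves ListRecords from memory until the lease expires or the
// cache is invalidated (e.g. after a failed create/update, per the proposal).
type cachedLister struct {
	mu        sync.Mutex
	upstream  recordLister
	lease     time.Duration
	fetchedAt time.Time
	records   []Record
}

func newCachedLister(upstream recordLister, lease time.Duration) *cachedLister {
	return &cachedLister{upstream: upstream, lease: lease}
}

// ListRecords returns the cached copy while the lease is valid, otherwise it
// refreshes from the upstream provider and restarts the lease.
func (c *cachedLister) ListRecords() ([]Record, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.records != nil && time.Since(c.fetchedAt) < c.lease {
		return c.records, nil
	}
	records, err := c.upstream.ListRecords()
	if err != nil {
		return nil, err
	}
	c.records = records
	c.fetchedAt = time.Now()
	return records, nil
}

// Invalidate drops the cached copy so the next ListRecords hits the provider;
// the proposal calls for this whenever a create/change call fails.
func (c *cachedLister) Invalidate() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.records = nil
}

// fakeProvider counts upstream calls to show how often the provider is hit.
type fakeProvider struct{ calls int }

func (f *fakeProvider) ListRecords() ([]Record, error) {
	f.calls++
	return []Record{{Name: "app.example.org", Type: "A", Target: "1.2.3.4"}}, nil
}

func main() {
	provider := &fakeProvider{}
	cache := newCachedLister(provider, 5*time.Minute)

	// Ten sync-loop iterations, but only the first one reaches the provider.
	for i := 0; i < 10; i++ {
		if _, err := cache.ListRecords(); err != nil {
			panic(err)
		}
	}
	fmt.Printf("upstream ListRecords calls: %d\n", provider.calls) // prints 1
}
```

Wiring this behind the proposed `--enable-cache` flag would simply decide whether the sync loop talks to the cached wrapper or directly to the provider.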