Do not create clusters for unrelated services #298
Comments
I can work on this issue.
Cool. I haven't thought about how it would work, but there would need to be some connection between the onadd/update/delete handlers for Ingress and Service. FWIW I don't think this is super urgent; the issues with connection limits are best solved by adjusting the bootstrap config on the `xds_cluster` entry.
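A rough sketch of the kind of linkage being described, purely illustrative (the type and method names below are hypothetical, not Contour's actual code): every Ingress event has to revisit which Services still deserve clusters.

```go
package sketch

// Hypothetical sketch, not Contour's real code: each Ingress
// add/update/delete re-derives the set of Services needing clusters.

import v1beta1 "k8s.io/api/extensions/v1beta1"

type clusterCache struct {
	ingresses map[string]*v1beta1.Ingress // keyed by namespace/name
}

// OnDeleteIngress forgets the Ingress and re-derives the Service set;
// clusters whose Service is no longer referenced would drop out of CDS.
func (c *clusterCache) OnDeleteIngress(ing *v1beta1.Ingress) {
	delete(c.ingresses, ing.Namespace+"/"+ing.Name)
	c.rebuildClusters()
}

func (c *clusterCache) rebuildClusters() {
	// walk c.ingresses, collect referenced Services, emit CDS entries
}
```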
I think we need to use our local cache slices to determine whether we need endpoints and clusters in Envoy. So we need to store all data about Endpoints and Ingresses in Contour's memory (which it already does). Otherwise we can hit a race condition where we receive a new Ingress but don't yet have any clusters or endpoints for it, or we would have to query the Kubernetes API to retrieve the needed information (in addition to the watchers). @stevesloka, if you don't have immediate plans to implement this feature, I can try to implement it. Let me know if you're already working on it.
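A minimal sketch of the in-memory view described above, using made-up names rather than Contour's real types: both caches live behind one lock, so neither watcher can observe a half-updated pairing of Ingress and Endpoints.

```go
package sketch

// Illustrative only: keep Ingress and Endpoints state in Contour's
// memory, guarded by a single mutex, so an Ingress arriving before its
// Endpoints (or vice versa) is resolved consistently, with no race.

import (
	"sync"

	v1 "k8s.io/api/core/v1"
	v1beta1 "k8s.io/api/extensions/v1beta1"
)

type localCache struct {
	mu        sync.Mutex
	ingresses map[string]*v1beta1.Ingress
	endpoints map[string]*v1.Endpoints
}

func (c *localCache) OnAddEndpoints(ep *v1.Endpoints) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.endpoints[ep.Namespace+"/"+ep.Name] = ep
	// both maps are consulted under the same lock, so whichever of the
	// pair arrives second always sees the other already recorded.
}
```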
@Lookyan I did start a little bit but don't have it fully implemented. I was starting to wire in the SharedInformer, since it already maintains the local cache, lets us make better use of the update methods, and doesn't require us to maintain a second cache. The secondary goal of this issue is to implement health checks, since they are implemented via the cluster (CDS) definition.
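For reference, wiring up a SharedInformer with client-go looks roughly like this; the client-go calls are the standard ones, but the function name is made up and the handler bodies are placeholders.

```go
package sketch

// Sketch of a client-go SharedInformer: the informer factory maintains
// the local cache itself, so no second cache needs to be kept by hand.

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func watch(client kubernetes.Interface, stop <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(client, 30*time.Minute)
	factory.Extensions().V1beta1().Ingresses().Informer().AddEventHandler(
		cache.ResourceEventHandlerFuncs{
			AddFunc:    func(obj interface{}) { /* recompute clusters */ },
			UpdateFunc: func(old, cur interface{}) { /* recompute clusters */ },
			DeleteFunc: func(obj interface{}) { /* recompute clusters */ },
		})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
}
```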
I appreciate your enthusiasm, but please let's wait until #291 is fixed. My suspicion is the next blocker will be EDS filtering to the specific resource being watched (there is an issue for this, but I'm on my phone).
#291 is now addressed, but 0.5.0 is shipping in two weeks (and one of those weeks I will be on leave), so I am moving this to 0.6.
Currently Contour creates CDS entries for any `Service` document visible via the API. This is a problem because each CDS entry causes Envoy to open a new EDS gRPC stream. This inflates the number of connections to the `xds_cluster` and inflames issues like #291.

We should filter Services to only those that are directly referenced by an active Ingress.
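A sketch of what that filter could look like, using the extensions/v1beta1 Ingress types current when this issue was filed; the helper name is hypothetical, not a proposed API.

```go
package sketch

// Collect every Service referenced by an Ingress backend, so CDS
// entries can be emitted only for that set.

import v1beta1 "k8s.io/api/extensions/v1beta1"

// referencedServices returns the namespace/name keys of every Service
// an active Ingress points at, via its default backend or any rule path.
func referencedServices(ingresses []*v1beta1.Ingress) map[string]bool {
	refs := make(map[string]bool)
	for _, ing := range ingresses {
		if b := ing.Spec.Backend; b != nil {
			refs[ing.Namespace+"/"+b.ServiceName] = true
		}
		for _, rule := range ing.Spec.Rules {
			if rule.HTTP == nil {
				continue
			}
			for _, path := range rule.HTTP.Paths {
				refs[ing.Namespace+"/"+path.Backend.ServiceName] = true
			}
		}
	}
	return refs
}
```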