Service endpoint changes not updated in envoy #293
Don't worry about those warnings during startup. Envoy starts before Contour and tries to make a connection to Contour, which usually fails on the first attempt.
Updates projectcontour#293 This PR moves `cmd/contourcli` into the main `cmd/contour` binary so that it can be used via kubectl exec. Signed-off-by: Dave Cheney <dave@cheney.net>
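For context, with the CLI folded into the main binary it can be invoked through kubectl exec roughly like this (the namespace, pod name, container name, and subcommand are illustrative assumptions, not taken from this thread):

```sh
# Hypothetical invocation of the contour CLI from inside a running pod;
# namespace, pod name, container name, and the "cli eds" subcommand are assumptions.
kubectl -n heptio-contour exec -it contour-abc123 -c contour -- contour cli eds
```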
The original state: the kuard deployment has 3 pods (and 3 endpoints):
And /clusters shows:
Then, upon deleting the deployment, I have 0 pods but still 2 endpoints:
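For reference, a minimal way to reproduce the before/after comparison on the Kubernetes side (the kuard label and Service name are assumptions based on the description above):

```sh
# Hypothetical checks of what Kubernetes itself reports; compare these counts
# with what Envoy's /clusters output shows.
kubectl get pods -l app=kuard      # 3 pods before the delete, 0 after
kubectl get endpoints kuard        # endpoint addresses Kubernetes knows about
```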
I removed the ingress then created the svc and deployment again.
Thanks to Alexander Lukyanchenko (@Lookyan) we have increased the general gRPC limits on both the Envoy client and the Contour server, well above anything that should be an issue for the immediate future.

The symptoms of hitting the gRPC limits vary, but are basically "Envoy doesn't see changes in the API server until I restart Contour". The underlying cause is likely a large number of Service objects in your cluster (more than 100, possibly 200; the exact limit is not precisely known); these don't have to be associated with an Ingress. Currently Contour creates a CDS Cluster record for any Service object it learns about through the API, see #298. Each CDS record causes Envoy to open a new EDS stream, one per Cluster, which can blow through the default limits that Envoy, as the gRPC client, and Contour, as the gRPC server, have set.

One of the easiest ways to detect whether this issue is occurring in your cluster is to look for "cluster warming" lines without a matching "warming complete" message (a sketch of such a check follows below).

We believe we have addressed this issue in #291, and the fixes are available to test now. These changes are in master and available in the gcr.io/heptio-images/contour:master image for you to try. They have also been backported to the release-0.4 branch and are available in a short-lived image, gcr.io/heptio-images/contour:release-0.4 (it's not going to be deleted, but don't expect it to continue to be updated beyond the 0.4.1 release).
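A minimal sketch of those two checks, assuming Contour and Envoy run as containers in the same pods in the heptio-contour namespace with an app=contour label (all of those names are assumptions):

```sh
# Hypothetical checks; namespace, label selector, and container name are assumptions.

# 1. Count Service objects across the cluster -- several hundred makes hitting
#    the old default gRPC stream limits more likely.
kubectl get services --all-namespaces --no-headers | wc -l

# 2. Look for "cluster warming" lines that never get a matching
#    "warming complete" line in the Envoy container's logs.
kubectl -n heptio-contour logs -l app=contour -c envoy | grep -i warming
```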
The problem appears to have been fixed, at least in the limited testing I've done. Thanks @Lookyan & @davecheney!
Thanks for confirming.
…293) operator does not create envoy service when envoy is ClusterIPService type and gatewayClassRef. This patch fixes it. Signed-off-by: Kenjiro Nakayama <nakayamakenjiro@gmail.com>
I'm seeing some weird issues with Contour 0.4. It seems like Contour configures Envoy correctly on startup, but then fails to keep Envoy updated as resources change (service endpoints specifically). If I restart a contour pod it starts with the configuration I expect to see, and it routes requests correctly, that is, until endpoints change again. Here's what I see in the logs upon startup -- should I be concerned by the two "gRPC update" messages?
Contour is running in AWS, with NLB, and TLS. It was deployed w/ the ds-hostnet config w/ the changes below (for TLS):
Snippet from envoy's /clusters endpoint when routing is broken:
Snippet from envoy's /clusters endpoint when routing is working:
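For context, the /clusters snippets above can be pulled from Envoy's admin interface roughly like this (the namespace, label selector, and admin port 9001 are assumptions):

```sh
# Hypothetical way to dump Envoy's view of clusters and endpoints.
# Namespace, pod selector, and admin port are assumptions.
ENVOY_POD=$(kubectl -n heptio-contour get pods -l app=contour -o jsonpath='{.items[0].metadata.name}')
kubectl -n heptio-contour port-forward "$ENVOY_POD" 9001:9001 &
sleep 2   # give the port-forward a moment to establish
curl -s http://127.0.0.1:9001/clusters | grep kuard
```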