cmd/contour: allow Kube client QPS/burst configuration #5003
Conversation
Allows the Kubernetes client's QPS and burst to be configured for the contour serve command. Signed-off-by: Steve Kriss <krisss@vmware.com>
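For context, client-go exposes these settings as plain fields on `rest.Config`, so wiring flag values through is mostly a matter of setting them before any clients are built. Below is a minimal sketch of that pattern, not the actual Contour code; the flag names, defaults, and the `buildConfig` helper are placeholders for illustration.

```go
package main

import (
	"flag"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical flags for illustration; the real serve command defines
	// its own flag names and defaults.
	qps := flag.Float64("kubernetes-client-qps", 0, "QPS for the Kubernetes client (0 = client-go default)")
	burst := flag.Int("kubernetes-client-burst", 0, "Burst for the Kubernetes client (0 = client-go default)")
	kubeconfig := flag.String("kubeconfig", "", "Path to a kubeconfig file (empty = in-cluster config)")
	flag.Parse()

	cfg, err := buildConfig(*kubeconfig)
	if err != nil {
		log.Fatal(err)
	}

	// Apply the overrides before constructing any clients.
	if *qps > 0 {
		cfg.QPS = float32(*qps)
	}
	if *burst > 0 {
		cfg.Burst = *burst
	}
	log.Printf("kubernetes client: qps=%v burst=%v", cfg.QPS, cfg.Burst)

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = client // clients built from cfg now use the configured rate limits
}

// buildConfig loads an in-cluster config, or falls back to a kubeconfig file.
func buildConfig(kubeconfig string) (*rest.Config, error) {
	if kubeconfig != "" {
		return clientcmd.BuildConfigFromFlags("", kubeconfig)
	}
	return rest.InClusterConfig()
}
```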
Interesting, found https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/client/config#GetConfig -- looks like controller-runtime defaults these to 20/30 instead of the 5/10 that client-go uses (but we're not currently using the controller-runtime code path, so we're getting 5/10 as the defaults).
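A quick sketch of that difference in defaults (hedged; the exact values controller-runtime fills in depend on its version): when `rest.Config.QPS`/`Burst` are left at zero, client-go falls back to its exported `DefaultQPS`/`DefaultBurst` constants (5/10), while controller-runtime's `config.GetConfig` populates the fields itself, reportedly with 20/30 per the comment above.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	ctrlconfig "sigs.k8s.io/controller-runtime/pkg/client/config"
)

func main() {
	// client-go's fallback values when rest.Config.QPS/Burst are zero.
	fmt.Println("client-go defaults:", rest.DefaultQPS, rest.DefaultBurst) // 5 10

	// controller-runtime's GetConfig fills in its own defaults
	// (20/30 per the discussion above) when QPS/Burst are unset.
	// This requires a resolvable kubeconfig or in-cluster environment.
	if cfg, err := ctrlconfig.GetConfig(); err == nil {
		fmt.Println("controller-runtime defaults:", cfg.QPS, cfg.Burst)
	}
}
```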
Codecov Report

```
@@            Coverage Diff             @@
##             main    #5003      +/-   ##
==========================================
- Coverage   77.60%   77.46%   -0.15%
==========================================
  Files         138      138
  Lines       16837    16869      +32
==========================================
  Hits        13067    13067
- Misses       3516     3548      +32
  Partials      254      254
```
Quick ad-hoc test confirms that these flags are working and make a difference. I created 10 HTTPProxies at once and observed the timestamps of their updates (note there are 20 updates total because each one gets status.ingress and status.conditions set separately):
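For reference, client-go's client-side throttling is a token-bucket rate limiter, so the spacing of those update timestamps can be reproduced outside a cluster. The sketch below is a standalone illustration (not Contour code) that times 20 "requests" through the same `flowcontrol` limiter client-go uses, at the default 5 QPS / burst 10 versus a higher setting; with the defaults, the first 10 pass immediately and the rest are paced at 5 per second.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

// simulate times how long n "requests" take to pass through a token-bucket
// limiter with the given QPS and burst, mirroring client-go's client-side
// throttling behavior.
func simulate(qps float32, burst, n int) time.Duration {
	limiter := flowcontrol.NewTokenBucketRateLimiter(qps, burst)
	defer limiter.Stop()

	start := time.Now()
	for i := 0; i < n; i++ {
		limiter.Accept() // blocks until a token is available
	}
	return time.Since(start)
}

func main() {
	// 20 updates (10 HTTPProxies x 2 status writes each) at the client-go
	// defaults versus a higher QPS/burst.
	fmt.Println("5 QPS / burst 10:", simulate(5, 10, 20))
	fmt.Println("20 QPS / burst 30:", simulate(20, 30, 20))
}
```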
This would only be relevant for #5001 if there are lots of updates queued at once, but still likely a helpful knob to have for some clusters.
Given we kind of need to configure this via flag/ConfigMap to suit different use cases, I'm now on the fence about eventually moving to the ContourConfiguration CRD. It seems like a hassle to keep adding CLI flags, which we have to do here since this particular setting doesn't really work as a field on a resource we'd have to fetch from the API server. WDYT?
(Implementation looks good -- I like the logging that tells us what it's configured to, nice to see.)
Yeah, I was having similar thoughts as I went through this and realized where it needed to go. Definitely something for us to revisit and see if it still makes sense.
@sunjayBhatia are you OK with merging this now for inclusion in 1.24? Just wanted to check since we've already cut the RC. |
Sounds good to me; the default behavior should be the same, so it's lower risk.
Still doing some testing with this, but it might be nice to include in 1.24 to see if it helps with scale issues.