When multiple pods are selected: Requests are being client-side throttled #44
@dgzlopes Thanks for opening this issue. Could you please elaborate on this comment:
> I was reading https://kubernetes.io/docs/concepts/cluster-administration/flow-control/ and I'm wondering if, instead of relying on the throttling that the client provides, we should make the frequency of requests slower when there are many of them. If someone has increased the limits of their API Server, or is running an earlier Kubernetes version, this behavior could be dangerous.
That is interesting. The extension makes relatively few requests: one to check if the pod already has the ephemeral container, one to patch the pod if it doesn't, and then it waits until the container is ready. I suspect the throttling comes from that last check, which uses a watch; maybe that call to the API Server can be optimized.
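For context, that kind of readiness check can be done with a single watch on the target pod rather than repeated polling, which keeps the request count low. The following is only a rough sketch using client-go, not the extension's actual code, and the function name waitForEphemeralContainer is hypothetical:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForEphemeralContainer blocks until the named ephemeral container in the
// given pod reports a Running state, or until the timeout expires. A single
// watch is opened per pod, so only one extra API request is made.
func waitForEphemeralContainer(ctx context.Context, client kubernetes.Interface, namespace, pod, container string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	watcher, err := client.CoreV1().Pods(namespace).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + pod,
	})
	if err != nil {
		return err
	}
	defer watcher.Stop()

	// Each event carries the full pod object; check the ephemeral container
	// statuses until the one we injected reports Running.
	for event := range watcher.ResultChan() {
		p, ok := event.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, status := range p.Status.EphemeralContainerStatuses {
			if status.Name == container && status.State.Running != nil {
				return nil
			}
		}
	}
	return fmt.Errorf("watch ended before ephemeral container %q in pod %q was running", container, pod)
}
```

Even with a watch, fifteen target pods still mean fifteen near-simultaneous requests, which can be enough to trip client-go's default client-side rate limiter.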
I haven't looked closely at the message yet. It points to a
Regarding this issue, it is important to note that it may affect the time it takes to inject all the agents into the targets, but in most cases it is inconsequential. There are therefore several alternatives:
Those actions are not mutually exclusive, and none of them guarantees that the problem will not eventually arise.
Following the discussion in the Kubernetes community around this issue, it seems that client-side throttling is no longer needed, as the server-side "Priority and Fairness" mechanism is enabled by default since
Considering that the throttling happens when the agents are injected, we consider that a requests-per-second limit of
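For reference, client-go's client-side rate limiter is configured through the QPS and Burst fields of rest.Config (the defaults are 5 and 10 respectively). Below is a sketch of how a higher limit, or no client-side limit at all, could be set; the specific values and the helper name newClientWithRelaxedThrottling are illustrative assumptions, not the values adopted for this issue:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClientWithRelaxedThrottling builds a clientset whose client-side rate
// limiter is raised above client-go's defaults (QPS=5, Burst=10).
func newClientWithRelaxedThrottling(config *rest.Config) (*kubernetes.Clientset, error) {
	config.QPS = 50    // sustained requests per second allowed by the limiter
	config.Burst = 100 // short bursts above the sustained rate
	// Alternatively, a negative QPS disables client-side throttling entirely,
	// deferring admission control to the API server's Priority and Fairness:
	// config.QPS = -1
	return kubernetes.NewForConfig(config)
}
```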
Fixed by #55
When trying to instantiate a PodDisruptor with a selector that matches 15 pods, I get the following messages from time to time:
I wonder if we could be more gentle with our request pattern. Also, I wonder if this could be a problem in huge namespaces.