How do different PodAutoscalers configure RPS in autoscaling? #5975
Comments
RPS is not subject to the `containerConcurrency` setting; its target comes from the config defaults instead, which is the difference between the two branches in the snippet quoted below. Does that answer your questions?
Yeah, as @markusthoemmes said. However, I have some questions: are we going to support multiple metrics for the KPA? How do we support autoscaling on custom metrics? We may have to change the knobs.
No, maybe I didn't explain it well. I mean that the current values (total and tu) for RPS come from the config defaults (`RPSTargetDefault` and `TargetUtilization`), not from anything set on the revision itself.
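For reference, those defaults are read from Knative's `config-autoscaler` ConfigMap. A minimal sketch (the key names are my understanding of that ConfigMap, the values are illustrative; verify both against your Knative version):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-autoscaler
  namespace: knative-serving
data:
  # Cluster-wide default RPS target per replica (feeds RPSTargetDefault).
  requests-per-second-target-default: "200"
  # Target utilization as a percentage (feeds the target-utilization fraction).
  container-concurrency-target-percentage: "70"
```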
That is another question for me. In a real production environment, supporting multiple metrics for the KPA would be more reasonable.
You can use annotations on the revision to override these values; the override happens when the autoscaler resolves the target for the PodAutoscaler.
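A minimal sketch of such an override, assuming the `autoscaling.knative.dev/metric` and `autoscaling.knative.dev/target` annotation keys from Knative's autoscaling API (the exact annotation list was not quoted in this thread; check the keys for your release):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: example
spec:
  template:
    metadata:
      annotations:
        # Assumed annotation keys; verify against your Knative version.
        autoscaling.knative.dev/metric: "rps"   # scale on requests-per-second
        autoscaling.knative.dev/target: "150"   # per-replica RPS target for this revision
    spec:
      containers:
        - image: gcr.io/example/app  # hypothetical image
```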
I understand this, but …
In what area(s)?
#5141
```go
switch pa.Metric() {
case autoscaling.RPS:
	total = config.RPSTargetDefault
	tu = config.TargetUtilization
default:
	// Concurrency is used by default.
	total = float64(pa.Spec.ContainerConcurrency)
	// If containerConcurrency is 0 we'll always target the default.
	if total == 0 {
		total = config.ContainerConcurrencyTargetDefault
	}
	tu = config.ContainerConcurrencyTargetFraction
}
```
Ask your question here:
Why is RPS handled differently than concurrency? As it stands, with different PodAutoscalers you cannot set RPS separately for each one.