keda-metrics-adapter QPS to the apiserver is quite high #2914
Comments
Yeah, this is something that needs improvement, @bamboo12366 it would be nice if you could try to tackle this. FYI, there is already a controller present in the MetricsAdapter (currently used for metricNames), so you might want to go in this direction: https://github.com/kedacore/keda/blob/main/controllers/keda/metrics_adapter_controller.go
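For illustration, a minimal sketch of that direction: a reconciler registered in the MetricsAdapter that mirrors ScaledObjects into an in-memory map, so metric requests never have to hit the apiserver. The type and method names (ScaledObjectCache, Lookup) are assumptions for the sketch, not KEDA's actual API.

```go
package metricscache

import (
	"context"
	"sync"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	kedav1alpha1 "github.com/kedacore/keda/v2/apis/keda/v1alpha1"
)

// ScaledObjectCache reconciles ScaledObjects into an in-memory map.
type ScaledObjectCache struct {
	client.Client

	mu    sync.RWMutex
	items map[types.NamespacedName]*kedav1alpha1.ScaledObject
}

// Reconcile keeps the local map in sync with the cluster.
func (c *ScaledObjectCache) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	so := &kedav1alpha1.ScaledObject{}
	err := c.Get(ctx, req.NamespacedName, so)

	c.mu.Lock()
	defer c.mu.Unlock()
	switch {
	case apierrors.IsNotFound(err):
		delete(c.items, req.NamespacedName) // ScaledObject was deleted
	case err != nil:
		return ctrl.Result{}, err
	default:
		c.items[req.NamespacedName] = so.DeepCopy()
	}
	return ctrl.Result{}, nil
}

// Lookup is what the provider would call instead of listing from the apiserver.
func (c *ScaledObjectCache) Lookup(key types.NamespacedName) (*kedav1alpha1.ScaledObject, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	so, ok := c.items[key]
	return so, ok
}

// SetupWithManager wires the reconciler into the MetricsAdapter's manager.
func (c *ScaledObjectCache) SetupWithManager(mgr ctrl.Manager) error {
	c.Client = mgr.GetClient()
	if c.items == nil {
		c.items = map[types.NamespacedName]*kedav1alpha1.ScaledObject{}
	}
	return ctrl.NewControllerManagedBy(mgr).
		For(&kedav1alpha1.ScaledObject{}).
		Complete(c)
}
```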
There might be an even easier solution: replace the following section (lines 84 to 90 in 99619e3)
with ...
scaledObject := &kedav1alpha1.ScaledObject{}
p.client.Get(ctx, namespaceNameParsedFromMetricSelector, scaledObject)
... we should be able to parse the ScaledObject name from a label. It's definitely worth exploring.
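A rough sketch of what that replacement could look like, assuming the metric selector carries a label such as scaledobject.keda.sh/name that identifies the ScaledObject (both the label key and the getScaledObject helper are illustrative assumptions, not the actual provider code):

```go
package provider

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"

	kedav1alpha1 "github.com/kedacore/keda/v2/apis/keda/v1alpha1"
)

const scaledObjectNameLabel = "scaledobject.keda.sh/name" // assumed label key

// getScaledObject fetches a single ScaledObject by name instead of listing all of them.
func getScaledObject(ctx context.Context, c client.Client, namespace string, selector labels.Selector) (*kedav1alpha1.ScaledObject, error) {
	// Parse the ScaledObject name out of the metric label selector.
	var name string
	requirements, _ := selector.Requirements()
	for _, r := range requirements {
		if r.Key() == scaledObjectNameLabel {
			if values := r.Values(); values.Len() > 0 {
				name = values.List()[0]
			}
		}
	}
	if name == "" {
		return nil, fmt.Errorf("selector %s does not contain label %s", selector.String(), scaledObjectNameLabel)
	}

	// A single GET instead of a LIST on every metrics request.
	scaledObject := &kedav1alpha1.ScaledObject{}
	if err := c.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, scaledObject); err != nil {
		return nil, err
	}
	return scaledObject, nil
}
```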
hey @zroubalik can I just use manager.GetClient() and pass that client to the provider, so that the provider and the ScaledObject controller share the same informer cache? The code would look roughly like this:
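(A minimal sketch of that idea, assuming a KedaProvider struct that simply holds a controller-runtime client; the names here are placeholders rather than the actual adapter code.)

```go
package main

import (
	"os"

	"k8s.io/apimachinery/pkg/runtime"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	kedav1alpha1 "github.com/kedacore/keda/v2/apis/keda/v1alpha1"
)

// KedaProvider stands in for the external metrics provider; it only needs a
// client.Client, and the cache-backed one from the manager satisfies that.
type KedaProvider struct {
	client client.Client
}

func main() {
	logger := ctrl.Log.WithName("keda-metrics-adapter")

	scheme := runtime.NewScheme()
	utilruntime.Must(kedav1alpha1.AddToScheme(scheme))

	// The manager builds a shared informer cache and a client that reads from it.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{Scheme: scheme})
	if err != nil {
		logger.Error(err, "unable to create manager")
		os.Exit(1)
	}

	// Hand the cache-backed client to the provider: its Get/List calls are then
	// served from memory that watches keep up to date, not from the apiserver.
	provider := &KedaProvider{client: mgr.GetClient()}
	_ = provider // the real adapter would register this with the custom-metrics API server

	// Starting the manager starts the informers behind the cache.
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		logger.Error(err, "unable to start manager")
		os.Exit(1)
	}
}
```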
The ScaledObject controller runs in the KEDA Operator pod; the Provider runs in the Metrics Server pod.
The client for the Metrics Server (and thus the provider) is set up here: https://github.com/kedacore/keda/blob/main/adapter/main.go. I'm happy to see any improvements on this side.
Yeah, what troubles me is the Metrics Server part.
@zroubalik could you help me by reviewing #2922?
Report
We create around 200 ScaledObjects in the cluster, and the QPS from keda-metrics-adapter to the apiserver is quite high, which puts a burden on the apiserver.
The code that does the listing is here: https://github.com/kedacore/keda/blob/main/pkg/provider/provider.go#L90
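For context, the problematic pattern is roughly the following (a paraphrase of the linked code, not a literal copy): every external-metrics request triggers a fresh List against the apiserver.

```go
package provider

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"

	kedav1alpha1 "github.com/kedacore/keda/v2/apis/keda/v1alpha1"
)

// listScaledObjects approximates what the provider does today on every metrics
// request: a List call that goes straight to the apiserver. With ~200
// ScaledObjects, each polled by its HPA (roughly every 15s by default), the
// calls add up quickly.
func listScaledObjects(ctx context.Context, c client.Client, namespace string, metricLabels map[string]string) (*kedav1alpha1.ScaledObjectList, error) {
	scaledObjects := &kedav1alpha1.ScaledObjectList{}
	err := c.List(ctx, scaledObjects,
		client.InNamespace(namespace),
		client.MatchingLabels(metricLabels))
	return scaledObjects, err
}
```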
Expected Behavior
The apiserver should not be called so frequently; the ScaledObjects should be kept up to date by a reconciler that watches them continuously.
Actual Behavior
The apiserver is called on every metrics request in order to fetch the ScaledObject resource.
Steps to Reproduce the Problem
Create a large number of ScaledObjects.
Logs from KEDA operator
No response
KEDA Version
2.6.1
Kubernetes Version
1.20
Platform
Other
Scaler Details
No response
Anything else?
If you are OK with either approach, I can give it a try.