
KEDA might break existing deployment on cluster which already has another External Metrics Adapter installed #470

Open
zroubalik opened this issue Nov 18, 2019 · 55 comments
Assignees
Labels
enhancement (New feature or request), stale-bot-ignore (All issues that should not be automatically closed by our stale bot)

Comments

@zroubalik
Member

KEDA is using a metrics adapter based on the custom-metrics-apiserver library. As part of the deployment, the user needs to specify a cluster-wide APIService object named v1beta1.external.metrics.k8s.io; see the library example and the KEDA deployment.
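
For reference, a minimal sketch of such an APIService registration (the service name and namespace below are illustrative and match a typical KEDA install; exact values depend on the deployment):

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.external.metrics.k8s.io   # only one object with this name can exist per cluster
spec:
  group: external.metrics.k8s.io
  version: v1beta1
  service:
    name: keda-operator-metrics-apiserver  # assumed KEDA metrics adapter Service
    namespace: keda
    port: 443
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100

Because the object name encodes only the API group and version, a cluster can route external.metrics.k8s.io/v1beta1 to exactly one backing Service, which is why installing a second adapter either replaces this object or fails on the conflict.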

I wonder what would happen if a user has already deployed another metrics adapter (one using the same APIService-based approach) and we then try to install KEDA. It would probably replace the original APIService definition with KEDA's, so KEDA would work, but whatever was originally installed on the cluster probably would not. We should not break things, or we should at least make clear that this could happen.

We should investigate what the possibilities are and whether there is a better solution for dealing with the metrics. Or my assumptions are wrong, in which case please correct me.

@zroubalik zroubalik added the bug Something isn't working label Nov 18, 2019
@zroubalik
Member Author

This should probably have the 'needs-discussion' label; I am not able to assign it.

@zroubalik zroubalik changed the title KEDA might not work on cluster which already has another External Metrics Adapter installed KEDA might break stuff on cluster which already has another External Metrics Adapter installed Nov 18, 2019
@zroubalik zroubalik changed the title KEDA might break stuff on cluster which already has another External Metrics Adapter installed KEDA might break existing deployment on cluster which already has another External Metrics Adapter installed Nov 18, 2019
@jeffhollan
Member

@zroubalik I'll make sure you get label permissions :). Adding needs-discussion and help-wanted in case someone gets a chance to validate whether it is a bug.

@jeffhollan jeffhollan added help wanted Looking for support from community needs-discussion and removed bug Something isn't working labels Nov 18, 2019
@Aarthisk
Contributor

@jeffhollan I am pretty sure this is by design right now. I will take a look at what options are available to chain metric servers.

@zroubalik
Member Author

@Aarthisk I am planning to look at other options as well.

@markusthoemmes
Contributor

This is a known limitation of the custom/external metrics API. A possible solution is to come up with an aggregation API as per kubernetes-sigs/custom-metrics-apiserver#3. Knative's HPA support suffers from the same limitation.

@zroubalik
Member Author

@markusthoemmes thanks for the info

@tomkerkhove
Member

What's the status here? I'm not sure whether we can do anything about this.

@zroubalik
Member Author

We should keep it open until it gets resolved in https://github.com/kubernetes-sigs/custom-metrics-apiserver/

@v-yarotsky

Ran into this because the Datadog Helm chart also creates an APIService object with the same name, v1beta1.external.metrics.k8s.io.

@zroubalik zroubalik added this to Proposal in Roadmap Oct 26, 2020
@zroubalik zroubalik self-assigned this Oct 26, 2020
@zroubalik zroubalik added enhancement New feature or request and removed help wanted Looking for support from community labels Oct 26, 2020
@hinling-sonder

We ran into the same situation @v-yarotsky mentioned. We have Datadog installed with datadog-cluster-agent-metrics-api:

➜  ~ kubectl get apiservice | grep external.metrics                                                                                                              
v1beta1.external.metrics.k8s.io             default/datadog-cluster-agent-metrics-api

It does not overwrite or break the existing v1beta1.external.metrics.k8s.io APIService. It just won't install KEDA and complains:

Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: APIService "v1beta1.external.metrics.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "keda": current value is "datadog"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "keda": current value is "default"

@tomkerkhove
Member

That issue might be related to kedacore/charts#88

@zroubalik
Member Author

@tomkerkhove unfortunately it is not. Looking at that issue and the last error message in the screenshot, it is missing some Helm labels, so the validation is failing.

@hinling-sonder

I did naively try to remove the 24-metrics-apiservice.yaml template to work around the problem, but of course then you run into another problem: v1beta1.external.metrics.k8s.io routes traffic to default/datadog-cluster-agent-metrics-api instead of keda-operator-metrics-apiserver, and the HPA fails to retrieve the Redis metrics...

ScalingActive  False   FailedGetExternalMetric  the HPA was unable to compute the replica count: unable to get external metric hinling/redis-mailers/&LabelSelector{MatchLabels:map[string]string{scaledObjectName: redis-scaledobject,},MatchExpressions:[]LabelSelectorRequirement{},}: no metrics returned from external metrics API
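
For anyone debugging a similar conflict, a quick way to confirm which backend currently serves the external metrics API (a sketch; the jsonpath prints the namespace/name of the backing Service):

kubectl get apiservice v1beta1.external.metrics.k8s.io \
  -o jsonpath='{.spec.service.namespace}/{.spec.service.name}'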

I did see that you, @zroubalik, have proposed kubernetes-sigs/custom-metrics-apiserver#70. If you have a branch/fork with this fix, we are more than happy to try it out. We are also happy to help with the implementation.

@zroubalik
Member Author

@hinling-sonder it is still just a proposal; I should start working on this in the very near future. But to get it working, a change on the Datadog side will be needed as well.

@tomkerkhove tomkerkhove moved this from Proposal to Planned (Committed) in Roadmap Jan 12, 2021
@hbouissoumer

Hello all,

We are also experiencing the same issue here. We are using kube-state-metrics as a metrics provider for our scaling strategy, and they keep overriding the v1beta1.external.metrics.k8s.io APIService; it has to be either one or the other. I would be glad to help if I can!

@tomkerkhove
Member

We are definitely aware of this and that it's a pain point, sorry! We have a very smart guy on our team who will look into a POC for contributing this upstream.

@JorTurFer
Member

JorTurFer commented Feb 28, 2022

One option you could choose is to use KEDA with the Datadog scaler.

@alxgruU

alxgruU commented Feb 28, 2022

Trying to deploy KEDA on a k8s cluster with an existing Datadog deployment:

 helm install keda kedacore/keda --namespace keda

Error: rendered manifests contain a resource that already exists. Unable to continue with install: APIService "v1beta1.external.metrics.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "keda": current value is "datadog-agent"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "keda": current value is "default"

And the same error when trying to deploy Datadog on a cluster with an existing KEDA deployment:

rendered manifests contain a resource that already exists. Unable to continue with install: APIService "v1beta1.external.metrics.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "datadog-agent": current value is "keda"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "default": current value is "keda"

Is there a workaround for this?

@zroubalik
Member Author

zroubalik commented Feb 28, 2022

@alxgruU as mentioned above, you cannot have both KEDA and another service that uses the APIService "v1beta1.external.metrics.k8s.io" (in your case, Datadog).

@slv-306

slv-306 commented Mar 7, 2022

Team, I'm facing the error below. What is the workaround?
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: APIService "v1beta1.external.metrics.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "keda"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"

@JorTurFer
Member

Hi @slv-306,
When does that happen? Are you installing KEDA for the first time, and do you have another metrics server or something similar?

@slv-306

slv-306 commented Mar 7, 2022

We have another metrics server in place. Is there any workaround? By default we have custom-metrics/custom-metrics-stackdriver-adapter in place.

@JorTurFer
Member

If the metrics server that you already have uses v1beta1.external.metrics.k8s.io (and the error suggests that it does), there is no workaround, sorry.
The limitation is at the k8s level; we can't provide any solution from the KEDA side.

@slv-306

slv-306 commented Mar 8, 2022

Is there any way to push metrics to Stackdriver with KEDA?

@tomkerkhove
Member

No, unfortunately not.

@slv-306

slv-306 commented Mar 8, 2022

Can we use two triggers for autoscaling a single deployment? Like one trigger for CPU while the other is for RPS.

@JorTurFer
Member

Do you mean using KEDA?
Yes, with KEDA you can use all the triggers that you want under the same Scaled{Job|Object} for the same workload.
If you meant using your current system and also KEDA, no, that's not possible.
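
As an illustration, a minimal ScaledObject sketch combining a CPU trigger with a Prometheus-based RPS trigger; the target Deployment name, Prometheus address and query are placeholders:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaledobject        # hypothetical name
spec:
  scaleTargetRef:
    name: my-app                   # hypothetical Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: cpu
    metricType: Utilization
    metadata:
      value: "60"                  # target average CPU utilization in percent
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring.svc:9090   # placeholder endpoint
      query: sum(rate(http_requests_total{app="my-app"}[2m]))
      threshold: "100"             # target RPS per replica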

@tomkerkhove
Member

@slv-306 As @JorTurFer mentioned, this is supported and documented in our FAQ: https://keda.sh/docs/latest/faq/

May I ask you to create a GitHub Discussion for questions that are not related to this issue, please? This helps us keep the conversation more focused. Thank you!

@tomkerkhove
Member

FAQ update: kedacore/keda-docs#950

@devmanuelgonzalez

Hello guys, I managed to work around this by disabling the metricsProvider in Datadog. That way you can deploy both charts in your cluster. Keep in mind that the Datadog metricsProvider is there so Datadog can autoscale using custom metrics.

https://docs.datadoghq.com/containers/guide/cluster_agent_autoscaling_metrics/
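
A hedged sketch of that workaround (the value name follows the comments in this thread; exact chart values and flags may differ between chart versions):

helm upgrade --install datadog-agent datadog/datadog \
  --set clusterAgent.metricsProvider.enabled=false
helm install keda kedacore/keda --namespace keda --create-namespace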

@JorTurFer
Member

We are working on a proposal to fix this limitation directly in k8s, but we are still drafting the KEP. I hope it can be fixed during the next months; in the meantime, thanks for your workaround!

@yesjinu

yesjinu commented Dec 11, 2023

Hi guys! Just for those who still struggle with this issue (integrating Datadog with KEDA):

I finally managed to install Datadog in my GKE cluster with clusterAgent.metricsProvider.enabled=false.

You may not need the metricsProvider to be enabled, as it's only used for Datadog autoscaling.

@JorTurFer
Member

About this issue, we are working on a KEP to support multiple metrics servers natively in Kubernetes 😋
kubernetes/enhancements#4262

@zroubalik
Member Author

Hi guys! Just for those who still struggle with this issue (integrating Datadog with KEDA):

I finally managed to install Datadog in my GKE cluster with clusterAgent.metricsProvider.enabled=false.

You may not need the metricsProvider to be enabled, as it's only used for Datadog autoscaling.

This might be a good candidate for the documentation, in a Troubleshooting/FAQ section. @JorTurFer WDYT?

@JorTurFer
Member

JorTurFer commented Dec 11, 2023

It's already documented: https://keda.sh/docs/2.12/faq/#kubernetes


@zroubalik
Member Author

I meant the coexistence with Datadog as mentioned in the quoted #470 (comment)

@JorTurFer
Member

Ah, okay. It could be, but I'm not 100% sure whether we have written it yet.

@MaciekLeks

Any updates here? It has been almost 4 years. I encountered this issue (APIService already exists) in my GKE with Google-Managed Prometheus and custom-metric-stackdriver.

Error: Unable to continue with install: APIService "v1beta1.external.metrics.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "keda"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "keda"

@tomkerkhove
Member

We are still blocked by Kubernetes upstream
