Helm lookup Function Support #5202
Note that even if we allowed configuring Argo CD to append the --validate flag, the repo-server would still need access to the cluster. Since you would anyway need a customized repo-server, you can already accomplish this today using a wrapper script around the helm binary. |
@jessesuen, I guess this workaround is possible only with the in-cluster configuration, and won't work for external clusters. |
Ah yes, you are right about that unfortunately. |
@jessesuen just coming across this issue and I'm running into the same. There is a duplicate issue here as well: #3640. You mentioned two things to accomplish as a workaround for Argo not supporting this:
Can you expand more on the wrapper script? How would one inject that into a standard Argo deployment? |
Hi @jessesuen, @Gowiem, any updates on this? Thanks in advance, Dave |
@dvcanton I tried the Argo plugin / wrapper-script approach that @jessesuen mentioned, after asking about it directly in the Argo Slack. You can find more about that by looking at the plugins documentation. Unfortunately, that solution seemed overly hacky and pretty esoteric to me and my team. Instead we've now moved towards not using lookup. |
A lot of charts use built-in objects such as Capabilities to provide backward compatibility for old APIs. Capabilities.APIVersions works properly only with the --validate flag, because without that flag it returns only API versions, without the available resources. |
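For reference, the chart idiom being described looks roughly like this (a simplified sketch of how charts such as grafana pick the Ingress apiVersion; not quoted from any specific chart):

```yaml
{{- /* Choose the Ingress apiVersion based on what the cluster advertises. */}}
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
```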
As for Capabilities, |
@kvaps take a look for an example which I posted. |
@randrusiak, it works for me:

```diff
# helm template . --set ingress.enabled=true --include-crds > /tmp/1.yaml
# helm template . --api-versions networking.k8s.io/v1/Ingress --set ingress.enabled=true --include-crds > /tmp/2.yaml
# diff -u /tmp/1.yaml /tmp/2.yaml
@@ -399,7 +399,7 @@
       emptyDir: {}
 ---
 # Source: grafana/templates/ingress.yaml
-apiVersion: extensions/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: RELEASE-NAME-grafana
@@ -417,9 +417,12 @@
         paths:
           - path: /
+            pathType: Prefix
             backend:
-              serviceName: RELEASE-NAME-grafana
-              servicePort: 80
+              service:
+                name: RELEASE-NAME-grafana
+                port:
+                  number: 80
```

My idea was that ArgoCD could provide the repo-server with the list of api-versions from the destination Kubernetes API; e.g.

```sh
kubectl api-versions
```

will return all available api-versions for the cluster. I'm not sure if lookup function support can be implemented with the same simplicity, as it already requires direct access to the cluster. |
@kvaps I understand how it works at the Helm level, but I don't know how to pass this additional flag. |
Actually, my note was intended more for contributors than for users :) At the current stage, I think there is nothing you can do. The only workaround for you is to add a serviceAccount to your repo-server and use the --validate flag. Another option for you is to hardcode those parameters somewhere, e.g. save the output of the following command:

```sh
kubectl api-versions
```
And pass it to Helm in any way suitable for you; e.g. you can still use a wrapper script for the helm binary:

```sh
$ cat /usr/local/bin/helm
#!/bin/sh
exec /usr/local/bin/helm.bin $HELM_EXTRA_ARGS "$@"
```

where /usr/local/bin/helm.bin is the original helm binary (renamed) and HELM_EXTRA_ARGS is an environment variable carrying the extra flags, or use a custom plugin. |
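As an illustration of how HELM_EXTRA_ARGS could be populated, here is a minimal sketch, assuming the repo-server's service account is allowed to call the discovery API (the glue below is hypothetical, not part of Argo CD):

```sh
# Turn the cluster's advertised API versions into repeated
# --api-versions flags for `helm template`.
HELM_EXTRA_ARGS=$(kubectl api-versions | while read -r v; do
  printf ' --api-versions %s' "$v"
done)
export HELM_EXTRA_ARGS
```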
We also ran into the issue with the non-working lookup function. The background is that we want to make sure that for a certain service, a secure random password is generated instead of having a hardcoded default. If desired, the user can explicitly set their own password, but most people don't. Our goal is to keep the Helm chart usage as simple as possible and to require as few parameters as possible for simple installations. So I would like to keep "generate a secure (and stable) random password" as the default for "pure" Helm usage. |
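The "stable random password" pattern described above is commonly implemented with lookup roughly like this (a minimal sketch; the Secret name and key are hypothetical):

```yaml
{{- /* Reuse the existing password if the Secret already exists;
       otherwise generate a fresh random one. */}}
{{- $existing := lookup "v1" "Secret" .Release.Namespace "my-service-auth" }}
apiVersion: v1
kind: Secret
metadata:
  name: my-service-auth
data:
  {{- if $existing }}
  password: {{ index $existing.data "password" }}
  {{- else }}
  password: {{ randAlphaNum 32 | b64enc }}
  {{- end }}
```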
Any update on this issue? |
Ran into this same problem today :( more context in https://cloud-native.slack.com/archives/C01TSERG0KZ/p1635024460105000 |
Same thing happens with aws-load-balancer-controller's mutating webhook that defines the TLS key, cert, and CA: |
Ran into this today :( Do you have any timeline for when lookup support will be available? |
It looks like you have overlooked my question, so I wanted to ask again: is there any plan to get this in, and when? |
I ran into the same issue, but maybe the lack of this function is a good reason to move to Flux v2, which already supports it. |
FYI, this project provides a decent workaround https://github.com/kuuji/helm-external-val |
I was trying to apply this, but whenever I use the lookup function in Helm I get an error:
but when I remove the lookup function and put a static value, all works as expected. My secret.yaml file:
Helm version on argo-repo-server: |
I resolved my issue by using a new service account (instead of using "default"). |
This workaround works only within the cluster where ArgoCD is deployed, right? If you use multiple clusters connected to ArgoCD and ArgoCD is in another cluster, the lookup function won't work, as it has no permission to do the lookup in the other clusters, right? |
Hey, we are struggling with the same issue and have stumbled over it multiple times already. In our case it is mostly about handling auto-generated credentials/keys, which works fine with the Helm lookup feature. Is there any plan to give the user the option to choose --dry-run=server as the diff/sync "model"? |
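For context, Helm itself gained a server-side dry run in v3.13, which is what lets lookup reach the live cluster (shown below for plain Helm; it is not wired into Argo CD):

```sh
# A server-side dry run renders templates against the real cluster,
# so `lookup` returns live objects instead of nil (Helm >= 3.13).
helm install my-release ./chart --dry-run=server
```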
There's a challenge here from a GitOps perspective. This is a dynamic dependency, so we may need to update and sync the application even if the Helm charts and values didn't change. Would it be possible to somehow subscribe to a stream of events, or to have the corresponding ConfigMap be part of the same application (so that an update would trigger refresh + sync)? |
I am not sure if this would help somehow. That's why we are using Helm lookup right now in multiple places, which works fine without ArgoCD :/ |
Hi, I'm interested in addressing the root cause of the issue.

Regarding Resource Update Tracking

I don't believe we need to track updates to looked-up resources, since the lookup function is only evaluated when manifests are rendered. As mentioned in this issue, Helm itself doesn't reload resources after deployment, and the Helm Operator doesn't either, so Helm charts relying on the lookup function already have to live with that behavior. Apologies for the lengthy explanation; I just wanted to clarify my point.

API Design

For the implementation, I propose adding a serverDryRun field under spec.source.helm:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sealed-secrets
  namespace: argocd
spec:
  project: default
  source:
    chart: sealed-secrets
    repoURL: https://bitnami-labs.github.io/sealed-secrets
    targetRevision: 1.16.1
    helm:
      releaseName: sealed-secrets
      serverDryRun: false # Set this to "true" to enable the lookup function
  destination:
    server: "https://kubernetes.default.svc"
    namespace: kubeseal
```

The name serverDryRun mirrors the server-side dry run that enables lookup in Helm.

Considered alternative field names:
If these ideas look good, I want to work on the actual implementation as well. |
I wonder who is responsible for approving this design. |
@logica0419, please join the contributors meeting https://docs.google.com/document/d/1xkoFkVviB70YBzSEa4bDnu-rUZ1sIFtwKKG1Uw8XsY8/edit?tab=t.0. That's the best place to get people's attention and discuss designs. |
The root cause of this is that Helm is growing into an orchestration tool. Any Helm-based orchestration should be used as a last resort; see the ArgoCD challenges with Helm hooks, for example. Adding lookup functionality to Helm was a mistake, IMO. I would much rather solve the problem inside of Kubernetes or by extending Kubernetes. I am not sure ArgoCD aims to have parity with all of Helm's functionality. |
Agreed. But to be fair, ArgoCD is the thing that's inside the cluster. So there's no reason to not support these things. |
The next challenge to be tackled is "lookup as who?" When you run helm template locally, lookup uses your local kubeconfig, so you can only look up what you're allowed to see. But when Argo CD runs helm template, it's not obvious whose credentials the lookup should use. If you're using the new (alpha) service account impersonation feature, it's possible that the answer is "lookup as the configured service account." This gives the Argo CD admin the option of limiting what the lookup can access. Once we answer "who" we have to answer "how." In order for the lookup to work, the repo-server needs credentials for the destination cluster, which it doesn't have today. So I think this is a rare case of "the API is the easy part." The rest of the design is hard. |
I'll take the opportunity to express my opinion again that separation of concerns in secret management is good. In my opinion, Helm charts should reference secrets rather than inject secrets. Secrets should always be populated on the destination cluster by a secrets operator. But I understand there are pragmatic reasons to prefer Helm lookups. :-) |
Good point. I think External Secrets Operator has implemented this pattern already: https://external-secrets.io/latest/api/clustersecretstore/. They ask you to provide a Kubernetes Service Account (which is authorized to connect to various secrets-provider backends, but those are irrelevant in this context) and use that KSA to authenticate. I guess a similar pattern could be implemented in ArgoCD: users provide a Kubernetes Service Account which has the proper permissions to do whatever they want (Helm lookup, in this case). Also, if this pattern works out well, ArgoCD could even define a specific Kubernetes Service Account per Application, to further limit ArgoCD's own root account permissions. Wdyt? |
@crenshaw-dev do you have an example of a secrets operator that can manage Kubernetes webhooks? Today Helm can manage the entire process of webhook creation, with the missing piece of the puzzle being the lookup function: |
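The Helm-native pattern being referred to typically combines Sprig's certificate functions with lookup, so the CA is generated once and reused on later renders (a condensed sketch; resource names are hypothetical):

```yaml
{{- /* Reuse the CA from a previous install if present; otherwise generate one. */}}
{{- $secret := lookup "v1" "Secret" .Release.Namespace "my-webhook-tls" }}
{{- if $secret }}
caBundle: {{ index $secret.data "ca.crt" }}
{{- else }}
{{- $ca := genCA "my-webhook-ca" 365 }}
caBundle: {{ $ca.Cert | b64enc }}
{{- end }}
```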
@shinebayar-g I think service account impersonation gets us close to what you're describing. But instead of configuring the service account at the app level, we do it at the project level. In my opinion, that's sufficient to answer the "lookup as who" question. Now the problem is "how do we get the appropriate creds to the repo-server?" @dudicoco I think cert-manager can do what you're describing: https://cert-manager.io/docs/concepts/ca-injector/ |
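For reference, the impersonation feature mentioned here is configured at the project level, roughly like this (a sketch based on the alpha feature's documented shape; names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: my-project
  namespace: argocd
spec:
  destinationServiceAccounts:
    # Syncs to this destination run as the given service account,
    # which could also scope what a future lookup is allowed to see.
    - server: https://kubernetes.default.svc
      namespace: guestbook
      defaultServiceAccount: guestbook-deployer
```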
@crenshaw-dev I'm aware that cert-manager is capable of that, but I don't think users should be forced to use yet another complex tool just for the sake of generating self-signed certs for webhooks (assuming they don't need cert-manager for other things). |
I'll give another example: what if I just want to generate a random secret to be consumed by my app (for example, the Grafana admin password)? |
Both of these use cases require secret lifecycle management. You need to update certs when they expire, and you need to rotate secrets on a regular basis. Argo CD is not a lifecycle management tool. It doesn't know when you need to update secrets and when you need to avoid updating secrets. By making secret generation a side-effect of a GitOps deployment, you're forcing imperative state management into a system that's designed for continuous reconciliation of declarative state. Kubernetes works best with a micro-service architecture with strong separation of concerns. Cert manager is one of those micro-services. ESO could be another, providing a generated Grafana password. Mixing secret lifecycle management concerns into a GitOps controller is asking for pain. |
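For the Grafana-password example, ESO's generator API can mint the secret declaratively; here is a sketch based on ESO's documented Password generator (apiVersions and fields may differ by ESO version):

```yaml
apiVersion: generators.external-secrets.io/v1alpha1
kind: Password
metadata:
  name: grafana-admin-password
spec:
  length: 32
  digits: 5
  symbols: 5
  allowRepeat: true
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: grafana-admin
spec:
  refreshInterval: "0"  # generate once; change to rotate on a schedule
  target:
    name: grafana-admin
  dataFrom:
    - sourceRef:
        generatorRef:
          apiVersion: generators.external-secrets.io/v1alpha1
          kind: Password
          name: grafana-admin-password
```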
It's nice to find out that external-secrets added support for password generation, but that doesn't mean that everyone who wants to work with ArgoCD should have to use this solution. There are many ways to generate secrets; for example, you could also generate them with Terraform. So it feels opinionated to me to say that Helm lookups are "forcing imperative state management into a system that's designed for continuous reconciliation of declarative state"; it's actually the other way around: forcing users to adopt external-secrets and cert-manager as the only possible solutions for automatic secret generation is itself opinionated. |
It is opinionated, and there's a cost/benefit to having that opinion. The cost is that people are forced to adapt and sometimes to hack around the opinion. The benefit is that some folks will adopt more modern practices and that Argo CD avoids introducing a costly feature. I'm not completely opposed to this feature. I think service account impersonation has carried us a long way towards making this viable. The next big hurdle is to design a way for the repo-server to securely read external cluster state when hydrating manifests. Depending on the complexity/risk of that solution, I may or may not be able to support merging it. |
Hello,
Happy new year Guys !!
So, I have a requirement to build the imagePath by reading the dockerRegistryIP value from a ConfigMap, so that I need not ask the user explicitly where the registry is located.
Helm 3 introduced support for this: a lookup function through which a ConfigMap can be read at render time, like this:

```yaml
{{ (lookup "v1" "ConfigMap" "default" "my-configmap").data.registryURL }}
```
But the lookup function returns nil when templates are rendered using "helm template" or a client-side dry run ("helm install --dry-run"); as a result, when you access a field on nil, you will see an exception like this:

```
nil pointer evaluating interface {}.registryURL
Use --debug flag to render out invalid YAML
```
The solution that was proposed on Stack Overflow is to use "helm template --validate" instead of "helm template".
Can you guys add support for this?
Right now I am populating the docker-registry-ip like this, but with this kustomize-plugin approach I lose the ability to render the values.yaml file as a config screen through which the user can override certain values; i.e., the fix for one issue has led to another issue.