
Helm value mapping #1238

Closed
kfox1111 opened this issue Mar 24, 2019 · 10 comments
Labels
language/helm — Issue is related to a Helm operator project
lifecycle/rotten — Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@kfox1111

Feature Request

It is not always possible to have a clean values API with Helm (when you nest charts, for example).

It would be good if the operator could have a step between the CR and the values that lets the operator developer modify values on the way from the CR, for example duplicating a value into multiple places.

@lilic lilic added the language/helm Issue is related to a Helm operator project label Mar 25, 2019
@joelanford joelanford self-assigned this Mar 26, 2019
@joelanford
Member

@kfox1111 I may be misunderstanding your question, but it sounds like you're trying to figure out how to share and pass values around to subcharts, which Helm supports natively.

Those features should work with the helm-operator as well.
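For reference, Helm's native mechanism for sharing a value across subcharts is the `global` block: anything under the `global` key in the parent chart's values is visible to every subchart as `.Values.global.*`. A minimal sketch (chart and key names here are hypothetical):

```yaml
# parent chart's values.yaml
global:
  namespace: foo        # every subchart sees this as .Values.global.namespace

# a subchart can also be configured directly via a key matching its name:
magicnamespace:
  namespace: foo
```

Note that `global` only helps if the subchart's templates actually reference `.Values.global.*`, which third-party subcharts often do not.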

@kfox1111
Author

yes/no...

Chart values can often only be read by a subchart from the subsection of the values named after the subchart. This is often problematic when the subchart is written by someone other than the person writing the main chart.

For example, there is currently no way to map one incoming value to one or more values in the chart.

For example, take this instantiation of a helm chart we currently use:

username=foo
helm install namespace --name $username --namespace ${username}-admin \
  --set magicnamespace.namespace=$username \
  --set ingress.controller.scope.namespace=$username 
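Without a mapping step, the CR spec would have to mirror those `--set` flags verbatim, duplicating the username. The equivalent values file (same keys as the example above) would be:

```yaml
magicnamespace:
  namespace: foo
ingress:
  controller:
    scope:
      namespace: foo
```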

Ideally Helm would have a feature to deal with this, but it doesn't. Barring that, I would like the CR to be clean, so:

kind: helmnamespace
spec:
  user: foo

would go through some kind of mapper that could take spec.user and assign its value to the values of

  • magicnamespace.namespace,
  • ingress.controller.scope.namespace

before sending it to helm for processing.

Does that help?

@joelanford
Member

@kfox1111 It does. Thanks. And I can definitely see how that would be useful for subcharts authored and maintained outside of your control.

However, I'm hesitant to add this feature since the Helm operator is basically a light wrapper around the Helm install, upgrade, uninstall workflow and the CR spec maps directly to the --values file. In general, we want to keep the Helm operator aligned with Helm's features since one of its primary goals is to introduce Helm chart developers to operators and make it simple for them to package an existing chart as an operator.

Thinking "out loud" here, but one way to achieve what you're trying to do may be to use a two-tiered approach. You could have two CRD/helm chart mappings in your watches.yaml, where your primary CRD/helm chart could have your high level values and a single template for your secondary CR, where you would be able to fill in $username in all of the necessary places.
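The two-tiered idea could look roughly like this in `watches.yaml` (the group, kinds, and chart paths below are hypothetical):

```yaml
# primary CR: holds the clean, high-level spec; its chart's only template
# renders the secondary CR with the value filled into every required place
- group: example.com
  version: v1alpha1
  kind: HelmNamespace
  chart: helm-charts/namespace-mapper

# secondary CR: produced by the mapper chart, drives the real chart
- group: example.com
  version: v1alpha1
  kind: HelmNamespaceInternal
  chart: helm-charts/namespace
```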

@kfox1111
Author

kfox1111 commented Apr 2, 2019

Yeah, I wondered if that would be the case. I'm kind of split on the whole thing. Having such functionality may be a selling point for those wanting to move from pure Helm to operator-wrapped Helm. I don't want to overcomplicate the Operator SDK though if it's just to work around a lack of a Helm feature. But Helm doesn't provide a CR feature at all, so coming up with a good, clean CR is kind of in scope for the Operator SDK.

I was thinking something relatively simple, like the ability to specify a set of JSON patches to apply from the CR to the values. A JSON patch using the copy operation would do the trick easily, I think. JSON patch support may be useful for Ansible and Go as well, not just the Helm case. Maybe that would make it more appealing?
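Such a mapping could be sketched as RFC 6902 operations written in YAML, treating the CR as the input document and the values as the output (the configuration format and key names are hypothetical; only the `copy` semantics come from the JSON Patch spec):

```yaml
# hypothetical mapping applied between the CR spec and the Helm values
- op: copy
  from: /spec/user
  path: /magicnamespace/namespace
- op: copy
  from: /spec/user
  path: /ingress/controller/scope/namespace
```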

Hmm... So if I'm understanding right, you're suggesting making a second mapping/helm chart that does CRD1 -> CRD2, where CRD2 is used by the operator to drive the real helm chart. The secondary chart does templating of CRD2 based on the values that came from CRD1? That could work, but it's kind of twisty: user -> CRD1 -> operator -> helm -> CRD2 -> operator -> helm -> ? It also weaves in and out of YAML -> Go templates -> YAML, etc.

@joelanford
Member

> Hmm... So if I'm understanding right, you're suggesting making a second mapping/helm chart that does CRD1 -> CRD2, where CRD2 is used by the operator to drive the real helm chart. The secondary chart does templating of CRD2 based on the values that came from CRD1? That could work, but it's kind of twisty: user -> CRD1 -> operator -> helm -> CRD2 -> operator -> helm -> ? It also weaves in and out of YAML -> Go templates -> YAML, etc.

Yeah, that's the only way I can think of making it work as is. I agree it's not pretty and may have some pitfalls.
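For completeness, the intermediate chart in that scheme would need only one template that stamps the value into every required place of the secondary CR (the kind and key names below are hypothetical, matching the earlier example):

```yaml
# templates/internal-cr.yaml in the intermediate "mapper" chart
apiVersion: example.com/v1alpha1
kind: HelmNamespaceInternal
metadata:
  name: {{ .Values.user }}
spec:
  magicnamespace:
    namespace: {{ .Values.user }}
  ingress:
    controller:
      scope:
        namespace: {{ .Values.user }}
```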

@shawn-hurley would this be applicable/desirable in the Ansible operator? I'm not familiar enough with Ansible to know if it is capable of threading a single variable through multiple roles/playbooks.

More broadly, it seems like this is a use case for a hybrid operator, which is basically an Ansible or Helm operator converted to a Go operator using operator-sdk migrate. The goal of hybrid operators is to give operator developers the ability to incorporate custom logic while still getting the benefits of the Ansible or Helm operator.

We're still trying to nail down use cases and public APIs for Ansible and Helm hybrid operators. When we get around to it, we'll have more discussion in #670 and possibly #1186.

For this, maybe we could make the CR spec -> values.yaml conversion overridable, such that the default direct-copy mechanism could be replaced with another function provided by the hybrid operator developer.

@joelanford
Member

@kfox1111 We've had more team discussions about hybrid operators and I brought this use case up as something we should support. It has been incorporated more generally into #1307 as the first use case listed:

I want to augment reconciliation logic with go code that runs before or after my ansible or helm based reconciliation logic.

Do you think this is a reasonable approach for your use case? If so, can we close this and consolidate discussion in #1307?

@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 5, 2019
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 5, 2019
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot

@openshift-bot: Closing this issue.

In response to this:

> Rotten issues close after 30d of inactivity.
>
> Reopen the issue by commenting /reopen.
> Mark the issue as fresh by commenting /remove-lifecycle rotten.
> Exclude this issue from closing again by commenting /lifecycle frozen.
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Development

No branches or pull requests

5 participants