add the valueFiles value to the FluxHelmRelease #1468
Conversation
* Allows reading values from one or more files
* These have the lowest priority; values override them
* Added a private function to merge chartutils.Values
* Updated release.go to parse the new values and files
* Updated the helm-integration docs to give examples and describe all values
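To illustrate the merge behaviour described above (file values at the lowest priority, inline values overriding them), here is a minimal sketch in Go against Helm v2's chartutil.Values; mergeValues is a hypothetical helper for illustration, not the PR's actual private function.

```go
// A minimal sketch, not the PR's implementation: deep-merge two
// chartutil.Values maps (Helm v2's map[string]interface{}), with the second
// argument taking precedence -- so file values sit at the lowest priority
// and the FluxHelmRelease's inline values override them.
package main

import (
	"fmt"

	"k8s.io/helm/pkg/chartutil"
)

// mergeValues overlays overrides on top of base. Nested maps are merged
// recursively; for any other key collision the override wins.
func mergeValues(base, overrides chartutil.Values) chartutil.Values {
	out := chartutil.Values{}
	for k, v := range base {
		out[k] = v
	}
	for k, v := range overrides {
		baseMap, baseOK := out[k].(map[string]interface{})
		overMap, overOK := v.(map[string]interface{})
		if baseOK && overOK {
			out[k] = mergeValues(baseMap, overMap)
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	fromFile := chartutil.Values{"replicaCount": 5, "logLevel": nil}
	fromFHR := chartutil.Values{"replicaCount": 3, "hpa": map[string]interface{}{"enabled": false}}
	// replicaCount from the FHR (3) wins over the file's 5.
	fmt.Println(mergeValues(fromFile, fromFHR))
}
```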
Looks good, I added some suggestions.
```diff
@@ -24,6 +24,7 @@ type FluxHelmRelease struct {
 type FluxHelmReleaseSpec struct {
 	ChartGitPath string   `json:"chartGitPath"`
 	ReleaseName  string   `json:"releaseName,omitempty"`
+	ValueFiles   []string `json:"valueFiles,omitempty"`
```
I can see why this is useful, in the absence of other ways of combining values. I think the use case it's serving is: "use the same manifests for different clusters, but specialise them with per-cluster defaults". Is that more or less it? It seems like a bit of a trapeze act to put a file path in the FluxHelmRelease resource and expect it to be present in the Helm operator. Could it be adapted to work as suggested in #1172 instead, and still do what you need?
The main use cases I have for this are a few variables which are properties of the cluster, but this generic solution could be used in a bunch of ways (a different file for every chart you want to install, if you are feeling crazy). That said, I expected every/most FluxHelmReleases to use the same path if they needed access to these cluster values.

At the moment you need to mount a config map and two secrets just to get helm-operator running, so adding a second config map doesn't seem too difficult. The benefit is that each cluster you operate doesn't need its own set of FluxHelmReleases. In my case we will deploy ~30 charts, across 5 regions, with 3 clusters per region; that is a lot of FHRs (roughly 30 × 5 × 3 = 450).

Of course, if there is a better way of doing this quickly I would be all for it. The other option I thought of was to contribute to Helm (Tiller in particular); see helm/helm#4655. However, with Tiller probably being removed in the future, it didn't seem the right place.
OK, so if you could refer to a secret from the FluxHelmRelease, and it would be combined with the values, would that work for you?
Yes, but that feels very similar to this approach with the differences being:
Is the secret functionality already implemented?
Right; both could be provided for, though secrets are in some sense more general, since they deal better with things that are secret and have specialised tooling (and are otherwise equivalent in utility, I think).
In this case, I think they would be read every time an install is attempted, yes.
Yup. So, it would not include the ability to stick things in the operator's filesystem by whatever means, and then refer to them. If that's a hard requirement, we can find another way.
Nope -- but I reckon this PR is pretty close to implementing it, since it already has the code path for finding and then combining the values file per FHR.
OK, I will look into changing this PR to get the secret. 👍
* add kubeConfig argument to Install function
* pass chartutils kubeClient to install function
* get values.yaml from secrets and merge in order
* update FluxHelmRelease type and CRDs
remove valueFiles, add valueFileSecrets
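To make the new flow concrete, here is a hedged sketch of what these commits describe: fetch each secret named in spec.valueFileSecrets, read its values.yaml key, merge the results in order, and apply the FHR's inline values last so they win. The exact client-go and chartutil signatures vary by version, collectValues is an illustrative name rather than the operator's real function, and it reuses the mergeValues helper sketched earlier.

```go
// Sketch only, under the assumptions stated above; not the operator's code.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/helm/pkg/chartutil"
)

// collectValues builds the final values for a release in namespace ns:
// secrets are merged in the order listed, then inline spec.values on top.
func collectValues(ctx context.Context, kube kubernetes.Interface, ns string,
	secretNames []string, inline chartutil.Values) (chartutil.Values, error) {

	merged := chartutil.Values{}
	for _, name := range secretNames {
		// Newer client-go versions take a context here; older ones omit it.
		sec, err := kube.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return nil, fmt.Errorf("fetching secret %s/%s: %v", ns, name, err)
		}
		data, ok := sec.Data["values.yaml"]
		if !ok {
			return nil, fmt.Errorf("secret %s/%s has no values.yaml key", ns, name)
		}
		vals, err := chartutil.ReadValues(data) // YAML bytes -> chartutil.Values
		if err != nil {
			return nil, err
		}
		merged = mergeValues(merged, vals) // later secrets override earlier ones
	}
	// Inline values from the FluxHelmRelease have the highest priority.
	return mergeValues(merged, inline), nil
}
```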
This .. 🥁 .. looks like it'll work! Does it work? :-)
Thank you especially for providing updated documentation.
I have tried this out on minikube with the following objects, and it works well. Having to put the defaults into every namespace you want to install into is the limitation at the moment, but it avoids deploys having to cross namespace boundaries.
The values.yaml encoded in the secret:

```yaml
replicaCount: 5
logLevel:
```

The secret itself:

```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: cluster-bootstrap
  namespace: dev
data:
  values.yaml: cmVwbGljYUNvdW50OiA1CmxvZ0xldmVsOgoK
```

The FluxHelmRelease referencing it:

```yaml
- apiVersion: helm.integrations.flux.weave.works/v1alpha2
  kind: FluxHelmRelease
  metadata:
    labels:
      chart: podinfo
    name: podinfo-dev
    namespace: dev
  spec:
    chartGitPath: podinfo
    releaseName: podinfo-dev
    valueFileSecrets:
    - name: cluster-bootstrap
    values:
      hpa:
        enabled: false
      image: stefanprodan/podinfo:dev-hdtwcel9
      replicaCount: 3
```

This installs successfully, and we can check the combined values:

```yaml
hpa:
  enabled: false
image: stefanprodan/podinfo:dev-hdtwcel9
logLevel: null
replicaCount: 3
```

The replicaCount of 3 from the FluxHelmRelease's values overrides the 5 from the secret, and logLevel comes through from the secret as null.
values
Discussed adding a change similar to this with @stefanprodan on Zoom. This is a first pass (and some of my first Go), so let me know what I could change.
I didn't add tests as I am not up to speed with Go tests; I couldn't see any tests for helm-operator either 🙁