
Choose version of helm (between helm-2.x, helm-3.0.x, helm-3.1.x and so on...) #4058

Open
ngc104 opened this issue Aug 6, 2020 · 10 comments
Labels
enhancement New feature or request

Comments

@ngc104 commented Aug 6, 2020

Problem

The following use-case is not exactly my use-case, but I'm trying to define it as precisely as I can.

I have 5 Kubernetes clusters:

  • a cluster named cluster-k13h2 in version 1.13 where helm-2.x is installed
  • a cluster named cluster-k13h3 in version 1.13 where helm-3.0.x is installed
  • a cluster named cluster-k13h23 in version 1.13 where both helm-2.x and helm-3.0.x are installed
  • a cluster named cluster-k15 in version 1.15 where helm-3.2.x is installed
  • a cluster named cluster-k17 in version 1.17 where helm-3.2.x is installed

I have 2 Helm charts to deploy:

  • helm chart named hc1 is defined with apiVersion: v1
  • helm chart named hc2 is defined with apiVersion: v2 (requires helm version 3.x)

The helm version I'm running with ArgoCD is 3.2.0 (from the argoproj/argocd:v1.6.1 image)
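
For reference, the apiVersion distinction sits in each chart's Chart.yaml. A minimal sketch (chart names are taken from the list above; all other values are placeholders):

```yaml
# hc1/Chart.yaml — the Helm 2-era chart format
apiVersion: v1
name: hc1
version: 0.1.0

---
# hc2/Chart.yaml — apiVersion v2 requires Helm 3
# (dependencies move from requirements.yaml into Chart.yaml)
apiVersion: v2
name: hc2
version: 0.1.0
dependencies:
  - name: some-subchart                      # placeholder
    version: 1.0.0
    repository: https://example.com/charts   # placeholder
```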

What do I expect

What do I expect? And what can I do with the helm CLI?

  • cluster-k13h2: I can install hc1 only, because helm-3.x is needed for hc2
  • all other clusters: I can install both hc1 and hc2.

What do I get with ArgoCD 1.6?

  • cluster-k13h2: I can install hc1 with success (not hc2, but that is expected, see above).
  • cluster-k13h3: total failure. For some reason, helm 3.2.0 does not support a feature of K8S-1.13 needed by hc2. And hc1 will require Tiller, and Tiller is not installed on that cluster!
  • cluster-k13h23: success for hc1, but failure for hc2 for the same reason as cluster-k13h3.
  • cluster-k15: I can install hc2 with success. But what happens with hc1? Because of apiVersion: v1 it will require Tiller, and Tiller is not installed on that cluster!
  • cluster-k17: same as cluster-k15: hc2 installs with success, but hc1 will require Tiller, and Tiller is not installed on that cluster!

Notice that cluster-k15 and cluster-k17 have the same behaviour here. But if you instead prefer a helm-3.0.x binary to make things work on cluster-k13h3 and cluster-k13h23, it will work on cluster-k15 and fail on cluster-k17.


Question

How can we specify the version of helm we want to use?

This issue may be linked to issue #3872

@ngc104 ngc104 added the bug Something isn't working label Aug 6, 2020
@rachelwang20 (Contributor) commented Aug 17, 2020

feat: Add configurable Helm version (#3872) #4111

@rachelwang20 rachelwang20 added the verify Solution needs verification label Aug 24, 2020
@ngc104 (Author) commented Aug 26, 2020

Hello,
I read #4111. Nice job! 👍

However, there is no distinction between the possible versions of v3.

For this issue, there are 2 points of view.
1/ You do not yet support multiple versions of Kubernetes at the same time (for example, cluster-k13h3 and cluster-k17 cannot be supported by the same instance of Argo-CD: I can force the binary version of helm I want for "v3", but I cannot support 2 distinct "v3" binaries of Helm at the same time). In that case this issue cannot be closed yet.

2/ You explicitly do not support it (too bad for all those who are still working with older versions of Kubernetes and have already deployed newer ones: there is no version of Helm that supports both K8s 1.13 and 1.17, for example, but there are people who work with both versions). In that case I guess you can close this issue 👎

Maybe you could accept values other than v2 and v3 to specify an exact version of Helm?
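
For illustration only, such a value could look like this in the Application spec (the v3.0 value is hypothetical; after #4111 only v2 and v3 are accepted):

```yaml
spec:
  source:
    helm:
      version: v3.0   # hypothetical value: pin to a Helm 3.0.x binary
```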

@jessesuen jessesuen added this to the v1.8 milestone Aug 28, 2020
@jessesuen jessesuen assigned alexmt and unassigned rachelwang20 Aug 28, 2020
@loxley (Contributor) commented Aug 29, 2020

Hi,

If I am not mistaken, Helm v2 and v3 are baked into the Docker image, where the respective versions of Helm are specified. In the code they are then referred to as either 'helm' or 'helm2'. Maybe you could bake another version of helm into the image, and we could make the 'binaryName' field overridable?

I could fix that if it's deemed a good enough solution. What do you all think, @alexmt?

(or I could be missing the point completely :) )

The '--helm-version' flag could then instead take the 'binaryName', or we keep it and just add a '--helm-binary' flag for overriding the binary.

@ngc104 (Author) commented Aug 31, 2020

Hello,

> Maybe it could be an option for yourself to bake in another version of helm to the image and we could make the 'binaryName' field to be overridable?

If I understand you correctly, this could be a nice idea. However, when you mention --helm-version, I'm not sure I follow...

Here are my new thoughts (using your idea to override the binary name):
The good idea: be able to specify the binary.
Where? In the application definition.
Why? Because my project may be deployed on multiple clusters, including k8s-1.13, k8s-1.15, k8s-1.17 and the newer k8s-1.19 (released very recently), and no single helm3 binary supports all of them. And because the helm version depends on the destination server, which is defined in the application.
How? No idea. As a full path? As a relative path to a place where all helm binaries may be present?

I would define

  • an application with destination.server=k8s-1.13 and helm binary as /whatever/helm-3.0.x
  • an application with destination.server=k8s-1.15 and helm binary as /whatever/helm-3.3.x
  • an application with destination.server=k8s-1.17 and helm binary as /whatever/helm-3.3.x (may change later to use the latest version)
  • an application with destination.server=k8s-1.19 and helm binary as /whatever/helm-3.3.x (may change as it is not mentioned in https://helm.sh/docs/topics/version_skew/#supported-version-skew yet)

Note 1: I'm not using Helm2, so I won't be able to describe a use case for helm2. Don't forget it if you want to support it.

Note 2: With the ability to import my own binaries of helm (https://argoproj.github.io/argo-cd/operator-manual/custom_tools/#adding-tools-via-volume-mounts) I will probably use that mechanism and load as many subversions of helm3 as I need. So (thanks to PR #4111) the use case can now be simplified to: how do I specify the helm3 binary I want to use for a given destination server?
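
A sketch of what that volume-mount approach can look like, loosely adapted from the custom tooling docs linked above (the download URL, image tag, and paths are illustrative, not prescribed):

```yaml
# Sketch: patch the argocd-repo-server Deployment to ship an extra helm binary
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
spec:
  template:
    spec:
      volumes:
        - name: custom-tools
          emptyDir: {}
      initContainers:
        # Downloads the wanted helm release into the shared volume at startup
        - name: download-helm
          image: alpine:3.12
          command: [sh, -c]
          args:
            - wget -qO- https://get.helm.sh/helm-v3.0.3-linux-amd64.tar.gz | tar -xzf - &&
              mv linux-amd64/helm /custom-tools/helm-3.0.3
          volumeMounts:
            - mountPath: /custom-tools
              name: custom-tools
      containers:
        - name: argocd-repo-server
          volumeMounts:
            # Exposes the extra binary alongside the bundled helm/helm2
            - mountPath: /usr/local/bin/helm-3.0.3
              name: custom-tools
              subPath: helm-3.0.3
```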

@loxley (Contributor) commented Aug 31, 2020

Hello,

> Maybe it could be an option for yourself to bake in another version of helm to the image and we could make the 'binaryName' field to be overridable?
>
> If I understand you correctly, this could be a nice idea. However, when you mention --helm-version, I'm not sure I follow...

I think it would be good to keep the ability to specify v2 or v3, as already done in #4111, because v2 and v3 do not share the same command-line options.

But we could introduce binary to the spec, and a corresponding command-line flag like --helm-binary, so people can specify a specific version of helm (as in your case).

> Here are my new thoughts (using your idea to override the binary name):
> The good idea: be able to specify the binary.
> Where? In the application definition.

The version field is already part of the Helm Application spec, so a new binary option would live under the same spec path.

Something like:

spec:
  source:
    helm:
      version: v3
      binary: helm3.0whatever

> Why? Because my project may be deployed on multiple clusters, including k8s-1.13, k8s-1.15, k8s-1.17 and the newer k8s-1.19 (released very recently), and no single helm3 binary supports all of them. And because the helm version depends on the destination server, which is defined in the application.
> How? No idea. As a full path? As a relative path to a place where all helm binaries may be present?

> I would define
>
> • an application with destination.server=k8s-1.13 and helm binary as /whatever/helm-3.0.x
> • an application with destination.server=k8s-1.15 and helm binary as /whatever/helm-3.3.x
> • an application with destination.server=k8s-1.17 and helm binary as /whatever/helm-3.3.x (may change later to use the latest version)
> • an application with destination.server=k8s-1.19 and helm binary as /whatever/helm-3.3.x (may change as it is not mentioned in https://helm.sh/docs/topics/version_skew/#supported-version-skew yet)
>
> Note 1: I'm not using Helm2, so I won't be able to describe a use case for helm2. Don't forget it if you want to support it.

This would most likely also be supported by what I suggest above.

> Note 2: With the ability to import my own binaries of helm (https://argoproj.github.io/argo-cd/operator-manual/custom_tools/#adding-tools-via-volume-mounts) I will probably use that mechanism and load as many subversions of helm3 as I need. So (thanks to PR #4111) the use case can now be simplified to: how do I specify the helm3 binary I want to use for a given destination server?

Look at my answer above :)

@ngc104 (Author) commented Aug 31, 2020

> Something like:
>
>     spec:
>       source:
>         helm:
>           version: v3
>           binary: helm3.0whatever

LGTM

@loxley (Contributor) commented Aug 31, 2020

@jessesuen or @alexmt, I can take this one if you think this sounds like a good solution?

@loxley (Contributor) commented Sep 4, 2020

Hi, I have more or less implemented the ability to set the Helm binary to use. But I can see that there are (of course) differences between the different versions of Helm, and I'm not sure it is feasible to override each and every option that differs between versions.

For instance, when testing with Helm 3.0.3, the current struct for Helm 3.2.0 in ArgoCD is not compatible with that Helm version, as it adds '--include-crds' as an additional template arg and Helm 3.0.3 bails out.

It is entirely possible to fix this, by adding an additional parameter/flag that handles the additional template args in some way so it works with Helm versions other than 3.2.0.

I am just not sure if it is something that is needed by a lot of people?

It would be nice to get the maintainers' view on supporting older versions of Helm.
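
To make the concern concrete, here is a hypothetical extension of the spec proposed earlier in this thread; the extraTemplateArgs field is invented for illustration and is not an existing option:

```yaml
spec:
  source:
    helm:
      version: v3
      binary: helm-3.0.3      # proposed earlier in this thread
      # hypothetical field: let users drop or override args such as
      # --include-crds that only exist in Helm >= 3.1
      extraTemplateArgs: []
```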

@hawksight commented:

@loxley - with the proposed spec, I'm assuming that you have to install or mount the specific binary into argocd using the custom tooling method? If the binary is not found, will it error or fall back to the installed version?

I guess for each version you'd have to maintain a list of args that are allowed to be passed to the binary? Unless you can run the binary, parse the available args, and only pass in the ones that binary allows?

I personally think this type of feature might become a rabbit hole of potential issues.
Custom tooling / plugins should cover this use case; k8s 1.13 is quite an old version to support now.

But it would really be worth having a statement of supported versions for ArgoCD, e.g. k8s 1.X to 1.XX, Helm v2.X & v3.X, Kustomize, etc.

@alexmt alexmt added enhancement New feature or request and removed verify Solution needs verification bug Something isn't working labels Nov 17, 2020
@alexmt alexmt modified the milestones: v1.8, v1.9 Nov 17, 2020
@alexmt alexmt removed their assignment Nov 17, 2020
@alexmt alexmt modified the milestones: v2.0, v2.1 Apr 2, 2021
@alexmt alexmt modified the milestones: v2.1, v2.2 Jul 2, 2021
@alexmt alexmt removed this from the v2.2 milestone Dec 8, 2021
@Phil0ctetes commented:

Hi, I'm currently facing an issue where I need to pin Helm to version 3.7.2 to be compatible with the OCI repository we are using. In newer Helm versions the indexing of tags changed, which breaks helm dependency build (update). For other installations not depending on an OCI repository we would like to use the latest Helm version.
