
Deploy the same repo to the same namespace with different values via fleet.yaml #344

Closed
Shaked opened this issue Apr 21, 2021 · 13 comments
Shaked commented Apr 21, 2021

I have a setup of a service that can run in different modes. Each mode requires a different deployment/pod but at the end it's all about the different helm values I supply to it.

Currently I want to use the same fleet deployment (GitRepo via Rancher) to deploy the different modes to the same cluster within the same namespace.

At first, I thought it would be quite simple and that the only thing I'd have to do is change fleet.yaml to something like:

defaultNamespace: clearml
targetCustomizations:
- name: c1
  clusterSelector:
    matchLabels:
      provider: gcp
      env: dev
      clearml-agent: normal
  kustomize:
    dir: overlays/secrets/gcp/dev
  helm:
    values:
      queueName: queue
- name: c1-gpu
  clusterSelector:
    matchLabels:
      provider: gcp
      env: dev
      clearml-agent: normal
  kustomize:
    dir: overlays/secrets/gcp/dev
  helm:
    values:
      queueName: queue-gpu

I expected this to create two deployments, one for c1 and another for c1-gpu. However, it didn't work.

I thought a naming conflict might be the issue, so I even tried overriding the deployment name by adding the following under c1-gpu:

nameOverride: "clearml-agent-gpu"
fullnameOverride: "clearml-agent-gpu"

Unfortunately, that didn't do the job.

I think the main issue here is that fleet/rancher deploys the Helm chart only once and therefore doesn't know how to handle multiple deployments to the same namespace. I haven't checked, but I assume that if I used different namespaces, it would be installed correctly. I base this assumption mainly on my intuition, but I also saw that #210 might be related.

Is this something that could work somehow?

Currently I have solved this at the templating level. This means I have N deployment-<mode>.yaml templates, each guarded by a flag that enables/disables it, e.g.:

{{- if .Values.gpuAgent.enabled -}}
YAML TEMPLATE
{{- end }}
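A fuller sketch of this per-mode template workaround (the gpuAgent flag and the Deployment fields here are hypothetical, not from the actual chart):

```yaml
# templates/deployment-gpu.yaml -- rendered only when the mode's flag is enabled
{{- if .Values.gpuAgent.enabled -}}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-agent-gpu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Release.Name }}-agent-gpu
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-agent-gpu
    spec:
      containers:
      - name: agent
        image: {{ .Values.gpuAgent.image }}
        args: ["--queue", {{ .Values.gpuAgent.queueName | quote }}]
{{- end }}
```

Each targetCustomizations entry can then flip gpuAgent.enabled (and the other mode flags) per cluster, so a single release still renders one deployment per enabled mode.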

Shaked commented Aug 27, 2021

I just saw this line: https://github.com/rancher/fleet/blame/master/docs/gitrepo-structure.md#L124.

What is the reason for that? Why not scan the entire file for more possible deployments, e.g. same cluster with a different namespace, or even same cluster and same namespace with a different nameOverride/fullnameOverride?

@ibuildthecloud maybe you would know? Saw your name in the history log of this file.


bkendzior commented May 9, 2022

I have a similar need: a Helm chart deployed to a single cluster multiple times, each release in its own namespace and with its own values file. The only way I can come up with that Rancher/Fleet can support this currently is to create multiple GitRepos and point them to separate branches (with each branch customizing the fleet.yaml for that particular release), which is not a sustainable solution.
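That multiple-GitRepo workaround would look roughly like this (the repo URL, branch names, and labels are hypothetical), with one GitRepo resource per release, each pointing at a branch whose fleet.yaml carries that release's overrides:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: myapp-release-a
  namespace: fleet-default
spec:
  repo: https://github.com/example/myapp-fleet
  branch: release-a   # this branch's fleet.yaml sets namespace/values for release A
  paths:
  - chart
  targets:
  - clusterSelector:
      matchLabels:
        env: dev
---
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: myapp-release-b
  namespace: fleet-default
spec:
  repo: https://github.com/example/myapp-fleet
  branch: release-b   # same chart, different fleet.yaml overrides
  paths:
  - chart
  targets:
  - clusterSelector:
      matchLabels:
        env: dev
```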

Why not scanning the entire file for more possible deployments e.g same cluster different namespace or even same cluster

This is the behavior that I was expecting; what's the possibility of supporting this in the future?

I see mention of the ability to specify multiple fleet.yaml files in #210, which, although much less elegant (especially for use cases where you're deploying tens of applications to the same cluster), would also seem to solve this issue.

aiyengar2 (Contributor) commented:

I thought that it would work by overriding the name of the deployment, as maybe there was some issue with the same naming convention, so I even added the following under c1-gpu:

nameOverride: "clearml-agent-gpu"
fullnameOverride: "clearml-agent-gpu"

@Shaked Have you tried setting the release name under targetCustomizations[*].helm.releaseName instead? This boils down to

ReleaseName string `json:"releaseName,omitempty"`

I believe that might be why there is a conflict, since that's the value we reference when identifying which release to deploy onto the cluster:

if bd.Spec.Options.Helm == nil || bd.Spec.Options.Helm.ReleaseName == "" {
	return ns + "/" + bd.Name
}
return ns + "/" + bd.Spec.Options.Helm.ReleaseName
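Concretely, the suggestion is that each target customization set its own release name, along these lines (chart details omitted and names hypothetical; note that later comments in this thread report this still does not yield two releases on one cluster):

```yaml
defaultNamespace: clearml
targetCustomizations:
- name: c1
  helm:
    releaseName: clearml-agent-cpu
    values:
      queueName: queue
  clusterSelector:
    matchLabels:
      clearml-agent: normal
- name: c1-gpu
  helm:
    releaseName: clearml-agent-gpu
    values:
      queueName: queue-gpu
  clusterSelector:
    matchLabels:
      clearml-agent: normal
```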


Shaked commented Jun 1, 2022

@aiyengar2 I will have to test this. Following these comments https://github.com/rancher/fleet/blame/master/docs/gitrepo-structure.md#L143-L144 I am not sure it will work, because AFAIU, once a cluster matches a target in fleet.yaml/targetCustomizations, Fleet won't continue on to the next possible deployment.

I am not sure about the details behind the scenes, so my assumptions might be wrong here.


Shaked commented Jun 2, 2022

@aiyengar2 I have tested the following fleet.yaml on Rancher 2.6.5 and 2.5.8:

defaultNamespace: trivy-test
helm:
  releaseName: trivy-test
  repo: https://aquasecurity.github.io/helm-charts/
  chart: trivy
  version: 0.4.13
targetCustomizations:
- name: older-version-support1
  defaultNamespace: trivy-test-older
  helm:
    releaseName: trivy-test-older
    version: 0.4.8
  clusterSelector:
    matchLabels:
      trivyTest: true
- name: older-version-support2
  defaultNamespace: trivy-test
  clusterSelector:
    matchLabels:
      trivyTest: true
  helm:
    releaseName: trivy-test-very-old
    version: 0.4.2

Following your example above, I'd expect to have 3 deployments in 2 namespaces, i.e. kubectl get ns | grep trivy should return:

  • trivy-test
  • trivy-test-older

and helm ls -A | grep trivy should return:

  • trivy-test
  • trivy-test-older
  • trivy-test-very-old

But, the actual result is different:

k get ns | grep trivy
trivy-test-older              Active   7m40s
helm ls -A | grep trivy
trivy-test-older        	trivy-test-older   	2       	2022-06-02 10:55:52.263803342 +0000 UTC	deployed	trivy-0.4.13                                                                              0.25.0

Rancher UI shows as if everything has been installed:

2.5.8: (screenshot: Dashboard 2022-06-02 13-58-25)

2.6.5: (screenshot: Rancher 2022-06-02 13-58-53)

So AFAIU the suggested solution doesn't solve the problem of deploying a chart to the same cluster more than once. As I stated above, I tend to assume that this happens due to https://github.com/rancher/fleet/blame/master/docs/gitrepo-structure.md#L143-L144.

Also, notice that in targetCustomizations I specified different helm chart versions (0.4.8 and 0.4.2), but fleet ignored them and installed the version stated at the top of the file (0.4.13). Another bug?

To sum up, multiple deployments in the same cluster are crucial, is there a way to overcome this issue?

Thank you
Shaked

EDIT:

Any chance this is why fleet allows one deployment per cluster? https://cs.github.com/rancher/fleet/blob/a28fee1d8763cf3aa2b86acffca015909e4f2e67/pkg/bundle/match.go?q=Targets+language%3AGo#L102.

@aiyengar2 aiyengar2 self-assigned this Jun 23, 2022
@aiyengar2 aiyengar2 added this to the v2.6.x milestone Jun 23, 2022
aiyengar2 (Contributor) commented:

@Shaked thanks for your response and tests! I've added this issue to be triaged so we can investigate.


Shaked commented Jul 5, 2022

Hey @aiyengar2, is there any workaround for this? I find it quite strange that there aren't lots of issues about this topic. Deploying multiple versions to the same cluster sounds like a very reasonable and common need, doesn't it?


kingyzf commented Oct 22, 2022

Is there any plan to address this?


alexvkff commented Mar 6, 2023

Hello,

What is the status of this enhancement? It would be great to have this functionality.

Kind Regards
Alex

@kkaempf kkaempf removed this from the v2.6.x milestone Mar 30, 2023

manno commented May 4, 2023

Interesting use case. Do many people need that?

I don't think you can use target customizations to multiply a bundle into bundle deployments for the same cluster. The basic idea of a bundle is that it represents a folder in your git repo:

Fleet will create bundles from a git repository. This happens either explicitly by specifying paths, or when a fleet.yaml is found.
Each bundle is created from paths in a GitRepo and modified further by reading the discovered fleet.yaml file.

Target customization is described here:

Target customization are used to determine how resources should be modified per target
Targets are evaluated in order and the first one to match a cluster is used for that cluster.

So what you should be able to do is add folders, each with a fleet.yaml inside. That's a bit verbose, but you could generate the content before committing it to git.

However, multiple Helm releases like this will only work with external Helm charts. Otherwise you would have to duplicate the chart, too, since relative paths are not allowed in a GitRepo's paths:.
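A sketch of that layout (the folder names, release names, and chart repo URL are hypothetical), where each folder becomes its own bundle referencing an external chart:

```
repo/
├── agent-cpu/
│   └── fleet.yaml   # releaseName: agent-cpu, CPU-mode values
└── agent-gpu/
    └── fleet.yaml   # releaseName: agent-gpu, GPU-mode values
```

with each fleet.yaml along the lines of:

```yaml
# agent-gpu/fleet.yaml
defaultNamespace: clearml
helm:
  releaseName: agent-gpu
  repo: https://example.github.io/helm-charts/   # external chart, per the caveat above
  chart: clearml-agent
  values:
    queueName: queue-gpu
```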

And you need to make sure resources don't conflict; Fleet will happily adopt existing resources.

Additionally, I'm not sure Fleet will behave nicely when deploying multiple bundles into the same namespace; maybe Fleet will delete the namespace when one of the bundles is removed?

One piece of good news, though: this bug should be fixed in recent versions:

Also, notice that in targetCustomizations I stated a different helm chart version, and fleet didn't care about it and installed the one stated at the top of the file. Another bug?


sridhav commented Jun 10, 2023

I just found a workaround for this problem. Maybe try the following in your fleet.yaml to get the same deployment working in multiple namespaces, or in the same namespace. This worked for me on Fleet 0.6. Hope this is helpful for you as well:

namespace: lab
targetCustomizations:
- name: dev
  helm:
    values:
      replicas: 1
  clusterSelector:
    matchLabels:
      env: dev

namespace: lab2
targetCustomizations:
- name: dev
  helm:
    values:
      replicas: 3
  clusterSelector:
    matchLabels:
      env: dev


manno commented Jun 14, 2023

Currently I want to use the same fleet deployment (GitRepo via Rancher) to deploy the different modes to the same cluster within the same namespace.

One gitrepo resource cannot deploy the same path/bundle to the same cluster multiple times.

manno closed this as not planned on Jun 14, 2023

b2sc commented Sep 9, 2024

Just found a solution to this problem. may be try doing the following in your fleet.yaml to get same deployment working in multiple nameaspaces or same namespaces. this worked for me on fleet 0..6. hope this can be helpful for you guys as well

namespace: lab
targetCustomizations:
- name: dev
  helm:
    values:
      replicas: 1
  clusterSelector:
    matchLabels:
      env: dev

namespace: lab2
targetCustomizations:
- name: dev
  helm:
    values:
      replicas: 3
  clusterSelector:
    matchLabels:
      env: dev

This does not work (anymore), and it isn't valid YAML, because it contains certain keys (namespace and targetCustomizations) twice. Our Fleet installation uses the latest definition of each key; in the example above that would be namespace: lab2 and the customization with 3 replicas.
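The last-definition-wins behavior can be reproduced with a permissive YAML parser; for example PyYAML (a third-party library) silently keeps the last occurrence of a duplicate mapping key, while stricter parsers reject such a document outright:

```python
import yaml  # PyYAML: duplicate mapping keys are silently overwritten, last one wins

doc = """
namespace: lab
targetCustomizations:
- name: dev
  helm:
    values:
      replicas: 1
namespace: lab2
targetCustomizations:
- name: dev
  helm:
    values:
      replicas: 3
"""

parsed = yaml.safe_load(doc)
print(parsed["namespace"])  # lab2
print(parsed["targetCustomizations"][0]["helm"]["values"]["replicas"])  # 3
```

This matches the observed behavior: only the lab2 block with 3 replicas survives parsing, so Fleet never even sees the first deployment.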

But I must say, @manno's comment and closing this issue as "not planned" is not very customer friendly. I hear Argo CD supports such use cases.
