Importing existing k8s resources into a release #1281

Open
dudicoco opened this issue May 29, 2020 · 13 comments

@dudicoco
Contributor

I would like to import existing resources into my release.
This new helm feature allows you to adopt existing resources by annotating them: helm/helm#7649

I was thinking of implementing this using the new Kustomize feature #1172 but I couldn't get it to work.
Is it possible to use this feature just for patching existing k8s resources? If not, is there a different way to achieve this?

Thanks

@mumoshu
Collaborator

mumoshu commented May 29, 2020

@dudicoco Hey! Sorry, but I don't get it. For example, how would you expect your helmfile.yaml to look when importing existing k8s resources? That may help me understand what you are trying to do.

Anyway, I was thinking that you would just manually modify your existing resources with kubectl annotate and then run helmfile apply as usual, so that helmfile/helm will adopt the resources as explained in helm/helm#7649. I'm not sure how Helmfile can help with that.
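
For reference, the manual steps from helm/helm#7649 boil down to something like this (a sketch; my-deploy, foo, and default stand in for the actual resource, release name, and namespace):

kubectl -n default annotate deployment my-deploy \
  meta.helm.sh/release-name=foo \
  meta.helm.sh/release-namespace=default
kubectl -n default label deployment my-deploy \
  app.kubernetes.io/managed-by=Helm
# After this, helm upgrade --install (and hence helmfile apply) adopts the resource
# instead of failing because it already exists.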

@dudicoco
Contributor Author

dudicoco commented May 29, 2020

Hey @mumoshu, thanks for the reply! The helmfile.yaml will look like this:

releases:
- name: foo
  chart: my-repo/foo
  version: 1.0.0
  namespace: default
  installed: true
  jsonPatches:
  - target:
      version: v1
      kind: Deployment
      name: foo
      namespace: default
    patch:
    - op: replace
      path: /metadata/annotations
      value:
        meta.helm.sh/release-name: foo
        meta.helm.sh/release-namespace: default
    - op: replace
      path: /metadata/labels
      value:
        app.kubernetes.io/managed-by: Helm

Obviously I could patch the resources manually, but I would like the process to be automated with infrastructure as code.
When creating a new cluster with AWS EKS (and I assume with other providers as well) there are many resources that are created by default, such as aws-node and coredns, which I would like to manage using helm. A manual step during the provisioning process is less than ideal.

@mumoshu
Collaborator

mumoshu commented May 29, 2020

@dudicoco Thanks!

I believe you need a mechanism other than jsonPatches for that. jsonPatches in helmfile works by patching the "desired" resources about to be applied. On the other hand, what you need in order to let helm adopt existing resources is to patch the existing/live resources.

I do understand your use-case though. If I were you, I would probably enhance eksctl OR terraform-provider-eksctl OR helmfile to enable you to patch live resources. But implementing that in Helmfile seems like it would bloat Helmfile's scope? 🤔 WDYT?

@dudicoco
Contributor Author

@mumoshu I wonder if this feature would provide the solution:
#746

Not sure how it works exactly.

@mumoshu
Collaborator

mumoshu commented May 29, 2020

Please let me dive into my memory... anyway, I might have missed porting that feature in #1172, so it may not work now.

@mumoshu
Collaborator

mumoshu commented May 29, 2020

#746 was intended to work by importing existing resources before the first helm upgrade --install run, and the imported resources are specified via a list of NAMESPACE/KIND/NAME entries.

Would that work for you if it worked as advertised?

@mumoshu
Collaborator

mumoshu commented May 29, 2020

I thought it was used like this:

releases:
- name: aws-auth
  chart: ...
  adopt:
  - kube-system/configmap/aws-auth

@dudicoco
Contributor Author

@mumoshu I'm getting the following error: Error: unknown flag: --adopt

Perhaps the helm-x binary is needed for this feature to work?

Regarding patching via terraform, this is not possible at the moment:
hashicorp/terraform-provider-kubernetes#723

@mumoshu
Collaborator

mumoshu commented May 29, 2020

Yes, I believe you're correct. On the other hand, I thought helm x --adopt supported Helm 2. I probably need to rework that.

Otherwise, letting Helmfile leverage helm 3's ability to adopt resources would be nice as well.

@dudicoco
Contributor Author

Thanks @mumoshu.
Do you think this feature might be added any time soon?

In the meantime, I have found a workaround using hooks:

  hooks:
  - events:
    - presync
    showlogs: true
    command: kubectl
    args:
    - --context={{ $kubecontext }}
    - --namespace={{ $namespace }}
    - annotate
    - serviceaccount
    - my-svc
    - meta.helm.sh/release-name={{ .name }}
    - meta.helm.sh/release-namespace={{ $namespace }}
  - events:
    - presync
    showlogs: true
    command: kubectl
    args:
    - --context={{ $kubecontext }}
    - --namespace={{ $namespace }}
    - label
    - serviceaccount
    - my-svc
    - app.kubernetes.io/managed-by=Helm

This is working as expected; however, I have a problem: as part of our CI we run helmfile apply --args "--dry-run".
This command actually triggers the hook during the CI test. A solution would be to add an if block so that the hook is skipped during a dry run, but then helm would fail on a conflict with the existing resources: Error: rendered manifests contain a resource that already exists.
Do you have any advice on how to handle this chicken-and-egg problem?
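
For reference, the if block I had in mind would replace the hook's kubectl command with a small script along these lines (a sketch only; CI_DRY_RUN is a hypothetical variable our pipeline would set, not a helmfile feature, and it still leaves the conflict problem above unsolved):

#!/bin/sh
# Skip the adoption step entirely during CI dry runs.
if [ "${CI_DRY_RUN:-false}" = "true" ]; then
  exit 0
fi
# Otherwise annotate and label as in the hook above ($1/$2/$3 = context/namespace/release name).
kubectl --context "$1" --namespace "$2" annotate serviceaccount my-svc \
  meta.helm.sh/release-name="$3" \
  meta.helm.sh/release-namespace="$2"
kubectl --context "$1" --namespace "$2" label serviceaccount my-svc \
  app.kubernetes.io/managed-by=Helm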

@abatilo
Contributor

abatilo commented Nov 4, 2020

Would love to follow up on this. I'd like to see a way to modify resources like the EKS-installed coredns.

@dudicoco
Contributor Author

dudicoco commented Nov 4, 2020

@abatilo I've ended up importing the resources with a script on all of our existing clusters (it just annotates and labels the resources; see the sketch below).
I've also created a script that deletes the EKS resources (coredns, kube-proxy, aws-node) on new clusters prior to installing these components with helmfile.
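
For the record, the adoption script boils down to something like this (a sketch; the kind/name pairs are the usual EKS defaults, and using each component's own name as its release name is just an assumption):

#!/bin/sh
# Annotate + label the EKS-default workloads so helm 3 adopts them (helm/helm#7649).
NS=kube-system
for res in deployment/coredns daemonset/kube-proxy daemonset/aws-node; do
  name="${res#*/}"  # strip the kind prefix, e.g. deployment/coredns -> coredns
  kubectl -n "$NS" annotate --overwrite "$res" \
    meta.helm.sh/release-name="$name" \
    meta.helm.sh/release-namespace="$NS"
  kubectl -n "$NS" label --overwrite "$res" \
    app.kubernetes.io/managed-by=Helm
done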

@morremeyer

I’m a bit confused, as this seems to automagically work for me already. I migrated the storage for some of my releases (and will soon write a blog post about that; I can link it here if anybody is interested).

During that, I deleted the PVC resource and later recreated it (for the new PV the data was migrated to) without the labels helm uses to denote ownership (managed-by etc.).

After that, I had a PVC with the same name as the old one, but without the labels/annotations for helm.

I then executed helmfile diff, which came back empty, as the helm release did not change (but the cluster state did).

To then rectify the missing labels, I ran helmfile sync, after which the labels were present (and still are).

I have a gut feeling that this has to do with helm/helm#7649, but if somebody can clarify why that worked for me, I’d be really happy!

I’m running:

  • helm v3.4.2
  • helmfile v0.135.0
