
operator sdk FATA[0001] failed to create helm chart: failed to fetch chart dependencies from requirements.yaml #3785

Closed
vmohariya opened this issue Aug 26, 2020 · 8 comments
Labels: language/helm, lifecycle/rotten, triage/needs-information
Milestone: Backlog

vmohariya commented Aug 26, 2020

Test 1:
operator-sdk v0.19.2 fails to create a new operator when passed a packaged chart via --helm-chart=//.tgz (path masked).

With the same .tgz package, Helm is able to deploy the application on the Kubernetes cluster.
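For reference, a minimal direct-Helm check along these lines succeeds with the same package (release name is illustrative, not from the report):

$ helm install sample-core ./example-5g-core-3.2.2-1.tgz --dry-run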

Version Used:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.1 LTS
Release: 20.04
Codename: focal

$operator-sdk version:
operator-sdk version: "v0.19.2", commit: "4282ce9acdef6d7a1e9f90832db4dc5a212ae850", kubernetes version: "v1.18.2", go version: "go1.13.10 linux/amd64"

$helm version:
version.BuildInfo{Version:"v3.1.3", GitCommit:"0a9a9a88e8afd6e77337a3e2ad744756e191429a", GitTreeState:"clean", GoVersion:"go1.13.10"}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.1", GitCommit:"b9b84e0", GitTreeState:"clean", BuildDate:"2020-04-26T20:16:35Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Command Used:

$operator-sdk new sample-core --type=helm --kind=SampleCore --helm-chart=/home//example-5g-core-3.2.2-1.tgz --api-version=sample.com/v1

## Error output:
operator-sdk new sample-core --type=helm --kind=SampleCore --helm-chart=/home//example-5g-core-3.2.2-1.tgz --api-version=sample.com/v1
INFO[0000] Creating new Helm operator 'sample-core'.
FATA[0001] failed to create helm chart: failed to fetch chart dependencies: directory /mnt/c/Users//sample-core/helm-charts/5g-core not found

Whereas with the same Helm package, Helm is able to deploy the application. Our observation is that inside the Helm package there is a requirements.yaml file, which operator-sdk seems unable to resolve into dependencies while creating the operator, whereas the same requirements file is processed fine by Helm and the application gets deployed.

Providing here the contents of the requirements.yaml file:

# requirements.yaml:

dependencies:
  - name: 5g-core
    version: 3.2.2
    repository: "file://../5g-core"
    condition: 5g-core.enabled
  - name: 4g-epc
    version: "0.3.0"
    repository: "https://artifactory.masked-reponame.com/helm-virtual"
    condition: 4g-epc.enabled
  - name: prometheus-adapter
    version: 3.2.2
    repository: "file://../prometheus-adapter"
    condition: 5g-core.upf.upfAutoscaling.enabled
  - name: simon-metrics
    alias: metrics
    version: "0.2.0"
    repository: "https://artifactory.masked-reponame.com/helm-virtual"
    condition: metrics.enabled
  - name: sas
    version: "0.1.2-5g-rc15"
    repository: "https://artifactory.masked-reponame/helm-virtual"
    condition: sas.enabled
    tags:
      - sas-5g
  - name: fluentd-elasticsearch
    version: 3.2.2
    repository: "file://../fluentd-elasticsearch"
    condition: fluentd.enabled
    tags:
      - fluentd-5g
  - name: elasticsearch
    version: 1.32.0
    repository: "https://kubernetes-charts.storage.googleapis.com"
    condition: elasticsearch.enabled
    tags:
      - elasticsearch-5g
  - name: kibana
    version: 3.2.2
    repository: "file://../kibana"
    condition: kibana.enabled
    tags:
      - kibana-5g
  - name: example-provision
    version: 3.2.2
    repository: "file://../example-provision"
    condition: example-provision.enabled
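For context, requirements.yaml is the chart apiVersion: v1 dependency format; in apiVersion: v2 charts the same block lives in Chart.yaml. A minimal sketch of the equivalent v2 layout, showing only the first dependency (chart name and version assumed from the package filename):

# Chart.yaml (apiVersion: v2 sketch)
apiVersion: v2
name: example-5g-core
version: 3.2.2-1
dependencies:
  - name: 5g-core
    version: 3.2.2
    repository: "file://../5g-core"
    condition: 5g-core.enabled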

Test 2:
As a workaround, if we remove the requirements.yaml file and try to create the operator again:

    1. It prints the warnings and failure messages shown below.
    2. It creates the operator folder structure.
    3. When we apply the CR yaml at the end, it deploys all the defaults and does not maintain the filtering/dependencies that Helm enforced via the requirements.yaml file (which we removed as part of this test).

So our requirement is that operator-sdk should be able to convert a Helm package with requirements.yaml into an operator SDK structure (Test 1).

Please provide a way to resolve the dependencies from requirements.yaml while using operator-sdk to create the operator folder.
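One way to confirm that the dependencies themselves are resolvable is Helm's standard dependency commands, run from the unpacked chart directory where the file:// paths resolve (a sketch; directory name mirrors the report):

$ cd example-5g-core
$ helm dependency list    # shows each requirements.yaml entry and its resolution status
$ helm dependency build   # vendors every dependency into charts/ as a .tgz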

Tree of the tar file:
$ tree

.
└── example-5g-core
├── charts
│   ├── 4g-epc
│   │   ├── Chart.yaml
│   │   ├── config
│   │   │   ├── multiple files, names removed
│   │   │   ├── multiple files, names removed
│   │   ├── templates
│   │   │   ├── _helpers.tpl
│   │   │   ├── multiple files, names removed
│   │   │   ├── multiple files, names removed
│   │   └── values.yaml
│   ├── 4g-testbed
│   │   ├── Chart.yaml
│   │   ├── README.md
│   │   ├── templates
│   │   │   ├── multiple files, names removed
│   │   │   ├── multiple files, names removed
│   │   └── values.yaml
│   ├── 5g-core
│   │   ├── Chart.lock
│   │   ├── charts
│   │   │   ├── smf
│   │   │   │   ├── charts
│   │   │   │   │   ├── 5g-lib
│   │   │   │   │   │   ├── Chart.yaml
│   │   │   │   │   │   ├── templates
│   │   │   │   │   │   │   └── _helpers.tpl
│   │   │   │   │   │   └── values.yaml
│   │   │   │   │   ├── astaire
│   │   │   │   │   │   ├── charts
│   │   │   │   │   │   │   └── 5g-lib
│   │   │   │   │   │   │       ├── Chart.yaml
│   │   │   │   │   │   │       ├── templates
│   │   │   │   │   │   │       │   └── _helpers.tpl
│   │   │   │   │   │   │       └── values.yaml
│   │   │   │   │   │   ├── Chart.yaml
│   │   │   │   │   │   ├── templates
│   │   │   │   │   │   │   ├── cluster.yaml
│   │   │   │   │   │   │   └── tests
│   │   │   │   │   │   │       └── test-connection.yaml
│   │   │   │   │   │   └── values.yaml
│   │   │   │   │   └── astaire-operator
│   │   │   │   │       ├── charts
│   │   │   │   │       │   └── 5g-lib
│   │   │   │   │       │       ├── Chart.yaml
│   │   │   │   │       │       ├── templates
│   │   │   │   │       │       │   └── _helpers.tpl
│   │   │   │   │       │       └── values.yaml
│   │   │   │   │       ├── Chart.yaml
│   │   │   │   │       ├── crds
│   │   │   │   │       │   └── astaireclusters.yaml
│   │   │   │   │       ├── templates
│   │   │   │   │       │   ├── deployment.yaml
│   │   │   │   │       │   └── serviceaccount.yaml
│   │   │   │   │       └── values.yaml
│   │   │   │   ├── Chart.yaml
│   │   │   │   ├── README.md
│   │   │   │   ├── templates
│   │   │   │   │   ├── config_map.yaml
│   │   │   │   │   ├── deployment.yaml
│   │   │   │   │   └── service.yaml
│   │   │   │   └── values.yaml
│   │   │   └── upf
│   │   │       ├── Chart.lock
│   │   │       ├── charts
│   │   │       │   └── 5g-lib
│   │   │       │       ├── Chart.yaml
│   │   │       │       ├── templates
│   │   │       │       │   └── _helpers.tpl
│   │   │       │       └── values.yaml
│   │   │       ├── Chart.yaml
│   │   │       ├── templates
│   │   │       │   ├── multiple files, names removed
│   │   │       │   ├── multiple files, names removed
│   │   │       └── values.yaml
│   │   ├── Chart.yaml
│   │   ├── config
│   │   │   ├── multiple files, names removed
│   │   │   ├── multiple files, names removed
│   │   ├── README.md
│   │   ├── templates
│   │   │   ├── multiple files, names removed
│   │   │   ├── multiple files, names removed
│   │   └── values.yaml
│   ├── elasticsearch
│   │   ├── Chart.yaml
│   │   ├── ci
│   │   │   ├── multiple files, names removed
│   │   │   ├── multiple files, names removed
│   │   ├── README.md
│   │   ├── templates
│   │   │   ├── multiple files, names removed
│   │   │   ├── multiple files, names removed
│   │   └── values.yaml
│   ├── fluentd-elasticsearch
│   │   ├── Chart.yaml
│   │   ├── OWNERS
│   │   ├── README.md
│   │   ├── templates
│   │   │   ├── clusterrolebinding.yaml
│   │   └── values.yaml
│   ├── example-provision
│   │   ├── Chart.yaml
│   │   ├── templates
│   │   │   ├── multiple files, names removed
│   │   │   └── _helpers.tpl
│   │   └── values.yaml
│   ├── kibana
│   │   ├── Chart.yaml
│   │   ├── ci
│   │   │   ├── multiple files, names removed
│   │   ├── OWNERS
│   │   ├── README.md
│   │   ├── templates
│   │   │   ├── multiple files, names removed
│   │   └── values.yaml
│   ├── prometheus-adapter
│   │   ├── Chart.yaml
│   │   ├── OWNERS
│   │   ├── README.md
│   │   ├── templates
│   │   │   ├── multiple files, names removed
│   │   └── values.yaml
│   ├── sas
│   │   ├── Chart.yaml
│   │   ├── README.md
│   │   ├── scripts
│   │   │   └── wait_database_init.sh
│   │   ├── templates
│   │   │   ├── multiple files, names removed
│   │   └── values.yaml
│   └── simon-metrics
│       ├── charts
│       │   ├── grafana
│       │   │   ├── Chart.yaml
│       │   │   ├── dashboards
│       │   │   │   └── custom-dashboard.json
│       │   │   ├── README.md
│       │   │   ├── templates
│       │   │   │   ├── multiple files, names removed
│       │   │   └── values.yaml
│       │   └── prometheus
│       │       ├── Chart.yaml
│       │       ├── OWNERS
│       │       ├── README.md
│       │       ├── templates
│       │       │   ├── multiple files, names removed
│       │       └── values.yaml
│       ├── Chart.yaml
│       ├── templates
│       │   ├── config_map_datasource.yaml
│       │   ├── NOTES.txt
│       │   └── prometheus_roles.yaml
│       └── values.yaml
├── Chart.yaml
├── dashbds
│   ├── multiple files, names removed
├── requirements.yaml
├── templates
│   ├── multiple files, names removed
└── values.yaml

Tree of the partial operator folder created:
.
├── example-5g-core-3.2.2-1.tgz
└── example-5g-core
    └── helm-charts
        └── sample-core
            ├── charts
            │   ├── 4g-epc-0.3.0.tgz
            │   ├── 4g-testbed-3.2.2.tgz
            │   ├── 5g-core-3.2.2.tgz
            │   ├── elasticsearch-1.32.0.tgz
            │   ├── fluentd-elasticsearch-3.2.2.tgz
            │   ├── example-provision-3.2.2.tgz
            │   ├── kibana-3.2.2.tgz
            │   ├── prometheus-adapter-3.2.2.tgz
            │   ├── sas-0.1.2-5g-rc15.tgz
            │   └── simon-metrics-0.2.0.tgz
            ├── Chart.yaml
            ├── dashbds
            │   ├── example-core.json
            │   ├── kibana_dashboard.json
            │   └── kube-metrics.json
            ├── requirements.yaml
            ├── templates
            │   ├── dashbd.yaml
            │   ├── NOTES.txt
            │   └── promsvc.yaml
            └── values.yaml

@joelanford
Member

@vmohariya This seems like a duplicate of (or maybe just related to) #2942

One possible workaround would be to go with the test 2 route and then re-introduce your requirements.yaml in <projectDir>/helm-charts/sample-core after using operator-sdk to create the folder structure and generate the default RBAC rules.
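A rough sketch of that sequence, with filenames mirroring the original report (the temporary location and repackaging step are illustrative):

$ tar -xzf example-5g-core-3.2.2-1.tgz
$ mv example-5g-core/requirements.yaml /tmp/     # set the dependency file aside
$ helm package example-5g-core                   # repackage the chart without it
$ operator-sdk new sample-core --type=helm --kind=SampleCore \
    --helm-chart=./example-5g-core-3.2.2-1.tgz --api-version=sample.com/v1
$ cp /tmp/requirements.yaml sample-core/helm-charts/*/   # re-introduce it afterwards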

Here's the issue I filed in the Helm project about this: helm/helm#8120

Also, I would highly recommend you upgrade to v1.0 if you're creating a new project. I don't think it will fix this particular issue for you, but v0.19 and earlier are now deprecated, and we will eventually stop providing fixes on the v0.19 branch.

Lastly, can you provide the output for the operator-sdk new command in your test 2 that you mention has warnings and failed messages?

@vmohariya
Author

These are the warnings I am getting when running the operator-sdk new command without requirements.yaml.

$ operator-sdk new sample-core --type=helm --kind=SampleCore --helm-chart=/home/<>/example-5g-core-3.2.2-1.tgz --api-version=sample.com/v1
INFO[0000] Creating new Helm operator 'sample-core'.
INFO[0000] Created helm-charts/example-5g-core
INFO[0000] Generating RBAC rules
I0826 20:54:56.017870 15332 request.go:621] Throttling request took 1.0469623s, request: GET:https://:6443/apis/config.openshift.io/v1?timeout=32s
WARN[0012] Skipping rule generation for manifest-27. Failed to determine resource apiVersion.
WARN[0012] Skipping rule generation for manifest-30. Failed to determine resource apiVersion.
WARN[0012] Skipping rule generation for manifest-34. Failed to determine resource apiVersion.
WARN[0012] Skipping rule generation for manifest-37. Failed to determine resource apiVersion.
WARN[0012] Skipping rule generation for manifest-41. Failed to determine resource apiVersion.
WARN[0012] Skipping rule generation for manifest-44. Failed to determine resource apiVersion.
WARN[0012] Skipping rule generation for manifest-47. Failed to determine resource apiVersion.
INFO[0012] Scaffolding ClusterRole and ClusterRolebinding for cluster scoped resources in the helm chart
WARN[0012] The RBAC rules generated in deploy/role.yaml are based on the chart's default manifest. Some rules may be missing for resources that are only enabled with custom values, and some existing rules may be overly broad. Double check the rules generated in deploy/role.yaml to ensure they meet the operator's permission requirements.
INFO[0012] Created build/Dockerfile
INFO[0012] Created deploy/service_account.yaml
INFO[0012] Created deploy/role.yaml
INFO[0012] Created deploy/role_binding.yaml
INFO[0012] Created deploy/operator.yaml
INFO[0012] Created deploy/crds/sample.com_v1_samplecore_cr.yaml
INFO[0012] Generated CustomResourceDefinition manifests.
INFO[0012] Project creation complete.

Following the test 2 route, I re-introduced requirements.yaml in <projectDir>/helm-charts/sample-core after using operator-sdk to create the folder structure and generate the default RBAC rules.

Now the pods come up as per the requirements.yaml list. The only difference I have seen is that one additional pod comes up with the operator, whereas it does not come up with Helm.

@joelanford
Member

WARN[0012] Skipping rule generation for manifest-27. Failed to determine resource apiVersion.

These errors are basically saying that the SDK couldn't parse that resource from the default manifest. Without seeing the manifest, one guess is that you have something like this in the output of helm template ./helm-charts/sample-core:

---
apiVersion: v1
kind: Service
spec:
  ...
---
---
apiVersion: apps/v1
kind: Deployment
spec:
  ...

Where there are empty YAML sections for resources that are only rendered when a chart value is changed or set to enabled: true or similar.
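For illustration, a template along these lines (names hypothetical) renders as an empty document at default values, which produces exactly that kind of empty section:

# templates/extra-service.yaml (hypothetical)
{{- if .Values.extraService.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-extra
spec:
  ...
{{- end }}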

If that's the case, those warnings can be safely ignored.

WARN[0012] The RBAC rules generated in deploy/role.yaml are based on the chart's default manifest. Some rules may be missing for resources that are only enabled with custom values, and some existing rules may be overly broad. Double check the rules generated in deploy/role.yaml to ensure they meet the operator's permission requirements.

This warning is ALWAYS printed for Helm RBAC generation. The gist is that the SDK makes a best guess at the RBAC rules the operator will need based on what it finds in the default manifest. But it can't find everything, because it isn't possible to parse the template files directly: the apiVersion and kind values can depend on chart values and other templating logic.
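As an illustration of why static parsing falls short, consider a template like this (hypothetical): the apiVersion only resolves once values are supplied, so no RBAC rule can be derived without rendering the chart:

apiVersion: {{ .Values.hpa.apiVersion | default "autoscaling/v2beta2" }}
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .Release.Name }}-hpa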

@joelanford
Member

Now the pods come up as per the requirements.yaml list. The only difference I have seen is that one additional pod comes up with the operator, whereas it does not come up with Helm.

The only thing I can think of would be if your chart (or subcharts) have templating logic that would cause an extra pod to show up after an upgrade. The Helm operator runs upgrade dry runs during reconciliation and will automatically upgrade if it detects that the dry-run manifest is different from the currently deployed manifest.
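As an example of the kind of logic that could cause this (names hypothetical; .Release.IsUpgrade is a standard Helm built-in object field), a manifest like the following never renders on the initial install but shows up in the operator's upgrade dry run:

# templates/post-upgrade-job.yaml (hypothetical)
{{- if .Release.IsUpgrade }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-post-upgrade
spec:
  ...
{{- end }}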

@joelanford joelanford self-assigned this Aug 31, 2020
@joelanford joelanford added language/helm Issue is related to a Helm operator project triage/needs-information Indicates an issue needs more information in order to work on it. labels Aug 31, 2020
@jberkhahn jberkhahn added this to the Backlog milestone Aug 31, 2020
@openshift-bot

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 29, 2020
@openshift-bot

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 29, 2020
@openshift-bot

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
