operator sdk FATA[0001] failed to create helm chart: failed to fetch chart dependencies from requirements.yaml #3785
@vmohariya This seems like a duplicate of (or at least related to) #2942. One possible workaround would be to go with the Test 2 route and re-introduce your requirements.yaml in the generated helm-charts directory afterwards.

Here's the issue I filed in the Helm project about this: helm/helm#8120

Also, I would highly recommend you upgrade to v1.0 if you're creating a new project. I don't think it will fix this particular issue for you, but v0.19 and earlier are now deprecated, and we will eventually stop providing fixes on the v0.19 branch.

Lastly, can you provide the output of the `operator-sdk new` command?
These are the warnings I am getting while running the `operator-sdk new` command without requirements.yaml:

$ operator-sdk new sample-core --type=helm --kind=SampleCore --helm-chart=/home/<>/example-5g-core-3.2.2-1.tgz --api-version=sample.com/v1

Following the Test 2 route, I re-introduced requirements.yaml in /helm-charts/sample-core after using operator-sdk to create the folder structure and generate the default RBAC rules. Now the pods come up as listed in requirements.yaml. The only difference I have seen is that one additional pod comes up with the operator that does not come up with plain Helm.
These errors are basically saying that the SDK couldn't parse that resource from the default manifest. Without seeing the manifest, one guess is that you have something like this in the rendered output:

```yaml
---
apiVersion: v1
kind: Service
spec:
  ...
---
---
apiVersion: apps/v1
kind: Deployment
spec:
  ...
```

where the empty YAML documents come from sections that are only rendered when a chart value is enabled. If that's the case, those warnings can be safely ignored.
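To illustrate how such empty documents arise, here is a hypothetical chart fragment (not taken from the reporter's chart): a template wrapped in a condition renders to an empty document when the value is disabled, leaving a bare `---` separator in the combined manifest.

```yaml
# templates/metrics-service.yaml (hypothetical example)
{{- if .Values.metrics.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-metrics
spec:
  ports:
    - port: 9090
{{- end }}
```

With `metrics.enabled: false`, this file contributes nothing but its document separator, which is what the SDK then fails to parse as a resource.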
This warning is ALWAYS printed for Helm RBAC generation. The gist is that the SDK makes a best guess at the RBAC rules the operator will need based on what it finds in the default manifest. But it can't find everything, because the raw template files can't be parsed directly before they are rendered. You should review the generated role and add any rules the SDK missed.
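When the SDK misses a resource, the usual fix is to extend the generated role by hand. A sketch under the v0.19 project layout; the rule contents below are hypothetical, not derived from the reporter's chart:

```yaml
# deploy/role.yaml -- append any rules the SDK could not infer
- apiGroups:
    - monitoring.coreos.com
  resources:
    - servicemonitors
  verbs:
    - "*"
```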
The only thing I can think of would be if your chart (or its subcharts) has templating logic that causes an extra pod to show up after an upgrade. The Helm operator runs upgrade dry runs during reconciliation and will automatically upgrade if it detects that the dry-run manifest differs from the currently deployed manifest.
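A common example of such drift-inducing logic (a hypothetical template, not from the reporter's chart) is a manifest that embeds a random value or timestamp; every dry run then renders differently and triggers another upgrade:

```yaml
# templates/secret.yaml (hypothetical example of drift-inducing logic)
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-token
  annotations:
    # re-rendered on every reconcile, so each dry run differs
    rollme: {{ randAlphaNum 8 | quote }}
stringData:
  token: {{ randAlphaNum 32 | quote }}
```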
@openshift-bot: Closing this issue.
Test-1:
operator-sdk v0.19.2 fails to create a new operator when passed --helm-chart=//.tgz (path redacted). With the same package .tgz, Helm is able to deploy the application on the Kubernetes cluster.
Versions used:

```console
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal

$ operator-sdk version
operator-sdk version: "v0.19.2", commit: "4282ce9acdef6d7a1e9f90832db4dc5a212ae850", kubernetes version: "v1.18.2", go version: "go1.13.10 linux/amd64"

$ helm version
version.BuildInfo{Version:"v3.1.3", GitCommit:"0a9a9a88e8afd6e77337a3e2ad744756e191429a", GitTreeState:"clean", GoVersion:"go1.13.10"}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.1", GitCommit:"b9b84e0", GitTreeState:"clean", BuildDate:"2020-04-26T20:16:35Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
```
Command used:

```console
$ operator-sdk new sample-core --type=helm --kind=SampleCore --helm-chart=/home//example-5g-core-3.2.2-1.tgz --api-version=sample.com/v1
```

Error:

```console
INFO[0000] Creating new Helm operator 'sample-core'.
FATA[0001] failed to create helm chart: failed to fetch chart dependencies: directory /mnt/c/Users//sample-core/helm-charts/5g-core not found
```
Whereas with the same Helm package, Helm is able to deploy the application. Our observation is that the Helm package contains a requirements.yaml file which operator-sdk seems unable to resolve into dependencies while creating the operator, whereas the same requirements file is processed by Helm and the application gets deployed.

Providing here the contents of requirements.yaml:
```yaml
dependencies:
  - name: 5g-core
    version: 3.2.2
    repository: "file://../5g-core"
    condition: 5g-core.enabled
  - name: 4g-epc
    version: "0.3.0"
    repository: "https://artifactory.masked-reponame.com/helm-virtual"
    condition: 4g-epc.enabled
  - name: prometheus-adapter
    version: 3.2.2
    repository: "file://../prometheus-adapter"
    condition: 5g-core.upf.upfAutoscaling.enabled
  - name: simon-metrics
    alias: metrics
    version: "0.2.0"
    repository: "https://artifactory.masked-reponame.com/helm-virtual"
    condition: metrics.enabled
  - name: sas
    version: "0.1.2-5g-rc15"
    repository: "https://artifactory.masked-reponame/helm-virtual"
    condition: sas.enabled
    tags:  # values elided in the original report
  - name: fluentd-elasticsearch
    version: 3.2.2
    repository: "file://../fluentd-elasticsearch"
    condition: fluentd.enabled
    tags:  # values elided in the original report
  - name: elasticsearch
    version: 1.32.0
    repository: "https://kubernetes-charts.storage.googleapis.com"
    condition: elasticsearch.enabled
    tags:  # values elided in the original report
  - name: kibana
    version: 3.2.2
    repository: "file://../kibana"
    condition: kibana.enabled
    tags:  # values elided in the original report
  - name: example-provision
    version: 3.2.2
    repository: "file://../example-provision"
    condition: example-provision.enabled
```
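One way to pre-resolve dependencies like these (a sketch; paths are placeholders, and the `file://` entries still need their referenced sibling directories to exist) is to let Helm vendor them into charts/ first, then point operator-sdk at the unpacked chart directory:

```console
$ tar -xzf example-5g-core-3.2.2-1.tgz
$ helm dependency update ./example-5g-core   # packages requirements.yaml entries into charts/
$ operator-sdk new sample-core --type=helm --kind=SampleCore \
    --helm-chart=./example-5g-core --api-version=sample.com/v1
```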
Test-2:
As a workaround, we removed the requirements.yaml file and tried to create the operator again, which gets further (see the tree of the partially created operator folder below).
Our requirement is that operator-sdk should be able to convert a Helm package with requirements.yaml into an operator project structure (Test-1).
Please provide a way to resolve the dependencies from requirements.yaml while using operator-sdk to create the operator folder.
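The Test-2 workaround, as a command sketch (paths are placeholders based on the description above):

```console
$ tar -xzf example-5g-core-3.2.2-1.tgz
$ mv example-5g-core/requirements.yaml /tmp/          # set the dependency file aside
$ operator-sdk new sample-core --type=helm --kind=SampleCore \
    --helm-chart=./example-5g-core --api-version=sample.com/v1
$ mv /tmp/requirements.yaml sample-core/helm-charts/sample-core/   # re-introduce it afterwards
```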
Tree of the extracted tar file:
$ tree
.
└── example-5g-core
├── charts
│ ├── 4g-epc
│ │ ├── Chart.yaml
│ │ ├── config
│ │ │ ├── (multiple files, names elided)
│ │ ├── templates
│ │ │ ├── _helpers.tpl
│ │ │ ├── (multiple files, names elided)
│ │ └── values.yaml
│ ├── 4g-testbed
│ │ ├── Chart.yaml
│ │ ├── README.md
│ │ ├── templates
│ │ │ ├── (multiple files, names elided)
│ │ └── values.yaml
│ ├── 5g-core
│ │ ├── Chart.lock
│ │ ├── charts
│ │ │ ├── smf
│ │ │ │ ├── charts
│ │ │ │ │ ├── 5g-lib
│ │ │ │ │ │ ├── Chart.yaml
│ │ │ │ │ │ ├── templates
│ │ │ │ │ │ │ └── _helpers.tpl
│ │ │ │ │ │ └── values.yaml
│ │ │ │ │ ├── astaire
│ │ │ │ │ │ ├── charts
│ │ │ │ │ │ │ └── 5g-lib
│ │ │ │ │ │ │ ├── Chart.yaml
│ │ │ │ │ │ │ ├── templates
│ │ │ │ │ │ │ │ └── _helpers.tpl
│ │ │ │ │ │ │ └── values.yaml
│ │ │ │ │ │ ├── Chart.yaml
│ │ │ │ │ │ ├── templates
│ │ │ │ │ │ │ ├── cluster.yaml
│ │ │ │ │ │ │ └── tests
│ │ │ │ │ │ │ └── test-connection.yaml
│ │ │ │ │ │ └── values.yaml
│ │ │ │ │ └── astaire-operator
│ │ │ │ │ ├── charts
│ │ │ │ │ │ └── 5g-lib
│ │ │ │ │ │ ├── Chart.yaml
│ │ │ │ │ │ ├── templates
│ │ │ │ │ │ │ └── _helpers.tpl
│ │ │ │ │ │ └── values.yaml
│ │ │ │ │ ├── Chart.yaml
│ │ │ │ │ ├── crds
│ │ │ │ │ │ └── astaireclusters.yaml
│ │ │ │ │ ├── templates
│ │ │ │ │ │ ├── deployment.yaml
│ │ │ │ │ │ └── serviceaccount.yaml
│ │ │ │ │ └── values.yaml
│ │ │ │ ├── Chart.yaml
│ │ │ │ ├── README.md
│ │ │ │ ├── templates
│ │ │ │ │ ├── config_map.yaml
│ │ │ │ │ ├── deployment.yaml
│ │ │ │ │ └── service.yaml
│ │ │ │ └── values.yaml
│ │ │ └── upf
│ │ │ ├── Chart.lock
│ │ │ ├── charts
│ │ │ │ └── 5g-lib
│ │ │ │ ├── Chart.yaml
│ │ │ │ ├── templates
│ │ │ │ │ └── _helpers.tpl
│ │ │ │ └── values.yaml
│ │ │ ├── Chart.yaml
│ │ │ ├── templates
│ │ │ │ ├── (multiple files, names elided)
│ │ │ └── values.yaml
│ │ ├── Chart.yaml
│ │ ├── config
│ │ │ ├── (multiple files, names elided)
│ │ ├── README.md
│ │ ├── templates
│ │ │ ├── (multiple files, names elided)
│ │ └── values.yaml
│ ├── elasticsearch
│ │ ├── Chart.yaml
│ │ ├── ci
│ │ │ ├── (multiple files, names elided)
│ │ ├── README.md
│ │ ├── templates
│ │ │ ├── (multiple files, names elided)
│ │ └── values.yaml
│ ├── fluentd-elasticsearch
│ │ ├── Chart.yaml
│ │ ├── OWNERS
│ │ ├── README.md
│ │ ├── templates
│ │ │ ├── clusterrolebinding.yaml
Tree of the partially created operator folder (Test-2):
.
├── example-5g-core-3.2.2-1.tgz
└── example-5g-core
└── helm-charts
└── sample-core
├── charts
│ ├── 4g-epc-0.3.0.tgz
│ ├── 4g-testbed-3.2.2.tgz
│ ├── 5g-core-3.2.2.tgz
│ ├── elasticsearch-1.32.0.tgz
│ ├── fluentd-elasticsearch-3.2.2.tgz
│ ├── example-provision-3.2.2.tgz
│ ├── kibana-3.2.2.tgz
│ ├── prometheus-adapter-3.2.2.tgz
│ ├── sas-0.1.2-5g-rc15.tgz
│ └── simon-metrics-0.2.0.tgz
├── Chart.yaml
├── dashbds
│ ├── example-core.json
│ ├── kibana_dashboard.json
│ └── kube-metrics.json
├── requirements.yaml
├── templates
│ ├── dashbd.yaml
│ ├── NOTES.txt
│ └── promsvc.yaml
└── values.yaml