vSphere cloudProvider - failed to get folder #1820

Open
steled opened this issue Mar 17, 2023 · 10 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. kind/feature Categorizes issue or PR as related to a new feature. priority/normal Not that urgent, but is important sig/cluster-management Denotes a PR or issue as being assigned to SIG Cluster Management.

Comments

@steled

steled commented Mar 17, 2023

What happened?

I'm trying to set up a Kubernetes cluster via KubeOne with the vSphere cloudProvider.
The VMs are provisioned via Terraform; see the output of the command terraform output -json > tf.json below:

# tf.json
{
  "kubeone_api": {
    "sensitive": false,
    "type": [
      "object",
      {
        "apiserver_alternative_names": [
          "list",
          "string"
        ],
        "endpoint": "string"
      }
    ],
    "value": {
      "apiserver_alternative_names": [],
      "endpoint": "x.x.x.x"
    }
  },
  "kubeone_hosts": {
    "sensitive": false,
    "type": [
      "object",
      {
        "control_plane": [
          "object",
          {
            "bastion": "string",
            "bastion_host_key": "string",
            "bastion_port": "number",
            "bastion_user": "string",
            "cloud_provider": "string",
            "cluster_name": "string",
            "hostnames": [
              "list",
              "string"
            ],
            "private_address": [
              "tuple",
              []
            ],
            "public_address": [
              "tuple",
              [
                "string",
                "string",
                "string"
              ]
            ],
            "ssh_agent_socket": "string",
            "ssh_hosts_keys": [
              "list",
              "string"
            ],
            "ssh_port": "number",
            "ssh_private_key_file": "string",
            "ssh_user": "string"
          }
        ]
      }
    ],
    "value": {
      "control_plane": {
        "bastion": "",
        "bastion_host_key": null,
        "bastion_port": 22,
        "bastion_user": "",
        "cloud_provider": "vsphere",
        "cluster_name": "kkp-test",
        "hostnames": [
          "kkp-test-cp-1",
          "kkp-test-cp-2",
          "kkp-test-cp-3"
        ],
        "private_address": [],
        "public_address": [
          "x.x.x.x",
          "x.x.x.x",
          "x.x.x.x"
        ],
        "ssh_agent_socket": "env:SSH_AUTH_SOCK",
        "ssh_hosts_keys": null,
        "ssh_port": 22,
        "ssh_private_key_file": "",
        "ssh_user": "kubeone"
      }
    }
  },
  "kubeone_workers": {
    "sensitive": false,
    "type": [
      "object",
      {
        "kkp-test-pool1": [
          "object",
          {
            "providerSpec": [
              "object",
              {
                "annotations": [
                  "object",
                  {
                    "cluster.k8s.io/cluster-api-autoscaler-node-group-max-size": "string",
                    "cluster.k8s.io/cluster-api-autoscaler-node-group-min-size": "string",
                    "k8c.io/operating-system-profile": "string"
                  }
                ],
                "cloudProviderSpec": [
                  "object",
                  {
                    "allowInsecure": "bool",
                    "cluster": "string",
                    "cpus": "number",
                    "datacenter": "string",
                    "datastore": "string",
                    "datastoreCluster": "string",
                    "diskSizeGB": "number",
                    "folder": "string",
                    "memoryMB": "number",
                    "resourcePool": "string",
                    "templateVMName": "string",
                    "vmNetName": "string"
                  }
                ],
                "operatingSystem": "string",
                "operatingSystemSpec": [
                  "object",
                  {
                    "distUpgradeOnBoot": "bool"
                  }
                ],
                "sshPublicKeys": [
                  "tuple",
                  [
                    "string"
                  ]
                ]
              }
            ],
            "replicas": "number"
          }
        ]
      }
    ],
    "value": {
      "kkp-test-pool1": {
        "providerSpec": {
          "annotations": {
            "cluster.k8s.io/cluster-api-autoscaler-node-group-max-size": "2",
            "cluster.k8s.io/cluster-api-autoscaler-node-group-min-size": "2",
            "k8c.io/operating-system-profile": ""
          },
          "cloudProviderSpec": {
            "allowInsecure": true,
            "cluster": "CLUSTER",
            "cpus": 2,
            "datacenter": "DATACENTER",
            "datastore": "DATASTORE",
            "datastoreCluster": "",
            "diskSizeGB": 10,
            "folder": "/Customers/TEST/kubermatic/kubeone",
            "memoryMB": 2048,
            "resourcePool": "Test_pool",
            "templateVMName": "ubuntu-22.04-server-cloudimg-kubeone-amd64",
            "vmNetName": "NETWORK"
          },
          "operatingSystem": "ubuntu",
          "operatingSystemSpec": {
            "distUpgradeOnBoot": false
          },
          "sshPublicKeys": [
            "ecdsa-sha2-nistp521 <REDACTED> kubeone"
          ]
        },
        "replicas": 2
      }
    }
  }
}

When I run kubeone apply -m kubeone.yml -t tf.json -c credentials.yml I get the following error message at the step Creating worker machines...:

WARN[10:30:51 CET] Task failed, error was: kubernetes: creating *v1alpha1.MachineDeployment kube-system/kkp-test-pool1
admission webhook "machinedeployments.machine-controller.kubermatic.io" denied the request: validation failed: failed to get folder "/Customers/TEST/kubermatic/kubeone": folder '/Customers/TEST/kubermatic/kubeone' not found

Expected behavior

I expect that the worker nodes will be created in the specified vSphere folder.

How to reproduce the issue?

Set up the KubeOne VMs via Terraform and use the following value in the terraform.tfvars file:

folder_name = "/Customers/TEST/kubermatic/kubeone"
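
For reference, this is roughly how I understand the folder value ends up being consumed on the Terraform side (a simplified sketch with illustrative names, not the exact module code). The folder argument of vsphere_virtual_machine is resolved relative to the datacenter's VM folder, i.e. the provider effectively prepends /<datacenter>/vm:

# Hypothetical excerpt, not the actual KubeOne Terraform module.
variable "folder_name" {
  description = "VM folder, relative to the datacenter's VM root (/<datacenter>/vm)"
  type        = string
}

resource "vsphere_virtual_machine" "control_plane" {
  # name, resource_pool_id, datastore_id, etc. omitted for brevity

  # The vsphere provider resolves this relative to /<datacenter>/vm, which is
  # why passing the full inventory path here produces the doubled
  # "/DATACENTER/vm/DATACENTER/vm/..." error shown further below.
  folder = var.folder_name
}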

What KubeOne version are you using?

$ kubeone version
{
  "kubeone": {
    "major": "1",
    "minor": "6",
    "gitVersion": "1.6.0",
    "gitCommit": "8b0973ca77856dca920798bbd5ff6b5c0f3f4856",
    "gitTreeState": "",
    "buildDate": "2023-02-23T19:25:26Z",
    "goVersion": "go1.19.6",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "machine_controller": {
    "major": "1",
    "minor": "56",
    "gitVersion": "v1.56.0",
    "gitCommit": "",
    "gitTreeState": "",
    "buildDate": "",
    "goVersion": "",
    "compiler": "",
    "platform": "linux/amd64"
  }
}

Provide your KubeOneCluster manifest here (if applicable)

apiVersion: kubeone.k8c.io/v1beta2
kind: KubeOneCluster
versions:
  kubernetes: '1.24.8'
cloudProvider:
  vsphere: {}
  cloudConfig: |
    [Global]
    secret-name = "vsphere-ccm-credentials"
    secret-namespace = "kube-system"
    port = "443"
    insecure-flag = "1"

    [VirtualCenter "VCENTER"]

    [Workspace]
    server = "SERVER"
    datacenter = "DATACENTER"
    default-datastore="DATASTORE"
    resourcepool-path="Test_pool"
    folder = "kubeone"

    [Disk]
    scsicontrollertype = pvscsi

    [Network]
    public-network = "NETWORK"

What cloud provider are you running on?

VMware vSphere

What operating system are you running in your cluster?

Ubuntu 22.04

Additional information

If I update the value of the key kubeone_workers.value.kkp-test-pool1.providerSpec.cloudProviderSpec.folder in the file tf.json to /DATACENTER/vm/Customers/TEST/kubermatic/kubeone, the worker nodes are created successfully.

I also tried setting the full path for the folder as the value in the terraform.tfvars file (folder_name = "/DATACENTER/vm/Customers/TEST/kubermatic/kubeone").
With this configuration, the Terraform run itself fails with the following message:

vsphere_virtual_machine.control_plane[1]: Creating...
╷
│ Error: folder '/DATACENTER/vm/DATACENTER/vm/Customers/TEST/kubermatic/kubeone' not found
│
│   with vsphere_virtual_machine.control_plane[1],
│   on main.tf line 152, in resource "vsphere_virtual_machine" "control_plane":
│  152: resource "vsphere_virtual_machine" "control_plane" {

To me it looks like the full folder path should be used as the value for the key kubeone_workers.value.kkp-test-pool1.providerSpec.cloudProviderSpec.folder in the tf.json file; a rough sketch of what that could look like on the Terraform side follows.
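
Assuming the machinedeployments webhook resolves the folder by its full vSphere inventory path (/<datacenter>/vm/<folder>), which would explain the behaviour above, a possible workaround is to keep folder_name relative for the VM resources and only expand it for the kubeone_workers output. The local/variable names below (e.g. var.dc_name) are made up for illustration, not taken from the actual module:

# Hypothetical sketch, not the actual KubeOne Terraform module.
locals {
  # vsphere_virtual_machine wants the folder relative to /<datacenter>/vm,
  # so the control plane resources keep using var.folder_name as-is.
  vm_folder_relative = var.folder_name

  # The machinedeployments webhook appears to need the full inventory path.
  # var.folder_name already starts with "/", so this yields e.g.
  # /DATACENTER/vm/Customers/TEST/kubermatic/kubeone
  vm_folder_full = "/${var.dc_name}/vm${var.folder_name}"
}

# local.vm_folder_full would then be set as cloudProviderSpec.folder in the
# kubeone_workers output instead of var.folder_name.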

@steled steled added kind/bug Categorizes issue or PR as related to a bug. sig/cluster-management Denotes a PR or issue as being assigned to SIG Cluster Management. labels Mar 17, 2023
@kubermatic-bot
Contributor

Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubermatic-bot kubermatic-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 15, 2023
@steled
Author

steled commented Jun 20, 2023

/remove-lifecycle stale

@kubermatic-bot kubermatic-bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 20, 2023
@kubermatic-bot
Contributor

Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubermatic-bot kubermatic-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 18, 2023
@xmudrii
Member

xmudrii commented Sep 18, 2023

/remove-lifecycle stale

@kubermatic-bot kubermatic-bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 18, 2023
@kubermatic-bot
Contributor

Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubermatic-bot kubermatic-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 17, 2023
@xmudrii
Member

xmudrii commented Jan 8, 2024

/remove-lifecycle stale

@kubermatic-bot kubermatic-bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 8, 2024
@kubermatic-bot
Contributor

Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@kubermatic-bot kubermatic-bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 11, 2024
@xmudrii
Member

xmudrii commented Jun 11, 2024

/remove-lifecycle stale

@kubermatic-bot kubermatic-bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 11, 2024
@xmudrii xmudrii added the priority/normal Not that urgent, but is important label Jun 24, 2024
@xmudrii
Member

xmudrii commented Jun 27, 2024

/transfer-issue machine-controller

@kubermatic-bot kubermatic-bot transferred this issue from kubermatic/kubeone Jun 27, 2024
@xmudrii
Member

xmudrii commented Jun 27, 2024

/kind feature

@kubermatic-bot kubermatic-bot added the kind/feature Categorizes issue or PR as related to a new feature. label Jun 27, 2024