Not an issue but request for example? #1174

Closed
coltonshipley opened this issue Mar 30, 2024 · 7 comments
Labels
❓ question Further information is requested

Comments

@coltonshipley

I'm using this provider in my homelab, provisioning VMs from a CentOS 9 generic cloud image.
However, I'm having issues with SSH access for a provisioner. I'd also like to move away from remote-exec, so a cloud-config would be ideal, but I'm not quite sure how to set that up.

I was using this example from HashiCorp (https://github.com/hashicorp/learn-terraform-provisioning/blob/cloudinit/instances/main.tf), but it uses the AWS provider, and this provider doesn't seem to have a "user_data" stanza in the VM resource.

Any guidance would be awesome. Bonus points if I don't have to upload the file to the Proxmox server itself.

Here is a sample of my main.tf to spin up some nomadagent servers. The idea is to use this as a building block for a virtual_machines module once I get this working. Some of the node logic is there because Proxmox doesn't have any DRS-like features the way VMware does.

data "proxmox_virtual_environment_nodes" "available_nodes" {}

resource "proxmox_virtual_environment_vm" "virtual_machine" {
  count = 1
  name        = format("nomadagent%02d.localdomain", count.index + 4)
  node_name   = element(data.proxmox_virtual_environment_nodes.available_nodes.names, count.index)
  tags        = ["terraform"]
  description = "Managed by Terraform."

  agent {
    enabled = true
  }

  initialization {
    datastore_id = "pve-storage-ssd-ceph"

    user_account {
      username = "root"
      password = "securepassword"
    }

    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }
  }

  cpu {
    cores = 8
    numa  = true
    type  = "host"    
  }

  memory {
    dedicated = 32768
  }  

  disk {
    datastore_id = "pve-storage-ssd-ceph"
    file_id      = proxmox_virtual_environment_download_file.centos9_cloud_image.id
    interface    = "scsi0"
    iothread     = false
    discard      = "on"
    size         = 80
    ssd          = true
  }

  network_device {
    bridge = "vmbr0"
  }  

  provisioner "remote-exec" {
    inline = [
      "sudo rpm --import https://repo.saltproject.io/salt/py3/redhat/9/x86_64/3007/SALTSTACK-GPG-KEY2.pub",
      "curl -fsSL https://repo.saltproject.io/salt/py3/redhat/9/x86_64/3007.repo | sudo tee /etc/yum.repos.d/salt.repo",
      "dnf install -y salt-minion"
    ]

    connection {
      type     = "ssh"
      user     = "root"
      # reference this VM through "self" inside its own provisioner
      # to avoid a self-referential dependency cycle
      host     = self.ipv4_addresses[1][0]
      password = "securepassword"
      timeout  = "5m"
      agent    = false
    }
  }
}

resource "tls_private_key" "root" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "proxmox_virtual_environment_download_file" "centos9_cloud_image" {
  content_type = "iso"
  datastore_id = "unraid"
  node_name    = "pve1"
  url          = "https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2"
  file_name    = "CentOS-Stream-GenericCloud-9-latest.x86_x64.img"
}

The remote-exec block is what I'd be looking to replace with something like this:

#cloud-config

password: securepassword
chpasswd: {expire: False}
ssh_pwauth: True

# Add the SaltStack repository
repo_update: true
repo_upgrade: all
repos:
  saltstack:
    baseurl: https://repo.saltproject.io/py3/redhat/9/x86_64/latest/
    gpgcheck: true
    gpgkey: https://repo.saltproject.io/py3/redhat/9/x86_64/latest/SALTSTACK-GPG-KEY.pub
    enabled: true

# Install Salt Minion
packages:
  - salt-minion

I intend to explore SSH keys further, but for now password auth is the 'easier' approach internally.
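
For reference, a minimal sketch of what a key-based login could look like with this provider, reusing the otherwise-unused tls_private_key.root resource from the config above; the user_account block accepts a keys list of public keys, and the username and datastore here are the ones from the sample (untested against this exact image):

  initialization {
    datastore_id = "pve-storage-ssd-ceph"

    user_account {
      username = "root"
      # public half of the tls_private_key.root key pair declared above
      keys     = [trimspace(tls_private_key.root.public_key_openssh)]
    }

    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }
  }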

@resnostyle
Copy link

resnostyle commented Mar 31, 2024

This documentation example goes into exactly what you're looking to do:
https://registry.terraform.io/providers/bpg/proxmox/latest/docs/guides/cloud-init
I can't get the formatting right, but you shouldn't have to upload the cloud config yourself; Terraform can do it all.

The specific parts are these:

initialization {
  ip_config {
    ipv4 {
      address = "dhcp"
    }
  }

  user_data_file_id = proxmox_virtual_environment_file.cloud_config.id
}

resource "proxmox_virtual_environment_file" "cloud_config" {
  content_type = "snippets"
  datastore_id = "local"
  node_name    = "pve"

  source_raw {
    data = <<EOF
#cloud-config
users:
  - default
  - name: ubuntu
    groups:
      - sudo
    shell: /bin/bash
    ssh_authorized_keys:
      - ${trimspace(data.local_file.ssh_public_key.content)}
    sudo: ALL=(ALL) NOPASSWD:ALL
runcmd:
  - apt update
  - apt install -y qemu-guest-agent net-tools
  - timedatectl set-timezone America/Toronto
  - systemctl enable qemu-guest-agent
  - systemctl start qemu-guest-agent
  - echo "done" > /tmp/cloud-config.done
EOF

    file_name = "cloud-config.yaml"
  }
}

@coltonshipley
Author

Thanks, I was hoping there was a way without having to upload it. I understand Terraform can do the upload for me, but the file still lives on the Proxmox box. It then becomes hard to determine which node needs the file: let's say you want 3 instances and you have 5 nodes, but you don't really care where each VM ends up. I think I might have ideas on how to tackle that, though.
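
One way to tackle it, as a rough sketch (untested; the "local" datastore_id and the local cloud-config.yaml path are placeholders): upload the snippet once per node with for_each, then have each VM reference the copy on its own node.

resource "proxmox_virtual_environment_file" "cloud_config" {
  for_each = toset(data.proxmox_virtual_environment_nodes.available_nodes.names)

  content_type = "snippets"
  datastore_id = "local"    # placeholder: any per-node datastore that allows snippets
  node_name    = each.value

  source_raw {
    data      = file("${path.module}/cloud-config.yaml") # placeholder local file
    file_name = "cloud-config.yaml"
  }
}

# and inside the VM resource, pick the copy on the node the VM landed on:
#   user_data_file_id = proxmox_virtual_environment_file.cloud_config[
#     element(data.proxmox_virtual_environment_nodes.available_nodes.names, count.index)
#   ].id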

@coltonshipley
Author

coltonshipley commented Apr 1, 2024

So I tried the example, and the VM doesn't seem to be picking up any of the cloud config. Also, to add to this: if you use user_account in the initialization block, it doesn't work together with the user data file.

Here is my main.tf

data "proxmox_virtual_environment_nodes" "available_nodes" {}

resource "proxmox_virtual_environment_vm" "virtual_machine" {
  count = 1
  name        = format("nomadagent%02d.localdomain", count.index + 4)
  node_name   = element(data.proxmox_virtual_environment_nodes.available_nodes.names, count.index)
  tags        = ["terraform"]
  description = "Managed by Terraform."

  agent {
    enabled = true
  }

  initialization {
    datastore_id = "unraid"

    user_data_file_id = proxmox_virtual_environment_file.cloud_config.id

    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }
  }

  cpu {
    cores = 8
    numa  = true
    type  = "host"    
  }

  memory {
    dedicated = 32768
  }  

  disk {
    datastore_id = "pve-storage-ssd-ceph"
    file_id      = proxmox_virtual_environment_download_file.centos9_cloud_image.id
    interface    = "scsi0"
    iothread     = false
    discard      = "on"
    size         = 80
    ssd          = true
  }

  network_device {
    bridge = "vmbr0"
  }  
}

resource "proxmox_virtual_environment_file" "cloud_config" {
  content_type = "snippets"
  datastore_id = "unraid"
  node_name    = "pve1"

  source_raw {
    data = <<EOF
#cloud-config
password: securepassword
chpasswd: { expire: False }
ssh_pwauth: True

repo_update: true
repo_upgrade: all
repos:
  saltstack:
    baseurl: https://repo.saltproject.io/py3/redhat/9/x86_64/latest/
    gpgcheck: true
    gpgkey: https://repo.saltproject.io/py3/redhat/9/x86_64/latest/SALTSTACK-GPG-KEY.pub
    enabled: true

packages:
  - salt-minion
    EOF

    file_name = "cloud-config.yaml"
  }
}

resource "proxmox_virtual_environment_download_file" "centos9_cloud_image" {
  content_type = "iso"
  datastore_id = "unraid"
  node_name    = "pve1"
  url          = "https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2"
  file_name    = "CentOS-Stream-GenericCloud-9-latest.x86_x64.img"
}

@bpg
Owner

bpg commented Apr 2, 2024

Hi @coltonshipley! 👋🏼

Thanks, I was hoping there was a way without having to upload it. I understand Terraform can do the upload for me, but the file still lives on the Proxmox box. It then becomes hard to determine which node needs the file: let's say you want 3 instances and you have 5 nodes, but you don't really care where each VM ends up. I think I might have ideas on how to tackle that, though.

Unfortunately, Proxmox does not provide an API for that. It can either generate the cloud-init data "on the fly" from individual parameters (username, password, keys, etc.), or take it as a whole when referenced by a file ID from a datastore. So in a clustered environment, a shared datastore (CephFS, NFS, etc.) is the most convenient way to manage that file.
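
A sketch of that shared-datastore approach (assuming a hypothetical cluster-wide datastore named "shared-nfs" with the snippets content type enabled, and a placeholder local cloud-config.yaml): a single upload is then reachable from every node, so the VM can land anywhere.

resource "proxmox_virtual_environment_file" "cloud_config" {
  content_type = "snippets"
  datastore_id = "shared-nfs" # placeholder: any shared datastore that allows snippets
  node_name    = "pve1"       # node used for the upload; the file is visible cluster-wide

  source_raw {
    data      = file("${path.module}/cloud-config.yaml") # placeholder local file
    file_name = "cloud-config.yaml"
  }
}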

Also, to add to this: if you use user_account in the initialization block, it doesn't work together with the user data file.

This is a known limitation of the available PVE API, as explained above; it is documented here.

So I tried the example, and the VM doesn't seem to be picking up any of the cloud config.

Cloud-init processing on the VM is solely the responsibility of the OS it bootstraps with. I've seen a number of issues with different versions of CentOS and Ubuntu where the same cloud-init config worked on one version but not on another.

I'm regularly testing the Ubuntu cloud-init example, so I have some confidence that it works. I know other people have successfully used CentOS 8 with cloud-init.

How do you check whether cloud-init is applied or not? Could you provide some additional details?
Perhaps check the cloud-init logs on the VM to see if there is anything relevant there.

One thing I noticed: your template has the agent enabled, which means the OS you're provisioning must have the QEMU guest agent package installed, but it is missing from your cloud-init config.
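
For example (a sketch only, extending the cloud-config shown earlier in this thread), the guest agent could be installed and started alongside the Salt minion:

#cloud-config
packages:
  - qemu-guest-agent
  - salt-minion

runcmd:
  - systemctl enable --now qemu-guest-agent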

And lastly, you can check #586 to see if anything there is applicable to your use case.

@bpg bpg added the ❓ question Further information is requested label Apr 2, 2024
@bpg
Owner

bpg commented Apr 9, 2024

Hey @coltonshipley, do you need any more help with your configuration?

@bpg bpg added the ⌛ pending author's response Requested additional information from the reporter label Apr 9, 2024
@EugenMayer

@coltonshipley just wanted to second @bpg that cloud-init support is largely determined by the OS used.

Usually Ubuntu works out best, due to the roots of cloud-init (AFAICS Canonical spawned cloud-init and is one of its main drivers).
For example Debian, even though Ubuntu is based on it, does not support half of the cloud-init features; network and even user and SSH key management fail to work properly.

That said, Proxmox (not the TF provider here) plays a role too and generally does a rather mediocre job. My cloud-init experience with Debian under OpenStack is much better than with Proxmox, simply because OpenStack uses its own tooling based on DHCP and other mechanisms (a metadata server) to handle networking and SSH keys.

All that said, terraform-provider-proxmox is entirely uninvolved in that process. I've been trying cloud-init on Proxmox for years now, and it has slowly gotten better.

@coltonshipley
Author

@bpg ,

Actually, I think I got it figured out. I'm working through some other issues now for my particular use case (I'm totally new to Terraform as well, so I'm learning as I go). However, as of now I did get cloud-init working properly with the CentOS 9 generic cloud image (mostly). I'll tinker some more in the coming days and update.

@bpg bpg removed the ⌛ pending author's response Requested additional information from the reporter label Apr 12, 2024
@bpg bpg closed this as completed Apr 15, 2024