Not an issue but request for example? #1174
This documentation example goes exactly into what you're looking to do. The specific part is the `proxmox_virtual_environment_file` resource with a `source_raw` block, which the VM then references from its `initialization` block; a sketch of that pattern follows.
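A minimal sketch of that pattern, based on the full configuration quoted later in this thread (datastore and node names here are placeholders, not values from the docs):

```hcl
# Upload a cloud-init user-data snippet to a Proxmox datastore so the VM
# can reference it by file ID.
resource "proxmox_virtual_environment_file" "cloud_config" {
  content_type = "snippets"
  datastore_id = "local" # placeholder: any datastore that allows snippets
  node_name    = "pve1"  # placeholder node name

  source_raw {
    data = <<-EOF
      #cloud-config
      hostname: example-vm
      packages:
        - qemu-guest-agent
    EOF

    file_name = "cloud-config.yaml"
  }
}
```

The VM then points at it via `initialization { user_data_file_id = proxmox_virtual_environment_file.cloud_config.id }`, as the full configuration later in this thread shows.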
Thanks, I was hoping there was a way without having to upload it. I understand Terraform can do the upload for me, but the file still lives on the Proxmox box. It then becomes hard to determine which node needs the file: say you want 3 instances and you have 5 nodes, but you don't really care where the VM ends up. I think I have some ideas on how to tackle that, though; one option is sketched below.
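One way to sidestep the placement question (my own assumption, not something from the thread) is to upload the same snippet to every node returned by the nodes data source, so whichever node a VM lands on has a local copy:

```hcl
data "proxmox_virtual_environment_nodes" "available_nodes" {}

# Upload the same cloud-config snippet to every node, so a VM can reference
# it regardless of which node it is scheduled on.
resource "proxmox_virtual_environment_file" "cloud_config" {
  for_each = toset(data.proxmox_virtual_environment_nodes.available_nodes.names)

  content_type = "snippets"
  datastore_id = "local" # placeholder: a per-node datastore that allows snippets
  node_name    = each.value

  source_raw {
    data      = file("${path.module}/cloud-config.yaml") # placeholder local file
    file_name = "cloud-config.yaml"
  }
}
```

Each VM would then reference the file keyed by its own node name, e.g. `proxmox_virtual_environment_file.cloud_config[element(data.proxmox_virtual_environment_nodes.available_nodes.names, count.index)].id`. A shared datastore, as suggested later in the thread, avoids the duplication entirely.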
So I tried the example and it doesn't seem to be taking any of the cloud config. Also, if you use `user_account` in the initialization block, it doesn't work together with the user-data. Here is my main.tf:

```hcl
data "proxmox_virtual_environment_nodes" "available_nodes" {}

resource "proxmox_virtual_environment_vm" "virtual_machine" {
  count       = 1
  name        = format("nomadagent%02d.localdomain", count.index + 4)
  node_name   = element(data.proxmox_virtual_environment_nodes.available_nodes.names, count.index)
  tags        = ["terraform"]
  description = "Managed by Terraform."

  agent {
    enabled = true
  }

  initialization {
    datastore_id      = "unraid"
    user_data_file_id = proxmox_virtual_environment_file.cloud_config.id

    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }
  }

  cpu {
    cores = 8
    numa  = true
    type  = "host"
  }

  memory {
    dedicated = 32768
  }

  disk {
    datastore_id = "pve-storage-ssd-ceph"
    file_id      = proxmox_virtual_environment_download_file.centos9_cloud_image.id
    interface    = "scsi0"
    iothread     = false
    discard      = "on"
    size         = 80
    ssd          = true
  }

  network_device {
    bridge = "vmbr0"
  }
}

resource "proxmox_virtual_environment_file" "cloud_config" {
  content_type = "snippets"
  datastore_id = "unraid"
  node_name    = "pve1"

  source_raw {
    data = <<-EOF
      #cloud-config
      password: securepassword
      chpasswd: { expire: False }
      ssh_pwauth: True
      repo_update: true
      repo_upgrade: all
      repos:
        saltstack:
          baseurl: https://repo.saltproject.io/py3/redhat/9/x86_64/latest/
          gpgcheck: true
          gpgkey: https://repo.saltproject.io/py3/redhat/9/x86_64/latest/SALTSTACK-GPG-KEY.pub
          enabled: true
      packages:
        - salt-minion
    EOF

    file_name = "cloud-config.yaml"
  }
}

resource "proxmox_virtual_environment_download_file" "centos9_cloud_image" {
  content_type = "iso"
  datastore_id = "unraid"
  node_name    = "pve1"
  url          = "https://cloud.centos.org/centos/9-stream/x86_64/images/CentOS-Stream-GenericCloud-9-latest.x86_64.qcow2"
  file_name    = "CentOS-Stream-GenericCloud-9-latest.x86_x64.img"
}
```
Hi @coltonshipley! 👋🏼
Unfortunately, Proxmox does not provide an API for that. It can either generate the cloud-init "on-the-fly" from individual parameters (username, password, keys, etc.), or take it as a whole when referenced by a file ID from a datastore. So in a clustered environment a shared datastore (CephFS, NFS, etc.) is the most convenient way to manage that file. The two options look roughly like the sketch below.
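A minimal sketch of the two approaches (attribute names follow the configuration quoted above; node and VM names are placeholders). As noted earlier in the thread, mixing `user_account` with `user_data_file_id` doesn't work, so pick one or the other:

```hcl
# Option 1: let Proxmox generate the cloud-init "on the fly" from individual
# parameters in the initialization block.
resource "proxmox_virtual_environment_vm" "inline_example" {
  name      = "inline-example" # placeholder
  node_name = "pve1"           # placeholder

  initialization {
    user_account {
      username = "devops"                          # placeholder
      keys     = ["ssh-ed25519 AAAA... placeholder"] # placeholder public key
    }

    ip_config {
      ipv4 {
        address = "dhcp"
      }
    }
  }
}

# Option 2: hand Proxmox a complete user-data file by ID, uploaded to a
# datastore (a shared one such as CephFS/NFS in a cluster).
resource "proxmox_virtual_environment_vm" "file_example" {
  name      = "file-example" # placeholder
  node_name = "pve1"         # placeholder

  initialization {
    # References the proxmox_virtual_environment_file "cloud_config"
    # resource shown earlier in this thread.
    user_data_file_id = proxmox_virtual_environment_file.cloud_config.id
  }
}
```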
This is a known limitation of the available PVE API as explained above, documented here.
The cloud-init processing on the VM is solely the responsibility of the OS it bootstraps with. I've seen a number of issues with different versions of CentOS and Ubuntu where the same cloud-init worked on one version but did not work on another. I'm regularly testing the Ubuntu cloud-init example, so I have a bit of confidence that it works. I know other people have successfully used CentOS 8 with cloud-init. How do you check whether cloud-init is applied or not? Could you provide some additional details? One thing I noted, your template has
And lastly, you can check #586 if anything from there is applicable to your use case.
Hey @coltonshipley, do you need any more help with your configuration?
@coltonshipley just wanted to second @bpg that cloud-init support is largely determined by the OS used. Usually Ubuntu plays out the best, due to the roots of cloud-init (AFAICS Canonical spawned cloud-init and is one of the main drivers). But, that said, Proxmox (not the tf provider here) plays a role too and generally does a rather mediocre job. My cloud-init experience with Debian under OpenStack is much better than with Proxmox, simply because OpenStack uses its own tooling based on DHCP and other things (metadata server) to set up networking and SSH keys. All that said,
@bpg, actually, I think I got it figured out. I'm working through some other issues now for my particular use case (I'm totally new to Terraform as well, so I'm learning as I go). However, as of now I did get cloud-init working properly with the CentOS 9 generic cloud image (mostly). I'll tinker some more in the coming days and update.
I'm using this provider in my homelab, with a CentOS 9 generic cloud image to provision VMs.
However, I'm having issues with SSH access for a provisioner. I'd also like to get away from remote-exec, so a cloud-config would be ideal, but I'm not quite sure how to set that up.
I was using this example from HashiCorp - https://github.com/hashicorp/learn-terraform-provisioning/blob/cloudinit/instances/main.tf - but it uses the AWS provider, and this provider doesn't seem to have a `user_data` stanza in the VM resource.
Any guidance would be awesome. Bonus points if I don't have to upload the file to the Proxmox server itself.
Here is a sample of my main.tf to spin up some nomadagent servers. The idea is to use this as a building block for a virtual_machines module after I get this working. Some of the node logic is due to Proxmox not having any sort of DRS feature like VMware.
The remote-exec block is what I'd be looking to replace, maybe with something like this:
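(The snippet that originally followed isn't preserved here; as a rough illustration of the kind of user-data meant, assuming the password-auth setup mentioned next and the salt-minion install from the config quoted above:)

```hcl
locals {
  # Hypothetical user-data to replace the remote-exec steps: enable password
  # auth and install salt-minion at first boot (values are placeholders).
  user_data = <<-EOF
    #cloud-config
    password: changeme # placeholder
    chpasswd: { expire: False }
    ssh_pwauth: True
    packages:
      - salt-minion
  EOF
}
```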
I intend to explore SSH keys more, but for now pwauth is the 'easier' approach internally.