The second net1 network device cannot be removed from VM #1027

Open
maksimsamt opened this issue May 31, 2024 · 6 comments
Labels: bug, enhancement, upstream (the problem needs to be (partially) fixed in an upstream library)

Comments

@maksimsamt
Contributor

System details:

  • Proxmox VE 8.2.2
  • Terraform v1.8.4
  • terraform-provider-proxmox v3.0.1-rc2

Creating new resources works as expected, two network devices are correctly created.
Initial config (network snippet):

...
resource "proxmox_vm_qemu" "cloud_vm_from_packer_template" {
...
  network {
    model    = "virtio"
    bridge   = "vmbr0"
    tag      = 1110
    firewall = false
  }
  network {
    model    = "virtio"
    bridge   = "vmbr1"
    tag      = 1111
    firewall = false
  }
...
}
...

I then decided to remove the second network device (net1), so there is only one network block left in the config:

...
resource "proxmox_vm_qemu" "cloud_vm_from_packer_template" {
...
  network {
    model    = "virtio"
    bridge   = "vmbr0"
    tag      = 1110
    firewall = false
  }
...
}
...

In the terraform apply output everything looks fine: the second network block is shown as being removed, and the run finishes with Apply complete! Resources: 0 added, 1 changed, 0 destroyed.:

proxmox_vm_qemu.cloud_vm_from_packer_template: Refreshing state... [id=***/qemu/***]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # proxmox_vm_qemu.cloud_vm_from_packer_template will be updated in-place
  ~ resource "proxmox_vm_qemu" "cloud_vm_from_packer_template" {
        id                     = "***/qemu/***"
        name                   = "***"
        tags                   = "***"
        # (65 unchanged attributes hidden)

      - network {
          - bridge    = "vmbr1" -> null
          - firewall  = false -> null
          - link_down = false -> null
          - macaddr   = "****" -> null
          - model     = "virtio" -> null
          - mtu       = 0 -> null
          - queues    = 0 -> null
          - rate      = 0 -> null
          - tag       = 1111 -> null
        }

        # (5 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

proxmox_vm_qemu.cloud_vm_from_packer_template: Modifying... [id=***/qemu/***]
proxmox_vm_qemu.cloud_vm_from_packer_template: Modifications complete after 1s [id=***/qemu/***]

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

But as a result, the second network device net1 still exists in the PVE virtual machine.

When I try to run terraform apply again, it tries to remove this network device again and again with no success.

This network device can only be removed through the Proxmox GUI.

Most likely, the same behavior will happen with the third, fourth, etc. network devices.
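(Editor's note, a possible workaround in addition to the GUI: Proxmox VE also exposes device removal through its qm CLI on the node itself. The VM ID 100 below is a placeholder; this is a sketch of the standard qm usage, not something confirmed in this thread.)

```shell
# Run on the Proxmox VE node; 100 is a placeholder VM ID.
# Removes the net1 device from the VM's configuration.
qm set 100 --delete net1

# Verify the device is gone:
qm config 100 | grep '^net'
```

After removing the device out-of-band, a terraform plan/refresh should reconcile the state with the actual VM configuration.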

@Tinyblargon
Collaborator

@maksimsamt This is part of a bigger issue. Basically, the provider treats many of these settings as lists, but Proxmox has a dedicated slot for each adapter (net0, net1, ...), so we can't convert between them cleanly. This will get rewritten at some point.
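(Editor's note: a minimal illustrative sketch of the mismatch described above, not actual provider code. It shows that a positional list of network blocks only carries slot names implicitly, so removing an entry requires diffing the derived slot maps to learn which slot, here net1, must be explicitly deleted on the VM.)

```python
def list_to_slots(networks):
    """Assign ordered config entries to Proxmox-style slot names by position."""
    return {f"net{i}": cfg for i, cfg in enumerate(networks)}

# Two network blocks in the old config, one in the new config.
old = list_to_slots([{"bridge": "vmbr0"}, {"bridge": "vmbr1"}])
new = list_to_slots([{"bridge": "vmbr0"}])

# The list diff alone only says "one element fewer"; the slot-map diff
# names the device that must actually be deleted from the VM.
removed = set(old) - set(new)
print(removed)  # {'net1'}
```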

@maksimsamt
Contributor Author

@Tinyblargon,
So there is currently no way to handle removing additional network devices through the provider, just manually in the GUI until this is rewritten and fixed in the provider?

@Tinyblargon
Collaborator

@maksimsamt you are correct.

@Tinyblargon
Collaborator

@maksimsamt Telmate/proxmox-api-go#341

@maksimsamt
Contributor Author

maksimsamt commented Jun 13, 2024

> @maksimsamt Telmate/proxmox-api-go#341

Great!
@Tinyblargon ,
Maybe it makes sense to use the same scheme as for disks? For example:

...
resource "proxmox_vm_qemu" "cloud_vm_from_packer_template" {
...
  networks = {
    network0 = {
      model    = "virtio"
      bridge   = "vmbr0"
      tag      = 1110
      firewall = false
    }
    network1 = {
      model    = "virtio"
      bridge   = "vmbr1"
      tag      = 1111
      firewall = false
    }
  }
...
}
...

@Tinyblargon
Collaborator

Probably gonna do both approaches from the start.

@Tinyblargon added the "upstream" label (the problem needs to be (partially) fixed in an upstream library) on Sep 10, 2024