
Terraform is not keeping azurerm_virtual_machine.storage_os_disk.image_uri properly #838

Closed
dstori opened this issue Feb 14, 2018 · 17 comments · Fixed by #1799

Comments

@dstori

dstori commented Feb 14, 2018

Terraform Version

Terraform v0.11.3

  • provider.azurerm v1.1.1

Affected Resource(s)

  • azurerm_virtual_machine

Terraform Configuration Files

provider "azurerm" {
  version = ">= 1.1.1"
  subscription_id = "x"
  client_id       = "x"
  client_secret   = "x"
  tenant_id       = "x"
}

resource "azurerm_resource_group" "test_rg" {
  name     = "test-rg"
  location = "Brazil South"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "vnet-test"
  location            = "Brazil South"
  address_space       = ["10.1.0.0/16"]
  resource_group_name = "${azurerm_resource_group.test_rg.name}"
}

resource "azurerm_subnet" "subnet" {
  name                 = "subnet-test"
  resource_group_name  = "test-rg"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
  address_prefix       = "10.1.1.0/24"
}

resource "azurerm_network_interface" "test_ni" {
  name                      = "test-ni"
  location                  = "Brazil South"
  resource_group_name       = "${azurerm_resource_group.test_rg.name}"
  ip_configuration {
    name                          = "test-config"
    private_ip_address_allocation = "dynamic"
    subnet_id                     = "${azurerm_subnet.subnet.id}"    
  }
}

resource "azurerm_virtual_machine" "teste_instance" {
  name                             = "test-virtual-machine"
  location                         = "Brazil South"
  resource_group_name              = "${azurerm_resource_group.test_rg.name}"
  network_interface_ids            = ["${azurerm_network_interface.test_ni.id}"]
  vm_size                          = "Standard_DS2_V2"
  delete_os_disk_on_termination    = true
  delete_data_disks_on_termination = false

  os_profile {
    computer_name  = "teste-computer"
    admin_username = "testUser"
    admin_password = "Test1234@Abcd"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  storage_os_disk {
    name          = "test-local-disk"
    image_uri     = "https://xxxxxxxx.blob.core.windows.net/system/Microsoft.Compute/Images/images/test.vhd"
    vhd_uri       = "https://xxxxxxxx.blob.core.windows.net/vhds/test.vhd"
    create_option = "FromImage"
    os_type       = "Linux"
  }
}

Expected Behavior

When you execute terraform apply, Terraform correctly previews the storage_os_disk.0.image_uri state:

storage_os_disk.0.image_uri:  "https://xxxx.blob.core.windows.net/system/Microsoft.Compute/Images/images/xxxx.vhd"

Then you confirm with yes and all the resources are created properly.
When you execute terraform show, storage_os_disk.0.image_uri should be listed with the desired value.

Actual Behavior

When you execute terraform show, storage_os_disk.0.image_uri is empty:

storage_os_disk.0.image_uri =

This means that future executions of terraform apply will try to update the resource, even though there is nothing to update.

Steps to Reproduce

Create a Terraform file like the example above, then (a quick check of the resulting state is sketched below the list):

  1. terraform apply
  2. terraform show
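
A quick way to confirm the dropped attribute after the apply (a sketch; the grep simply filters for the attribute quoted in Actual Behavior below):

terraform apply
terraform show | grep image_uri
# prints: storage_os_disk.0.image_uri =    (empty, see Actual Behavior)

# A follow-up plan then reports a change even though nothing was modified:
terraform plan
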
@kcheemz

kcheemz commented Feb 22, 2018

I am also seeing this behavior.

@achandmsft achandmsft added this to the 1.4.0 milestone Mar 8, 2018
@achandmsft achandmsft added M1 and removed M1 labels Mar 10, 2018
@marcinbojko

Also observed in Terraform 0.11.5 with azurerm v1.3.0.

@jakubigla

I'm also experiencing this behaviour, which is annoying and complicates my life big time.
Is there any hack that prevents the VM rebuild?

@shtratos

shtratos commented Apr 9, 2018

@jakubigla

You can just mark the property as ignored:

resource "azurerm_virtual_machine" "virtual_machine" {
    ...
    lifecycle {
        ignore_changes = [
            "storage_os_disk" // Terraform tries to change empty storage_os_disk.0.image_uri on every provisioning.
            // To avoid this we just ignore the changes to this attribute when diffing.
        ]
    }
}

@jakubigla

jakubigla commented Apr 9, 2018

Amazing. Thanks for the quick answer, this is exactly the type of hack I was expecting.

@jakubigla

But then I experienced the following limitation:
https://social.msdn.microsoft.com/Forums/en-US/6a205f68-cfe3-4a90-9e76-828fc884c37a/arm-create-vm-from-custom-image-error-due-to-source-and-destination-storage-accounts-for-disk?forum=WAVirtualMachinesforWindows

Haha, now my journey with Packer may be over very soon.
Imagine you have a hundred subscriptions and you make a small change to the golden image.

@shtratos

shtratos commented Apr 9, 2018

In our current setup we reprovision all VMs whenever the golden image changes. AFAIK there's no way to do that without destroying the VMs.

We use a script like this for a rolling update:

#!/usr/bin/env bash

set -eu 

setup-remote-state.sh prod

VM_COUNT=${1:-20}
echo "Number of VMs in the cluster: $VM_COUNT"

# Bump number of VMs temporarily so number of available VMs is the same
# This is to avoid affecting currently running jobs
BUMPED_VM_COUNT=$(($VM_COUNT+1))
terraform apply -var-file=env-prod.tfvars -var vm_count=$BUMPED_VM_COUNT

LAST_VM_INDEX=$(($VM_COUNT-1))
for i in $(seq 0 ${LAST_VM_INDEX})
do
  echo "azurerm_virtual_machine.virtual_machine.$i"
  terraform taint "azurerm_virtual_machine.virtual_machine.$i"
  terraform apply -var-file=env-prod.tfvars -var vm_count=${VM_COUNT}
done

# decrease number of VMs back to original
terraform apply -var-file=env-prod.tfvars -var vm_count=${VM_COUNT}

echo "Done."

Obviously it can be modified to taint and reprovision VMs in batches if you have a lot of machines; see the sketch below.
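
A rough sketch of such a batched variant (BATCH_SIZE and the env-prod.tfvars file name are assumptions; the resource address matches the script above):

#!/usr/bin/env bash
# Sketch only: taint and reprovision VMs in batches instead of one at a time.
set -eu

VM_COUNT=${1:-20}
BATCH_SIZE=${2:-5}

for start in $(seq 0 ${BATCH_SIZE} $((VM_COUNT - 1))); do
  end=$((start + BATCH_SIZE - 1))
  if [ ${end} -ge ${VM_COUNT} ]; then
    end=$((VM_COUNT - 1))
  fi

  # Taint every VM in the current batch...
  for i in $(seq ${start} ${end}); do
    terraform taint "azurerm_virtual_machine.virtual_machine.${i}"
  done

  # ...then a single apply recreates the whole batch at once.
  terraform apply -var-file=env-prod.tfvars -var vm_count=${VM_COUNT}
done

Larger batches mean fewer applies but more capacity temporarily out of service, so pick the batch size to match how much headroom the cluster has.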

@jakubigla

That's fine.
The issue is that to deploy a VM from a custom image, the source image and the destination disk need to be in the same storage account.
In an enterprise environment, when you have 100 subscriptions and one master golden image, you would have to copy the master image to 100 storage accounts before it can be used.

@shtratos

shtratos commented Apr 9, 2018

Sorry, I hadn't read the link properly.
Yes, it's a known limitation of custom images that the golden image must be in the same storage account as the VM disks.

We currently copy the golden image between accounts and try to keep its size to a minimum.
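
For reference, a minimal sketch of that copy with the Azure CLI (the account, container, and blob names are placeholders, and the SAS token on the source URI is an assumption; adjust to however you authenticate):

# Sketch only: server-side async copy of the golden image VHD into the
# destination subscription's storage account. No data flows through the
# machine running the CLI.
az storage blob copy start \
  --account-name clusterstorage \
  --destination-container system \
  --destination-blob "Microsoft.Compute/Images/images/golden.vhd" \
  --source-uri "https://goldenimages.blob.core.windows.net/system/Microsoft.Compute/Images/images/golden.vhd?<sas-token>"

# Poll until properties.copy.status reports "success".
az storage blob show \
  --account-name clusterstorage \
  --container-name system \
  --name "Microsoft.Compute/Images/images/golden.vhd" \
  --query "properties.copy.status"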

@shtratos

shtratos commented Apr 9, 2018

An alternative would be to use a standard image and install everything via the custom script extension.
But that could be slow and very flaky :( Copying the image between accounts is not that bad by comparison.

A better approach would be to use Docker containers, but that obviously depends on your use case.

@jakubigla

Yeah, copying is not the worst (and 100x better than a standard image with a custom script / Ansible).
And yeah, Docker is a completely different story :)

Also, I'm quite new to Azure. Do you have a single storage account per subscription? Or per resource group? I'm asking how you logically keep things isolated (if there's a need for that).
Let's say I have a dev that has access only to RG1 and another dev that can only access RG2. How would they access the golden image for their new VMs?

@marcinbojko

Hi Jakub. In my case I have one golden image per subscription/storage account. We don't need to keep all VMs at the same level, as Dmitry mentioned, so I periodically create a new image (just to shorten deploys).

@shtratos

shtratos commented Apr 9, 2018

@jakubigla In our case we structure things like this:

  • Dev subscription

    • storage account resource group
      • VM storage account
        • system blob container (where golden image is kept)
        • cluster1 blob container (keeps cluster1 VM disks)
        • cluster2 blob container (keeps cluster2 VM disks)
    • cluster1 resource group
      • cluster1 VMs
    • cluster2 resource group
      • cluster2 VMs
  • Prod subscription

    • storage account resource group
      • VM storage account
        • system blob container (where golden image is kept)
        • cluster3 blob container (keeps cluster3 VM disks)
    • cluster3 resource group
      • cluster3 VMs

In our case one team owns all the clusters, so there's no requirement to have separate storage accounts per team.

@jakubigla

Thanks guys, this was very very helpful.

@metacpp metacpp self-assigned this Apr 10, 2018
@metacpp metacpp removed this from the 1.4.0 milestone Apr 10, 2018
@metacpp
Contributor

metacpp commented May 23, 2018

@dstori is this still an issue for you?

@dstori
Author

dstori commented May 24, 2018

@metacpp to be honest, the project requirements changed (as usual), so we are not using Azure for now. I will return to this subject later.

@JunyiYi JunyiYi assigned JunyiYi and unassigned metacpp Aug 8, 2018
JunyiYi pushed a commit that referenced this issue Aug 20, 2018
tombuildsstuff pushed a commit that referenced this issue Aug 23, 2018
* Add repro test for bug #838

* Set image_uri back to schema

* Add image_uri check in test case
@ghost

ghost commented Mar 30, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost unassigned JunyiYi Mar 30, 2020
@ghost ghost locked and limited conversation to collaborators Mar 30, 2020