Concurrently peering vnets and adding subnets to them fails #2605

Closed
jomeier opened this issue Jan 5, 2019 · 7 comments · Fixed by #3392

Comments

@jomeier

jomeier commented Jan 5, 2019

Hi,

For the last few hours I have had the problem that I am trying to peer two vnets, where the first one already exists in a different subscription and the second one is newly created with three subnets.

The problem is that sometimes the peering doesn't work and Terraform stops with an error message along the lines of
"resource is still in updating state. Last command was add subnet to vnet".

* azurerm_virtual_network_peering.management-hub-to-openshift: Error Creating/Updating Virtual Network Peering "management-hub-to-os13" (Network "management-vnet" / Resource Group "management-hub-1eu1v1"): network.VirtualNetworkPeeringsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="ReferencedResourceNotProvisioned" Message="Cannot proceed with operation because resource /subscriptions/DELETED/resourceGroups/openshift-terra-13/providers/Microsoft.Network/virtualNetworks/openshift-virtual-network used by resource /subscriptions/DELETED/resourceGroups/management-hub-1eu1v1/providers/Microsoft.Network/virtualNetworks/management-vnet/virtualNetworkPeerings/management-hub-to-os13 is not in Succeeded state. Resource is in Updating state and the last operation that updated/is updating the resource is PutSubnetOperation." Details=[]

If I add a dependency to the peering so that all subnets must be ready before the peering is created, it always works.
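
Roughly, the workaround looks like the sketch below (the resource and variable names are placeholders, not my actual configuration):

resource "azurerm_virtual_network_peering" "hub_to_spoke" {
  name                      = "hub-to-spoke"
  resource_group_name       = "${var.hub_resource_group}"
  virtual_network_name      = "${var.hub_vnet_name}"
  remote_virtual_network_id = "${azurerm_virtual_network.spoke.id}"

  # Wait until all subnets of the spoke VNET exist before creating the peering,
  # so the VNET is no longer in the "Updating" state when the peering call is made.
  depends_on = [
    "azurerm_subnet.subnet1",
    "azurerm_subnet.subnet2",
    "azurerm_subnet.subnet3",
  ]
}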

It seems as if concurrent updates to vnets do not work; the Terraform documentation mentions something about this.

In my opinion, the azurerm provider should handle such effects itself.

Terraform v0.11.11
Terraform-provider-azurerm: v1.20.0_x4

Affected resources:

  • azurerm_virtual_network
  • azurerm_virtual_network_peering
  • azurerm_subnet

Thanks guys and best regards,

Josef

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@katbyte
Collaborator

katbyte commented Jan 5, 2019

Hi @jomeier,

I'm sorry to hear that you are having this problem. Could you possibly attach a working terraform configuration exhibiting this behaviour so we can investigate? Thanks 🙂

Additionally, is there a reason you cannot just add that dependency?

@jomeier
Author

jomeier commented Jan 6, 2019

@katbyte
Thanks for your answer. I will prepare a config. But it's rather simple. I have to modify both VNETs (they reside in different subscriptions).

  • VNET 1: contains my bastion host VM (referenced in Terraform through a data source)
  • VNET 2: newly created with three subnets; a spoke VM is created in it

If the subnets are attached to VNET 2 while the two VNETs are being peered, the error occurs very often.

@ewierschke

I have experienced this issue as well and agree/hope that the azurerm provider could account for the timing(?) issues if the Azure API cannot.

In my use case I am creating over a dozen subnets, each created by an instantiation of a subnet module (in order to associate an NSG and a route table in a defined fashion, and to keep the main code more manageable; 50 fewer lines per subnet). Since the subnets are created by the module, I'm unable to create dependencies between the subnets and the peering resources.
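
(The only indirect bridge I can think of is something like the sketch below, where the module names, the subnet_id output, and the variables are all hypothetical; I haven't verified that this avoids the error.)

resource "null_resource" "subnets_ready" {
  # Interpolating the (hypothetical) subnet_id outputs of the subnet modules
  # creates an implicit dependency on the subnets created inside them.
  triggers {
    subnet1 = "${module.subnet1.subnet_id}"
    subnet2 = "${module.subnet2.subnet_id}"
  }
}

resource "azurerm_virtual_network_peering" "peer1to2" {
  name                      = "peer1to2"
  resource_group_name       = "${var.rg_name}"
  virtual_network_name      = "${var.existing_vnet_name}"
  remote_virtual_network_id = "${azurerm_virtual_network.test2.id}"

  # depends_on cannot reference a module in Terraform 0.11, but it can reference
  # the null_resource above, which only exists once the module subnets do.
  depends_on = ["null_resource.subnets_ready"]
}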

In my experience this has been hit or miss in my larger codebase; potentially a race condition allows it to succeed sometimes and fail other times (I'm just guessing).

I was working on posting my own issue but then found this one; hopefully I'm not hijacking it (too much 🙂).

I am curious, @jomeier, whether your configuration contains NSG and/or Route Table associations for those new subnets. If I create a sample configuration with subnets and peerings but without NSG and Route Table associations, it succeeds every time. This makes me wonder whether this is related to the issues with the subnet and route table association resources (i.e. #2489).

Below is the example code I was able to come up with, which appears to fail every time (the subnet lifecycle workaround was added per a suggestion found in other subnet/route table association issues). A subsequent apply after the error successfully creates the peerings. As mentioned earlier, this isn't exactly what I am running, since I need to move the subnet creation into a module and therefore can't add dependencies.

Open to any ideas on how to ensure the peering succeeds every time. Thanks.

provider "azurerm" {
  subscription_id = "${var.sub_id}"
  tenant_id       = "${var.tenant_id}"
  client_id       = "${var.tf_sp_appid}"
  client_secret   = "${var.tf_sp_secret}"
  version         = "1.20"
}

data "azurerm_resource_group" "existingrg" {
  name     = "${var.rg_name}"
}

data "azurerm_virtual_network" "existingvnet" {
  name = "${var.existing_vnet_name}"
  resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
}

data "azurerm_network_security_group" "required_nsg" {
  name     = "${var.nsg_name}"
  resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
}

resource "azurerm_virtual_network" "test2" {
  name                = "peernetwork2"
  resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
  address_space       = ["172.16.0.0/16"]
  location            = "${var.location}"
}

resource "azurerm_route_table" "routetable" {
  name     = "${azurerm_virtual_network.test2.name}-test-Routetable"
  location     = "${data.azurerm_resource_group.existingrg.location}"
  resource_group_name     = "${data.azurerm_resource_group.existingrg.name}"
}
# ##
resource "azurerm_subnet" "subnet1" {
  name                 = "subnet1"
  resource_group_name  = "${data.azurerm_resource_group.existingrg.name}"
  virtual_network_name = "${azurerm_virtual_network.test2.name}"
  address_prefix       = "172.16.1.0/24"
  service_endpoints    = ["Microsoft.Sql","Microsoft.Storage","Microsoft.AzureCosmosDB"]
  lifecycle {
    ignore_changes = ["route_table_id", "network_security_group_id"]
  }
}

resource "azurerm_subnet_network_security_group_association" "nsgassociation1" {
  subnet_id                 = "${azurerm_subnet.subnet1.id}"
  network_security_group_id = "${data.azurerm_network_security_group.required_nsg.id}"
}

resource "azurerm_subnet_route_table_association" "routetableassociation1" {
  subnet_id                 = "${azurerm_subnet.subnet1.id}"
  route_table_id = "${azurerm_route_table.routetable.id}"
}
# ##
resource "azurerm_subnet" "subnet2" {
  name                 = "subnet2"
  resource_group_name  = "${data.azurerm_resource_group.existingrg.name}"
  virtual_network_name = "${azurerm_virtual_network.test2.name}"
  address_prefix       = "172.16.2.0/24"
  service_endpoints    = ["Microsoft.Sql","Microsoft.Storage","Microsoft.AzureCosmosDB"]
  lifecycle {
    ignore_changes = ["route_table_id", "network_security_group_id"]
  }
}

resource "azurerm_subnet_network_security_group_association" "nsgassociation2" {
  subnet_id                 = "${azurerm_subnet.subnet2.id}"
  network_security_group_id = "${data.azurerm_network_security_group.required_nsg.id}"
}

resource "azurerm_subnet_route_table_association" "routetableassociation2" {
  subnet_id                 = "${azurerm_subnet.subnet2.id}"
  route_table_id = "${azurerm_route_table.routetable.id}"
}
# ##
resource "azurerm_subnet" "subnet3" {
  name                 = "subnet3"
  resource_group_name  = "${data.azurerm_resource_group.existingrg.name}"
  virtual_network_name = "${azurerm_virtual_network.test2.name}"
  address_prefix       = "172.16.3.0/24"
  service_endpoints    = ["Microsoft.Sql","Microsoft.Storage","Microsoft.AzureCosmosDB"]
  lifecycle {
    ignore_changes = ["route_table_id", "network_security_group_id"]
  }
}

resource "azurerm_subnet_network_security_group_association" "nsgassociation3" {
  subnet_id                 = "${azurerm_subnet.subnet3.id}"
  network_security_group_id = "${data.azurerm_network_security_group.required_nsg.id}"
}

resource "azurerm_subnet_route_table_association" "routetableassociation3" {
  subnet_id                 = "${azurerm_subnet.subnet3.id}"
  route_table_id = "${azurerm_route_table.routetable.id}"
}
# ##
resource "azurerm_subnet" "subnet4" {
  name                 = "subnet4"
  resource_group_name  = "${data.azurerm_resource_group.existingrg.name}"
  virtual_network_name = "${azurerm_virtual_network.test2.name}"
  address_prefix       = "172.16.4.0/24"
  service_endpoints    = ["Microsoft.Sql","Microsoft.Storage","Microsoft.AzureCosmosDB"]
  lifecycle {
    ignore_changes = ["route_table_id", "network_security_group_id"]
  }
}

resource "azurerm_subnet_network_security_group_association" "nsgassociation4" {
  subnet_id                 = "${azurerm_subnet.subnet4.id}"
  network_security_group_id = "${data.azurerm_network_security_group.required_nsg.id}"
}

resource "azurerm_subnet_route_table_association" "routetableassociation4" {
  subnet_id                 = "${azurerm_subnet.subnet4.id}"
  route_table_id = "${azurerm_route_table.routetable.id}"
}
# ##
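# NOTE: the two peering resources below are the ones that intermittently fail with
# "ReferencedResourceNotProvisioned" while the subnets above are still updating peernetwork2.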
resource "azurerm_virtual_network_peering" "test1" {
  name                      = "peer1to2"
  resource_group_name       = "${data.azurerm_resource_group.existingrg.name}"
  virtual_network_name      = "${data.azurerm_virtual_network.existingvnet.name}"
  remote_virtual_network_id = "${azurerm_virtual_network.test2.id}"
}

resource "azurerm_virtual_network_peering" "test2" {
  name                      = "peer2to1"
  resource_group_name       = "${data.azurerm_resource_group.existingrg.name}"
  virtual_network_name      = "${azurerm_virtual_network.test2.name}"
  remote_virtual_network_id = "${data.azurerm_virtual_network.existingvnet.id}"
}

@erikanderson

We are peering across subscriptions and get this consistently:

* azurerm_virtual_network_peering.shared-to-site: Error Creating/Updating Virtual Network Peering "shared-to-SUPP10XEC" (Network "DomainServices" / Resource Group "DomainServices"): network.VirtualNetworkPeeringsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="ReferencedResourceNotProvisioned" Message="Cannot proceed with operation because resource /subscriptions/4020a4c9-2597-467d-ba40-ec5a3f0f45c2/resourceGroups/SUPP10XEC_NET_RG/providers/Microsoft.Network/virtualNetworks/SUPP10XEC_VNET used by resource /subscriptions/7211212c-c0e7-42cb-8c9a-cc2d377d5159/resourceGroups/DomainServices/providers/Microsoft.Network/virtualNetworks/DomainServices/virtualNetworkPeerings/shared-to-SUPP10XEC is not in Succeeded state. Resource is in Updating state and the last operation that updated/is updating the resource is PutSubnetOperation." Details=[]

@iamthecloudguy

iamthecloudguy commented Mar 11, 2019

I am also facing the same issue while peering between subscriptions. My setup: creating NSGs + route tables + subnets + vnets + VMs + peering. The peering fails with --->> "Resource is in Updating state and the last operation that updated/is updating the resource is PutSubnetOperation." Details=[]

Once it has failed, I run terraform apply again and it succeeds, but I have to run the same script two times. Very frustrating.

Can anyone confirm this issue, please?

@cloudpea

cloudpea commented May 1, 2019

I have also experienced this issue, and I seem to have worked around it by creating a dependency within each vnet peering resource on all of the subnets that are being created within the vNet.

Once all the subnets are created, the vnet peerings then seem to provision without issue.
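
Using the example configuration posted above, the workaround looks roughly like this (sketch only, not my exact code):

resource "azurerm_virtual_network_peering" "test1" {
  name                      = "peer1to2"
  resource_group_name       = "${data.azurerm_resource_group.existingrg.name}"
  virtual_network_name      = "${data.azurerm_virtual_network.existingvnet.name}"
  remote_virtual_network_id = "${azurerm_virtual_network.test2.id}"

  # Only create the peering once every subnet in peernetwork2 has finished provisioning.
  depends_on = [
    "azurerm_subnet.subnet1",
    "azurerm_subnet.subnet2",
    "azurerm_subnet.subnet3",
    "azurerm_subnet.subnet4",
  ]
}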

@ghost

ghost commented Jun 7, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
