Concurrently peering vnets and adding subnets to them fails #2605
Hi @jomeier, I'm sorry to hear that you are having this problem. Could you possibly attach a working Terraform configuration exhibiting this behaviour so we can investigate? Thanks 🙂 Additionally, is there a reason you cannot just add that dependency?
@katbyte
If the subnets get attached to VNET 2 while the two VNETs are being peered, the error occurs very often.
I have experienced this issue as well and agree/hope that the azurerm provider could account for the timing issues if the Azure API cannot. In my use case I am creating over a dozen subnets, each created by an instantiation of a subnet module (in order to associate an NSG and route table in a defined fashion, and to keep the main code more manageable; roughly 50 fewer lines per subnet). Since the subnets are created by the module, I'm unable to create dependencies between the subnets and the peering resources. In my larger code this has been hit or miss; a race condition may be allowing it to succeed sometimes and fail other times (I'm just guessing). I was working on posting my own issue, but then found this one; hopefully I don't hijack it (too much 🙂).

I am curious @jomeier whether your configuration contains NSG and/or route table associations to those new subnets. If I create a sample configuration that creates subnets and peerings without NSG and route table associations, it succeeds every time. This makes me wonder whether this is related to the issues with the subnet and route table association resources (i.e. #2489).

Below is the example code I was able to come up with, which appears to fail every time (the subnet lifecycle workaround was added per a suggestion found in other subnet/route table association issues). A subsequent apply after the error will successfully create the peerings. As mentioned earlier, this isn't exactly what I am running, since I need to move the subnet creation into a module, so I can't add dependencies. Open to any ideas on how to ensure the peering succeeds every time. Thanks.

provider "azurerm" {
subscription_id = "${var.sub_id}"
tenant_id = "${var.tenant_id}"
client_id = "${var.tf_sp_appid}"
client_secret = "${var.tf_sp_secret}"
version = "1.20"
}
data "azurerm_resource_group" "existingrg" {
name = "${var.rg_name}"
}
data "azurerm_virtual_network" "existingvnet" {
name = "${var.existing_vnet_name}"
resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
}
data "azurerm_network_security_group" "required_nsg" {
name = "${var.nsg_name}"
resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
}
resource "azurerm_virtual_network" "test2" {
name = "peernetwork2"
resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
address_space = ["172.16.0.0/16"]
location = "${var.location}"
}
resource "azurerm_route_table" "routetable" {
name = "${azurerm_virtual_network.test2.name}-test-Routetable"
location = "${data.azurerm_resource_group.existingrg.location}"
resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
}
# ##
resource "azurerm_subnet" "subnet1" {
name = "subnet1"
resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
virtual_network_name = "${azurerm_virtual_network.test2.name}"
address_prefix = "172.16.1.0/24"
service_endpoints = ["Microsoft.Sql","Microsoft.Storage","Microsoft.AzureCosmosDB"]
lifecycle {
ignore_changes = ["route_table_id", "network_security_group_id"]
}
}
resource "azurerm_subnet_network_security_group_association" "nsgassociation1" {
subnet_id = "${azurerm_subnet.subnet1.id}"
network_security_group_id = "${data.azurerm_network_security_group.required_nsg.id}"
}
resource "azurerm_subnet_route_table_association" "routetableassociation1" {
subnet_id = "${azurerm_subnet.subnet1.id}"
route_table_id = "${azurerm_route_table.routetable.id}"
}
# ##
resource "azurerm_subnet" "subnet2" {
name = "subnet2"
resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
virtual_network_name = "${azurerm_virtual_network.test2.name}"
address_prefix = "172.16.2.0/24"
service_endpoints = ["Microsoft.Sql","Microsoft.Storage","Microsoft.AzureCosmosDB"]
lifecycle {
ignore_changes = ["route_table_id", "network_security_group_id"]
}
}
resource "azurerm_subnet_network_security_group_association" "nsgassociation2" {
subnet_id = "${azurerm_subnet.subnet2.id}"
network_security_group_id = "${data.azurerm_network_security_group.required_nsg.id}"
}
resource "azurerm_subnet_route_table_association" "routetableassociation2" {
subnet_id = "${azurerm_subnet.subnet2.id}"
route_table_id = "${azurerm_route_table.routetable.id}"
}
# ##
resource "azurerm_subnet" "subnet3" {
name = "subnet3"
resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
virtual_network_name = "${azurerm_virtual_network.test2.name}"
address_prefix = "172.16.3.0/24"
service_endpoints = ["Microsoft.Sql","Microsoft.Storage","Microsoft.AzureCosmosDB"]
lifecycle {
ignore_changes = ["route_table_id", "network_security_group_id"]
}
}
resource "azurerm_subnet_network_security_group_association" "nsgassociation3" {
subnet_id = "${azurerm_subnet.subnet3.id}"
network_security_group_id = "${data.azurerm_network_security_group.required_nsg.id}"
}
resource "azurerm_subnet_route_table_association" "routetableassociation3" {
subnet_id = "${azurerm_subnet.subnet3.id}"
route_table_id = "${azurerm_route_table.routetable.id}"
}
# ##
resource "azurerm_subnet" "subnet4" {
name = "subnet4"
resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
virtual_network_name = "${azurerm_virtual_network.test2.name}"
address_prefix = "172.16.4.0/24"
service_endpoints = ["Microsoft.Sql","Microsoft.Storage","Microsoft.AzureCosmosDB"]
lifecycle {
ignore_changes = ["route_table_id", "network_security_group_id"]
}
}
resource "azurerm_subnet_network_security_group_association" "nsgassociation4" {
subnet_id = "${azurerm_subnet.subnet4.id}"
network_security_group_id = "${data.azurerm_network_security_group.required_nsg.id}"
}
resource "azurerm_subnet_route_table_association" "routetableassociation4" {
subnet_id = "${azurerm_subnet.subnet4.id}"
route_table_id = "${azurerm_route_table.routetable.id}"
}
# ##
resource "azurerm_virtual_network_peering" "test1" {
name = "peer1to2"
resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
virtual_network_name = "${data.azurerm_virtual_network.existingvnet.name}"
remote_virtual_network_id = "${azurerm_virtual_network.test2.id}"
}
resource "azurerm_virtual_network_peering" "test2" {
name = "peer2to1"
resource_group_name = "${data.azurerm_resource_group.existingrg.name}"
virtual_network_name = "${azurerm_virtual_network.test2.name}"
remote_virtual_network_id = "${data.azurerm_virtual_network.existingvnet.id}"
} |
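One possible workaround for the configuration above, consistent with what later commenters report, is to make the peering resources wait for all of the NSG and route table association resources, so the new VNet is no longer being updated when the peerings are created. The following is only a sketch against the example configuration above (not the module-based setup the commenter actually runs), using Terraform 0.11's depends_on list-of-strings syntax:

resource "azurerm_virtual_network_peering" "test2" {
  name                      = "peer2to1"
  resource_group_name       = "${data.azurerm_resource_group.existingrg.name}"
  virtual_network_name      = "${azurerm_virtual_network.test2.name}"
  remote_virtual_network_id = "${data.azurerm_virtual_network.existingvnet.id}"

  # Wait until every subnet has its NSG and route table attached before
  # creating the peering, so peernetwork2 is not still in an "Updating" state.
  depends_on = [
    "azurerm_subnet_network_security_group_association.nsgassociation1",
    "azurerm_subnet_route_table_association.routetableassociation1",
    "azurerm_subnet_network_security_group_association.nsgassociation2",
    "azurerm_subnet_route_table_association.routetableassociation2",
    "azurerm_subnet_network_security_group_association.nsgassociation3",
    "azurerm_subnet_route_table_association.routetableassociation3",
    "azurerm_subnet_network_security_group_association.nsgassociation4",
    "azurerm_subnet_route_table_association.routetableassociation4",
  ]
}

The same depends_on list could be added to the peer1to2 resource as well; whether both directions need it depends on whether Azure also puts the remote VNet into an updating state during peering creation.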
We are peering across subscriptions and get this consistently:
I am also facing the same issue while peering between subscriptions. My setup creates NSGs, route tables, subnets, VNets, VMs, and the peering. The peering fails with "Resource is in Updating state and the last operation that updated/is updating the resource". Once it has failed, I run terraform apply again and it succeeds, but I have to run the same script two times, which is very frustrating. Can anyone confirm this issue, please?
I have also experienced this issue, and I seem to have worked around it by adding a dependency within each VNet peering resource on all the subnets that are being created within the VNet. Once all the subnets are created, the VNet peerings then provision without issue.
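For reference, the dependency described above might look like the following in Terraform 0.11 syntax. This is a sketch that reuses the resource names from the example configuration earlier in this thread, not the commenter's actual code:

resource "azurerm_virtual_network_peering" "test1" {
  name                      = "peer1to2"
  resource_group_name       = "${data.azurerm_resource_group.existingrg.name}"
  virtual_network_name      = "${data.azurerm_virtual_network.existingvnet.name}"
  remote_virtual_network_id = "${azurerm_virtual_network.test2.id}"

  # Explicitly wait for every subnet in the newly created VNet before peering.
  depends_on = [
    "azurerm_subnet.subnet1",
    "azurerm_subnet.subnet2",
    "azurerm_subnet.subnet3",
    "azurerm_subnet.subnet4",
  ]
}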
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
Hi,
for the last few hours I have had the problem that I tried to peer two VNets, where the first one already exists in a different subscription and the second one is newly created with three subnets.
The problem is that sometimes the peering doesn't work and Terraform stops with an error message like
"resource is still in updating state. Last command was add subnet to vnet".
If I add a dependency so that all subnets must be ready before the peering can occur, it always works.
It seems as if concurrent updates to VNets are not working; the Terraform documentation states something about that.
In my opinion the azurerm provider should take care of such effects.
Terraform v0.11.11
Terraform-provider-azurerm: v1.20.0_x4
Affected resources:
Thanks guys and best regards,
Josef