AKS error when trying to upgrade cluster #3239
Comments
Creating a new cluster still works.
Thanks for opening this issue :) Would it be possible for you to provide the Terraform Configuration that you're using so that we can take a look? Thanks!
@tombuildsstuff - Sure, please see the following.

terraform {
  backend "azurerm" {
    storage_account_name = "crdsterraformstate"
    container_name = "tfstate"
    key = "terraform.tfstate"
    resource_group_name = "terraform"
  }
}
locals {
  resource_group_name = "${var.resource_group_name_prefix}_${var.env}"
  aks_name = "aks-crds-${var.env}"
  nsg_name = "crds_${var.env}-nsg"
  frontdoor_ip_name = "crossroads-${var.env}"
  api_ip_name = "api-${var.env}"
  virtual_network_name = "crds_${var.env}-vnet"
}

# Configure the Azure Provider
provider "azurerm" {
  subscription_id = "${var.arm_subscription_id}"
  client_id = "${var.arm_client_id}"
  client_secret = "${var.arm_client_secret}"
  tenant_id = "${var.arm_tenant_id}"
  version = "~> 1.24"
}

resource "azurerm_resource_group" "crds" {
  name = "${local.resource_group_name}"
  location = "${var.resource_group_location}"
}

resource "azurerm_network_security_group" "crds" {
  name = "${local.nsg_name}"
  location = "${var.resource_group_location}"
  resource_group_name = "${azurerm_resource_group.crds.name}"
}

# NSG Rules
resource "azurerm_network_security_rule" "allow-http" {
  name = "Allow-HTTP"
  priority = 1000
  direction = "Inbound"
  access = "Allow"
  protocol = "Tcp"
  source_port_range = "*"
  destination_port_range = "80"
  source_address_prefix = "*"
  destination_address_prefix = "*"
  resource_group_name = "${azurerm_resource_group.crds.name}"
  network_security_group_name = "${azurerm_network_security_group.crds.name}"
}

resource "azurerm_network_security_rule" "allow-https" {
  name = "Allow-HTTPS"
  priority = 1010
  direction = "Inbound"
  access = "Allow"
  protocol = "Tcp"
  source_port_range = "*"
  destination_port_range = "443"
  source_address_prefix = "*"
  destination_address_prefix = "*"
  resource_group_name = "${azurerm_resource_group.crds.name}"
  network_security_group_name = "${azurerm_network_security_group.crds.name}"
}

# Static IPS
# Front Door
resource "azurerm_public_ip" "frontdoor" {
  name = "${local.frontdoor_ip_name}"
  location = "${var.resource_group_location}"
  resource_group_name = "${azurerm_kubernetes_cluster.aks.node_resource_group}"
  allocation_method = "Static"
}

# API
resource "azurerm_public_ip" "api" {
  name = "${local.api_ip_name}"
  location = "${var.resource_group_location}"
  resource_group_name = "${azurerm_kubernetes_cluster.aks.node_resource_group}"
  allocation_method = "Static"
}

resource "azurerm_virtual_network" "crds" {
  name = "${local.virtual_network_name}"
  location = "${var.resource_group_location}"
  resource_group_name = "${azurerm_resource_group.crds.name}"
  address_space = ["10.0.0.0/18"]
}

resource "azurerm_subnet" "aks_subnet" {
  name = "aks-subnet"
  resource_group_name = "${azurerm_resource_group.crds.name}"
  network_security_group_id = "${azurerm_network_security_group.crds.id}"
  address_prefix = "10.0.8.0/21" # 10.0.8.0-10.0.15.255
  virtual_network_name = "${azurerm_virtual_network.crds.name}"
}

resource "azurerm_subnet" "db_subnet" {
  name = "db-subnet"
  resource_group_name = "${azurerm_resource_group.crds.name}"
  network_security_group_id = "${azurerm_network_security_group.crds.id}"
  address_prefix = "10.0.16.0/24"
  virtual_network_name = "${azurerm_virtual_network.crds.name}"
}

resource "azurerm_subnet" "vm_subnet" {
  name = "vm-subnet"
  resource_group_name = "${azurerm_resource_group.crds.name}"
  network_security_group_id = "${azurerm_network_security_group.crds.id}"
  address_prefix = "10.0.17.0/24"
  virtual_network_name = "${azurerm_virtual_network.crds.name}"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name = "${local.aks_name}"
  location = "${var.resource_group_location}"
  dns_prefix = "${local.aks_name}"
  resource_group_name = "${azurerm_resource_group.crds.name}"

  linux_profile {
    admin_username = "${var.linux_admin_username}"

    ssh_key {
      key_data = "${var.linux_admin_ssh_publickey}"
    }
  }

  kubernetes_version = "1.11.9"

  agent_pool_profile {
    name = "agentpool"
    count = "3"
    vm_size = "Standard_DS3_v2"
    os_type = "Linux"

    # Required for advanced networking
    vnet_subnet_id = "${azurerm_subnet.aks_subnet.id}"
  }

  service_principal {
    client_id = "${var.arm_client_id}"
    client_secret = "${var.arm_client_secret}"
  }

  network_profile {
    network_plugin = "azure"
    dns_service_ip = "10.0.0.10"
    docker_bridge_cidr = "172.17.0.1/16"
    service_cidr = "10.0.0.0/21" # 10.0.0.0-10.0.7.255
  }
}
The error seems to refer to the oms_agent within the addon_profile block, but I can't see it defined? Also, which region are you running in? When I look at the supported versions and their upgrade paths for West Europe, 1.11.5 isn't even there; I'd be interested to see which upgrade paths the portal offers you. The lowest 1.11.x version listed for that region is 1.11.8. You can query the supported versions and upgrade paths for your own region (change the region as required).
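One way to run that query from Terraform itself is the azurerm_kubernetes_service_versions data source shipped in recent releases of the azurerm provider. A minimal sketch, assuming a provider version new enough to include it; the region and version prefix are examples, not values taken from this thread:

data "azurerm_kubernetes_service_versions" "current" {
  # Region to query; change as required.
  location = "East US"

  # Optional filter; drop it to list every supported version.
  version_prefix = "1.11"
}

output "available_versions" {
  value = "${data.azurerm_kubernetes_service_versions.current.versions}"
}

output "latest_version" {
  value = "${data.azurerm_kubernetes_service_versions.current.latest_version}"
}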
This is the eastus region. You are right that an upgrade path for 1.11.5 -> 1.11.9 is not defined. Azure seems to be very aggressive about deprecating k8s versions, as this cluster is only a couple of months old. I know this has worked in the past even though the specific upgrade path was not defined. Also, the portal is offering me an upgrade; I could try it there and see what happens.
Hello.
TF template:
variable "subscription-id" {}

provider "azurerm" { ... }

resource "azurerm_kubernetes_cluster" "aks" {
  agent_pool_profile { ... }
  service_principal { ... }
  lifecycle { ... }
  tags = "${var.tags}"
}
I can confirm that I was able to upgrade 1.11.5 -> 1.11.9 using the Azure Portal. It seems as though I should have been able to do the same using Terraform.
I am having a somewhat similar issue (#2993), and it happens after upgrading via the portal.
Also, Terraform gives the same error when trying to update the k8s service principal secret, for example.
I'm facing exactly the same issue; I can't upgrade my cluster with Terraform since I updated it via the interface.
I don't have the problem anymore! Looks like it was something temporary.
Hi,
I am also experiencing the same issue since I disabled the oms agent addon profile on one cluster. Upgrades are still available through the portal interface, but no upgrade works through Terraform; it fails with the error message from the initially described issue.
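For context, disabling the addon in the 1.x provider schema is done inside the cluster resource. A minimal sketch of the relevant block, to be read alongside the full azurerm_kubernetes_cluster configuration earlier in the thread:

resource "azurerm_kubernetes_cluster" "aks" {
  # ... other arguments as shown in the configuration above ...

  addon_profile {
    oms_agent {
      enabled = false
      # log_analytics_workspace_id is only needed while the agent is enabled;
      # the failure discussed in this thread is that, after disabling the agent,
      # the API rejects updates because an empty workspace ID is not a fully
      # qualified resource ID (see the fix description below).
    }
  }
}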
Closed the wrong issue - reopening
This fixes hashicorp#3239, where a Kubernetes cluster that first had the OMS agent addon profile enabled and then disabled was no longer usable by the Terraform provider. All subsequent update requests would return a bad request response complaining that the `logAnalyticsWorkspaceResourceID` needs to be a fully qualified resource ID.
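For completeness, when the agent is enabled the workspace is usually referenced through an azurerm_log_analytics_workspace resource, whose id attribute already has the fully qualified form the API expects. A sketch only: the workspace resource and its name are illustrative and not part of the reporter's configuration, while the resource group and location references reuse names from the configuration earlier in the thread:

resource "azurerm_log_analytics_workspace" "logs" {
  name = "crds-logs"
  location = "${var.resource_group_location}"
  resource_group_name = "${azurerm_resource_group.crds.name}"
  sku = "PerGB2018"
}

resource "azurerm_kubernetes_cluster" "aks" {
  # ... other arguments as shown earlier in the thread ...

  addon_profile {
    oms_agent {
      enabled = true
      log_analytics_workspace_id = "${azurerm_log_analytics_workspace.logs.id}"
      # Expands to a fully qualified ID of the form:
      # /subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/crds-logs
    }
  }
}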
This has been released in version 1.36.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:

provider "azurerm" {
  version = "~> 1.36.0"
}
# ... other configuration ...
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
Terraform (and AzureRM Provider) Version
Provider version 1.24.
Affected Resource(s)
azurerm_kubernetes_cluster
Description
Trying to upgrade an AKS cluster from 1.11.5 to 1.11.9. Receiving the following error:
containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="LinkedInvalidPropertyId" Message="Property id '' at path 'properties.addonProfiles.omsagent.config.logAnalyticsWorkspaceResourceID' is invalid. Expect fully qualified resource Id that start with '/subscriptions/{subscriptionId}' or '/providers/{resourceProviderNamespace}/'."
Unsure if this happens on new cluster creation; I have only tried the upgrade.
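For reference, the upgrade attempt itself is nothing more than a change to the kubernetes_version argument on the cluster resource shown in the configuration above; a minimal sketch of the change being applied:

resource "azurerm_kubernetes_cluster" "aks" {
  # ... all other arguments unchanged from the configuration above ...

  # Changing this value from "1.11.5" to "1.11.9" is what produces the
  # failing CreateOrUpdate request quoted in the error above.
  kubernetes_version = "1.11.9"
}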