hd_insight_kafka cluster always gets recreated #7067
Comments
I think it's caused by gateway settings:
In the plan I can see:
It means that the gateway wasn't really disabled by default, and apply tries to recreate the cluster. The current azurerm implementation doesn't support changing gateway settings. However, it should be possible with the most recent Go SDK:
https://godoc.org/github.com/Azure/azure-sdk-for-go/services/preview/hdinsight/mgmt/2018-06-01-preview/hdinsight#ClustersClient.UpdateGatewaySettings
I was able to reproduce this by replacing true with false in the following test:
https://github.com/terraform-providers/terraform-provider-azurerm/blob/194c86e6ecf6eb0a5aeede9ca7cf4be71c1cbb8b/azurerm/internal/services/hdinsight/tests/hdinsight_hadoop_cluster_resource_test.go#L454
I'll try to figure out why the gateway isn't configured properly.
Attempting to disable the gateway via the API fails with a "Linux clusters do not support revoking HTTP credentials." error. It's possible that the gateway must always be enabled.
Update to avoid issues like this: #7111
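For context, a minimal sketch of the gateway block being discussed, assuming a resource laid out like the plan below; the resource label and credentials are placeholders, and flipping enabled is what reproduces the forced recreation:

resource "azurerm_hdinsight_kafka_cluster" "example" {
  # ... name, location, component_version, roles, storage_account omitted ...

  gateway {
    # The API rejects disabling the gateway on Linux clusters
    # ("Linux clusters do not support revoking HTTP credentials."),
    # so in practice enabled always has to stay true.
    enabled  = true
    username = "exampleGatewayUser" # placeholder
    password = "ExamplePassw0rd!"   # placeholder
  }
}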
Even when I set it to create the gateway, the result is the same. Here is the initial plan that I then apply:
# azurerm_hdinsight_kafka_cluster.this will be created
+ resource "azurerm_hdinsight_kafka_cluster" "this" {
+ cluster_version = "4.0.1000.0"
+ https_endpoint = (known after apply)
+ id = (known after apply)
+ location = "northeurope"
+ name = "my-test-kafka-cluster"
+ resource_group_name = "di-nonprod"
+ ssh_endpoint = (known after apply)
+ tags = {
+ "ClusterName" = "di"
+ "Environment" = "nonprod"
+ "created-by" = "terraform"
+ "owner" = "<snip>"
}
+ tier = "Standard"
+ component_version {
+ kafka = "1.0"
}
+ gateway {
+ enabled = true
+ password = (sensitive value)
+ username = (known after apply)
}
+ roles {
+ head_node {
+ password = (sensitive value)
+ subnet_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>/subnets/aks-di"
+ username = (known after apply)
+ virtual_network_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>"
+ vm_size = "Standard_D3_V2"
}
+ worker_node {
+ min_instance_count = 3
+ number_of_disks_per_node = 1
+ password = (sensitive value)
+ subnet_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>/subnets/aks-di"
+ target_instance_count = 3
+ username = (known after apply)
+ virtual_network_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>"
+ vm_size = "Standard_D3_V2"
}
+ zookeeper_node {
+ password = (sensitive value)
+ subnet_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>/subnets/aks-di"
+ username = (known after apply)
+ virtual_network_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>"
+ vm_size = "Standard_D3_V2"
}
}
+ storage_account {
+ is_default = true
+ storage_account_key = (sensitive value)
+ storage_container_id = (known after apply)
}
}
# azurerm_storage_account.tnt_kafka will be created
+ resource "azurerm_storage_account" "tnt_kafka" {
+ access_tier = (known after apply)
+ account_kind = "StorageV2"
+ account_replication_type = "GRS"
+ account_tier = "Standard"
+ enable_https_traffic_only = true
+ id = (known after apply)
+ is_hns_enabled = false
+ location = "northeurope"
+ name = "std9e625770f860d2663b1b8"
+ primary_access_key = (sensitive value)
+ primary_blob_connection_string = (sensitive value)
+ primary_blob_endpoint = (known after apply)
+ primary_blob_host = (known after apply)
+ primary_connection_string = (sensitive value)
+ primary_dfs_endpoint = (known after apply)
+ primary_dfs_host = (known after apply)
+ primary_file_endpoint = (known after apply)
+ primary_file_host = (known after apply)
+ primary_location = (known after apply)
+ primary_queue_endpoint = (known after apply)
+ primary_queue_host = (known after apply)
+ primary_table_endpoint = (known after apply)
+ primary_table_host = (known after apply)
+ primary_web_endpoint = (known after apply)
+ primary_web_host = (known after apply)
+ resource_group_name = "di-nonprod"
+ secondary_access_key = (sensitive value)
+ secondary_blob_connection_string = (sensitive value)
+ secondary_blob_endpoint = (known after apply)
+ secondary_blob_host = (known after apply)
+ secondary_connection_string = (sensitive value)
+ secondary_dfs_endpoint = (known after apply)
+ secondary_dfs_host = (known after apply)
+ secondary_file_endpoint = (known after apply)
+ secondary_file_host = (known after apply)
+ secondary_location = (known after apply)
+ secondary_queue_endpoint = (known after apply)
+ secondary_queue_host = (known after apply)
+ secondary_table_endpoint = (known after apply)
+ secondary_table_host = (known after apply)
+ secondary_web_endpoint = (known after apply)
+ secondary_web_host = (known after apply)
+ tags = {
+ "ClusterName" = "di"
+ "Environment" = "nonprod"
+ "created-by" = "terraform"
+ "owner" = "<snip>"
}
+ blob_properties {
+ cors_rule {
+ allowed_headers = (known after apply)
+ allowed_methods = (known after apply)
+ allowed_origins = (known after apply)
+ exposed_headers = (known after apply)
+ max_age_in_seconds = (known after apply)
}
+ delete_retention_policy {
+ days = (known after apply)
}
}
+ identity {
+ principal_id = (known after apply)
+ tenant_id = (known after apply)
+ type = (known after apply)
}
+ network_rules {
+ bypass = (known after apply)
+ default_action = (known after apply)
+ ip_rules = (known after apply)
+ virtual_network_subnet_ids = (known after apply)
}
+ queue_properties {
+ cors_rule {
+ allowed_headers = (known after apply)
+ allowed_methods = (known after apply)
+ allowed_origins = (known after apply)
+ exposed_headers = (known after apply)
+ max_age_in_seconds = (known after apply)
}
+ hour_metrics {
+ enabled = (known after apply)
+ include_apis = (known after apply)
+ retention_policy_days = (known after apply)
+ version = (known after apply)
}
+ logging {
+ delete = (known after apply)
+ read = (known after apply)
+ retention_policy_days = (known after apply)
+ version = (known after apply)
+ write = (known after apply)
}
+ minute_metrics {
+ enabled = (known after apply)
+ include_apis = (known after apply)
+ retention_policy_days = (known after apply)
+ version = (known after apply)
}
}
}
# azurerm_storage_container.this will be created
+ resource "azurerm_storage_container" "this" {
+ container_access_type = "private"
+ has_immutability_policy = (known after apply)
+ has_legal_hold = (known after apply)
+ id = (known after apply)
+ metadata = (known after apply)
+ name = "test-kafka-storage"
+ resource_manager_id = (known after apply)
+ storage_account_name = "std9e625770f860d2663b1b8"
}
# random_password.gateway_password will be created
+ resource "random_password" "gateway_password" {
+ id = (known after apply)
+ length = 18
+ lower = true
+ min_lower = 1
+ min_numeric = 1
+ min_special = 1
+ min_upper = 1
+ number = true
+ result = (sensitive value)
+ special = false
+ upper = true
}
Then, without changing anything, the plan output is:
Terraform will perform the following actions:
# azurerm_hdinsight_kafka_cluster.this must be replaced
-/+ resource "azurerm_hdinsight_kafka_cluster" "this" {
cluster_version = "4.0.1000.0"
~ https_endpoint = "my-test-kafka-cluster.azurehdinsight.net" -> (known after apply)
~ id = "/subscriptions/<snip>/resourceGroups/di-nonprod/providers/Microsoft.HDInsight/clusters/my-test-kafka-cluster" -> (known after apply)
location = "northeurope"
name = "my-test-kafka-cluster"
resource_group_name = "di-nonprod"
~ ssh_endpoint = "my-test-kafka-cluster-ssh.azurehdinsight.net" -> (known after apply)
tags = {
"ClusterName" = "di"
"Environment" = "nonprod"
"created-by" = "terraform"
"owner" = "<snip>"
}
~ tier = "standard" -> "Standard"
component_version {
kafka = "1.0"
}
~ gateway {
enabled = true
~ password = (sensitive value)
username = "l0drn4oqdvp7"
}
~ roles {
~ head_node {
password = (sensitive value)
- ssh_keys = [] -> null
subnet_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>/subnets/aks-di"
username = "95q2cbpruto3"
virtual_network_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>"
vm_size = "Standard_D3_V2"
}
~ worker_node {
~ min_instance_count = 0 -> 3 # forces replacement
number_of_disks_per_node = 1
password = (sensitive value)
- ssh_keys = [] -> null
subnet_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>/subnets/aks-di"
target_instance_count = 3
username = "95q2cbpruto3"
virtual_network_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>"
vm_size = "Standard_D3_V2"
}
~ zookeeper_node {
password = (sensitive value)
- ssh_keys = [] -> null
subnet_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>/subnets/aks-di"
username = "lziisv28avek"
virtual_network_id = "/subscriptions/<snip>/resourceGroups/<snip>/providers/Microsoft.Network/virtualNetworks/<snip>"
vm_size = "Standard_D3_V2"
}
}
storage_account {
is_default = true
storage_account_key = (sensitive value)
storage_container_id = "https://std9e625770f860d2663b1b8.blob.core.windows.net/test-kafka-storage"
}
}
Note that the `min_instance_count` diff (`0 -> 3`) is what forces the replacement.
Ok, so it seems that when I ignore changes to `min_instance_count`, the cluster is not recreated.
@yhekma You're right, the `min_instance_count` is the problem here.
@magodo This is indeed a valid workaround, but it means I cannot extend (scale) the cluster, which is of course not ideal.
@yhekma Can you just modify the `target_instance_count` instead to scale the cluster?
@magodo sorry, you are correct, I can.
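For anyone hitting this before the fix is released, a rough sketch of the workaround discussed above; the ignore_changes path is an assumption based on the plan output and may need adjusting to the actual configuration:

resource "azurerm_hdinsight_kafka_cluster" "this" {
  # ... rest of the configuration as shown in the plan above ...

  roles {
    # ...
    worker_node {
      # Scaling the cluster is still possible by changing
      # target_instance_count; only the spurious min_instance_count
      # diff is ignored via the lifecycle block below.
      target_instance_count = 3
      # ...
    }
  }

  lifecycle {
    # Ignore the min_instance_count drift (0 in state vs. 3 in config)
    # so the plan no longer forces a replacement.
    ignore_changes = [
      roles[0].worker_node[0].min_instance_count,
    ]
  }
}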
This addresses part of #7067. `min_instance_count` has no effect during resource creation for the HDInsight cluster resource set. Besides, it actually causes plan skew if the user specifies it with a non-zero value.
@yhekma The two PRs are merged now and should be available in the next release. Hence, I'm going to close this issue for now.
This has been released in version 2.17.0 of the provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading. As an example:
provider "azurerm" {
  version = "~> 2.17.0"
}
# ... other configuration ...
Terraform (and AzureRM Provider) Version
Terraform version: v0.12.24 (via TFE)
AzureRM provider version: 2.11.0
Affected Resource(s)
azurerm_hdinsight_kafka_cluster
Terraform Configuration Files
Initial output during creation (only relevant bits)
Output of replan (no code changed)
Note that setting `ssh_keys = null` has no effect.
Expected Behavior
Nothing would change
Actual Behavior
Cluster will be rerolled
Steps to Reproduce
1. terraform apply
2. terraform plan
Important Factoids
This seems to be the same behaviour witnessed in #4485
However, that issue is closed, and I am not convinced this is due to the password field as suggested there. Furthermore, it is suggested the issue would be resolved as of provider version 1.35, while I am running a version newer than that.