Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
- Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request.
- If you are interested in working on this issue or have submitted a pull request, please leave a comment.
- If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.
Terraform Version

Affected Resource(s)
- google_container_node_pool

Terraform Configuration Files
We have a module to define node pools. This is the main thing: https://gist.github.com/mrsimo/bfe4adb409ec378ab248d4b5d2928c57
We include this module two times, with identical settings except the name. We use max_node_count = 10 and min_node_count = 1 in both of them.
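As a rough sketch (the module source path, variable names, and name values here are assumptions; only the module names come from the state output below, and the real module is in the gist above), the two instantiations look something like:

module "gke-production-node-pool-01" {
  source         = "./modules/node-pool"       # hypothetical path
  name           = "production-node-pool-01"   # hypothetical value
  min_node_count = 1
  max_node_count = 10
}

module "gke-production-node-pool-02" {
  source         = "./modules/node-pool"
  name           = "production-node-pool-02"
  min_node_count = 1
  max_node_count = 10
}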
Debug Output
terraform plan is clean:
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
[...]
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
However, autoscaling is actually disabled for one of the node pools.
State shows the proper autoscaling block:
$ terraform state show module.gke-production-node-pool-02.google_container_node_pool.custom | grep -A3 autoscaling
autoscaling {
    max_node_count = 10
    min_node_count = 1
}
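For context, inside the module these values are presumably wired into the node pool's autoscaling block roughly like this (a sketch only: the resource name "custom" matches the state address above, everything else is illustrative):

resource "google_container_node_pool" "custom" {
  name               = "${var.name}"
  cluster            = "${var.cluster}"
  initial_node_count = 1

  # The block that was silently disabled out-of-band:
  autoscaling {
    min_node_count = "${var.min_node_count}"
    max_node_count = "${var.max_node_count}"
  }
}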
Expected Behavior
I don't know how autoscaling got disabled in the first place; we're still trying to figure that out (fortunately we had another pool). Either way, I would expect terraform plan to notice the drift and propose re-enabling autoscaling to match the configuration.
Actual Behavior
terraform plan is clean.
Steps to Reproduce
I could reproduce it by manually disabling autoscaling in another node pool:
1. Have a node pool running and managed by terraform.
2. Manually disable the node pool's autoscaling from the Google Cloud Console (a gcloud equivalent is sketched below).
3. Run terraform plan or terraform apply: they show no changes.
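For step 2, the Cloud Console toggle should be equivalent to something like the following gcloud call (cluster, pool, and zone names are placeholders; double-check the flags against your gcloud version):

$ gcloud container clusters update my-cluster \
    --node-pool my-pool \
    --no-enable-autoscaling \
    --zone us-central1-a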