Changing a node_pool's initial_node_count forces recreation of the whole container_cluster #6889
@mltsy where do you see
Oh - that's strange. That is the same thing I did, but that is not what I saw. The output I got showed:
I did this about two weeks ago and just hadn't gotten around to submitting the ticket (because I wanted to make sure the provider in the config was relatively up to date, which it was), so the GUI does look slightly different now from what I recall two weeks ago. It's possible that has changed and it has been fixed... I'll try to verify. Otherwise, if you happened to change the "location" of the cluster from
@mltsy That is the same place where I updated the node_pool size. But I noticed the line below in your plan, which indicates a new id is needed; that triggers the recreation plan. Could you share your debug log? Thanks
Well, practically everything in the plan is new because of the re-creation, but if you look at the next line you can see it's saying the

I verified this by using
Here's a simpler way to reproduce the root cause, and figure out why you're not seeing the same thing as me:
Let me know if you still need a debug log - I think this should help you reproduce the issue (without needing a debug log?)
@mltsy I see. In my test, I did not have the line below in
Yes! In fact, that is the work-around we're currently using, and it seems to be working alright 😃 However, I think there are cases where one might want to actually set an
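The work-around itself isn't quoted in this copy of the thread, but ignoring post-creation drift on a single field is typically done with Terraform's `ignore_changes` lifecycle argument. A hypothetical sketch (resource names, values, and the exact attribute address are illustrative, not taken from the thread):

```hcl
# Hypothetical sketch: ignore drift on initial_node_count after creation,
# so manual resizes in the Cloud Console don't force cluster re-creation.
resource "google_container_cluster" "example" {
  name     = "example-cluster"
  location = "us-central1"

  node_pool {
    name               = "example-pool"
    initial_node_count = 3
  }

  lifecycle {
    # initial_node_count only matters at creation time; ignore later diffs.
    ignore_changes = [node_pool[0].initial_node_count]
  }
}
```

Indexed attribute references inside `ignore_changes` like this require Terraform 0.12.20 or later; with older versions the whole `node_pool` block would have to be ignored instead.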
@mltsy I am glad the workaround works for you. You have made good use cases. Please file an enhancement request to track those requirements. I am closing this issue.
I'll try to open an enhancement request tomorrow. For now, I think it would make sense to at least document and warn users about this odd behavior (#6896)
This documents a known issue outlined in #6889
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
Community Note

If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version
Terraform 0.12.24
Google provider version 3.22.0
Affected Resource(s)
Terraform Configuration Files
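The reporter's configuration block did not survive this copy of the issue. A minimal hypothetical configuration exhibiting the affected pattern (all names and values are illustrative):

```hcl
# Illustrative sketch only: a cluster with an inline node_pool that pins
# initial_node_count. Resizing the pool outside Terraform later changes
# this value on the API side.
resource "google_container_cluster" "primary" {
  name     = "my-cluster"     # illustrative
  location = "us-central1-a"  # illustrative

  node_pool {
    name               = "default-pool"
    initial_node_count = 3
  }
}
```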
This issue should occur for any node_pool with initial_node_count set to a particular value.

Debug Output
I can't gather this without launching a new nodepool to recreate the problem. Let me know if it's really necessary.
Expected Behavior
When I manually change the size of a node_pool in Google's Cloud Console (for whatever reason), it changes the initial_node_count setting for that node_pool to match the new size. Running Terraform (assuming the config is unchanged) after this node_pool update should have no effect; it should not require re-creation of the entire cluster.

Actual Behavior
Running Terraform after this node_pool update requires re-creation of the entire cluster, because it sees that initial_node_count has changed.

Steps to Reproduce
1. terraform apply
2. Manually change the node_pool size in Google's Cloud Console (this may require some finagling of Google permissions, etc.)
3. terraform apply

Important Factoids
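The manual resize between the two applies can also be done from the command line with gcloud rather than the Cloud Console (cluster name, pool name, and zone here are illustrative):

```
# Resize the node pool outside of Terraform; this is what alters
# initial_node_count on the API side and triggers the spurious diff.
gcloud container clusters resize my-cluster \
  --node-pool default-pool \
  --num-nodes 4 \
  --zone us-central1-a
```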
The most unexpected part here is that changing the current nodepool size from the Cloud Console actually alters the initial_node_count setting - that doesn't make any sense. Maybe we actually need to file a bug report in the GKE API project? But I don't think it would hurt anyone to just have the provider ignore initial_node_count once the node_pool exists, right? initial_node_count doesn't do anything after that point anyway...

References
The provider docs do point out that "Changing this will force recreation of the resource." - however, they do not point out that resizing your nodepool manually will have the same effect (nor should they, I don't think). If we change the behavior - we may need to update that bit of the documentation to indicate that changing initial_node_count will not make any updates to an existing nodepool.