google_container_cluster: updating nodepool configuration recreates cluster #1712
Comments
I'm not sure how this got marked as an enhancement; this is clearly a bug.
Chiming in to add more details. It seems any configuration change related to the node pool's `node_config` forces the whole cluster to be recreated. If I then add "xyzzy" as one of the tags, and therefore change the `node_config`, Terraform plans to destroy and recreate the entire cluster. Now, some of the fields in `node_config` can be changed through the Google console without recreating anything, so this shouldn't be necessary. The only workaround I can think of right now is to not define node pools inside the `google_container_cluster` resource.
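To make the failure mode concrete, here is a minimal sketch of the configuration style under discussion, with hypothetical names and era-appropriate arguments: a pool defined inline in the cluster resource, where editing `tags` makes Terraform plan a full cluster replacement.

```hcl
resource "google_container_cluster" "example" {
  name = "example-cluster"
  zone = "us-central1-a"

  node_pool {
    name       = "example-pool"
    node_count = 3

    node_config {
      machine_type = "n1-standard-1"

      # Adding "xyzzy" here makes Terraform plan to destroy and
      # recreate the *cluster*, not just this node pool.
      tags = ["foo", "bar"]
    }
  }
}
```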
Hey all, sorry for the problems here, and for taking so long to respond. There are a few different things going on, which I'll try to explain here. So, what can you do right now? One option is to use the separate `google_container_node_pool` resource instead of defining node pools inline in `google_container_cluster` (see the sketch below).
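A minimal sketch of that workaround, with hypothetical names (the `remove_default_node_pool` argument is an assumption and may not exist in every provider version of this era): manage the cluster and its node pools as separate resources, so node pool changes only touch the `google_container_node_pool` resource.

```hcl
resource "google_container_cluster" "example" {
  name = "example-cluster"
  zone = "us-central1-a"

  # Manage node pools separately; drop the default pool the API
  # creates alongside the cluster.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "example" {
  name       = "example-pool"
  zone       = "us-central1-a"
  cluster    = "${google_container_cluster.example.name}"
  node_count = 3

  node_config {
    machine_type = "n1-standard-1"

    # Changing tags here recreates only this node pool,
    # not the cluster.
    tags = ["foo", "bar"]
  }
}
```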
Hi Dana, thanks for the extensive reply, very helpful. Regards, Roland
I think so! If you use the `google_container_node_pool` resource, the cluster itself shouldn't be recreated.
@danawillow thank you for the explanation and workaround. Once the node pools are defined as separate `google_container_node_pool` resources, feel free to add or remove node pools; the cluster will not be destroyed.
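For existing clusters, a plausible migration path (a sketch only; the import ID format `{zone}/{cluster}/{pool}` is an assumption for provider versions of this era, so check the resource documentation) is to define the pool as a separate resource and import the live pool into state instead of recreating it:

```sh
# Hypothetical names. Define google_container_node_pool.example in
# config first, then adopt the existing pool into Terraform state.
terraform import google_container_node_pool.example \
  us-central1-a/example-cluster/example-pool
```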
Hi Dana, thanks a lot for the comments. A lot of info has been added to this issue, and I think some of the main points from vtorhonen@ fell through the cracks. One of the issues we noticed is that updating network tags on existing pools (regardless of how they were created) causes the whole cluster to be re-created. Node network tags are updatable via the GCE API. Any idea if this is something that can be fixed easily before v2 of the provider?
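For reference, instance-level network tags on plain GCE can be changed in place with gcloud (hypothetical names; note this is the GCE instance API, which, as the next comments discuss, is a different surface from the GKE node-pool API):

```sh
# Add a network tag to a single GCE instance in place.
# On GKE nodes this bypasses the GKE API, and tags applied this way
# may be lost when the node is recreated from its template.
gcloud compute instances add-tags example-node \
  --tags xyzzy --zone us-central1-a
```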
Yup, that's point 3 in my giant post above, and the reason I'm keeping this issue open. I'm hoping to work on it soon, but I can't make any guarantees. I'd say it's likely, though.
Oh no, just kidding, it's not point 3. I don't see anything that lets tags be updated at https://cloud.google.com/kubernetes-engine/docs/reference/rest/. Can you point me to the API you use to update them?
Thank you @danawillow for your constructive response. I have a couple of queries.
I'm not actually a GKE expert, I'm just very familiar with their API, so I can't really answer question 1. I'd recommend looking around the GKE docs to see if they answer that question. Likewise, I don't have an opinion on question 2, since it's not my area of expertise, but it seems to be something many of our Terraform users are happy to have the option of doing.
Hello again everyone! I've looked into things some more and, at least with the way things are right now, I'm going to have to close this issue unsolved.

The main thing here that was still open was that changing an attribute of a node pool defined inside the cluster resource recreates the entire cluster instead of just that node pool. The way we would fix this is to call nodePools.delete() and then nodePools.create() for the changed node pool. However, the new node pool won't necessarily be created at the same index in the list that the old one was at. This means that the next time you run Terraform, it could show a diff because the node pools were reordered. This is surprising behavior, and I don't want people to run into it.

Being able to reorder the node pools falls under bullet point 1 in my earlier reply, which I spent a fair bit of time trying to fix. Unfortunately, it looks like that won't be possible, for a number of reasons which I'll write up (probably in #780) once I have a bit more time.

The only case where this would be OK is if you define only a single node pool inside the cluster, since you can't reorder a list of size one. However, that's a lot of hacky logic to write into the resource. If there were absolutely no other workarounds, I'd consider it; however, there is one: using the node pool resource.

So what I can do is update the documentation to make it much clearer what the limitations are of using the node_pool list inside the cluster resource, so people can make a more informed decision about whether to use that or the node_pool resource. I'll be doing that shortly. In the meantime, sorry for the less-than-ideal outcome of this issue.
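To illustrate the reordering problem described above, here is a hypothetical cluster with two inline pools; because `node_pool` is a list, Terraform tracks each block by index, and a pool recreated through the API could come back at a different index:

```hcl
resource "google_container_cluster" "example" {
  name = "example-cluster"
  zone = "us-central1-a"

  # node_pool is a list, so each block is tracked by index.
  node_pool {
    name       = "pool-a" # node_pool.0
    node_count = 1
  }

  node_pool {
    name       = "pool-b" # node_pool.1
    node_count = 1
  }

  # If pool-a were deleted and re-added via the API, it might come
  # back as node_pool.1, and the next plan would show a diff on both
  # entries even though nothing meaningful changed.
}
```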
add warning about node pools defined in clusters. See TPG hashicorp#1712.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!
Hi there,
I've been reading through the issue history and found some related issues (#285, #408 and #474), but the issue I'm reporting is still not resolved. I'm also not quite sure if it can be solved in Terraform, but I do know for sure that updating node pool configuration of an existing cluster through Google Console can be done without recreating the cluster.
When using only the `google_container_cluster` resource (and not the node pool resource), a number of node pool configuration changes cause the whole cluster to be recreated. In practice, none of these require recreating the whole cluster, as the Google console allows these modifications in place. In addition, there are changes that should recreate only the node pool rather than the cluster (which would be the expected behaviour).
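For example, the kinds of in-place node pool updates the console allows are also exposed through gcloud (hypothetical names; flag spellings vary between gcloud releases, so treat this as a sketch):

```sh
# Resize an existing node pool in place.
gcloud container clusters resize example-cluster \
  --node-pool example-pool --num-nodes 4 --zone us-central1-a

# Enable autoscaling on the same pool, also in place.
gcloud container clusters update example-cluster \
  --enable-autoscaling --node-pool example-pool \
  --min-nodes 1 --max-nodes 5 --zone us-central1-a
```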
Terraform Version
Google provider 0.15.0
Affected Resource(s)

- google_container_cluster

We are not using any `google_container_node_pool` resources in addition to the `google_container_cluster` resource.