Not possible to use google_container_node_pool without the default node pool #773

Closed
rochdev opened this issue Nov 21, 2017 · 13 comments · Fixed by #1245
Comments

rochdev commented Nov 21, 2017

Right now it is not possible to use google_container_node_pool without also keeping the default node pool created by google_container_cluster. It should be possible to create a google_container_cluster without the default node pool so that node pools can be managed exclusively through google_container_node_pool.

I have confirmed that it is possible in GKE to create a cluster without any node pool.

See #285 and #475

@danawillow (Contributor)

Hey @rochdev, how did you create a cluster with no node pool? Did you do so by creating the cluster and then deleting the node pool, or some other way?


rochdev commented Nov 22, 2017

@danawillow I indeed created the cluster and then deleted the default node pool. It is unclear from the API documentation whether a cluster can be created without any node pool directly, and I didn't try it.

Our current strategy is to create a cluster module that includes a google_container_cluster resource and a null_resource that deletes the default node pool. The outputs of this module depend on the null_resource to ensure that external modules can only create node pools after the default node pool has been deleted (to avoid possible conflicts).
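A minimal sketch of what that module layout could look like, assuming illustrative names (var.name, var.zone, and the cluster_name output are not from the original comment) and Terraform 0.11-era syntax:

resource "google_container_cluster" "cluster" {
  name               = "${var.name}"
  zone               = "${var.zone}"
  initial_node_count = 1
}

# Deletes the default node pool as soon as the cluster exists.
resource "null_resource" "default_node_pool_deleter" {
  provisioner "local-exec" {
    command = "gcloud container node-pools delete default-pool --cluster ${google_container_cluster.cluster.name} --zone ${var.zone} --quiet"
  }
}

# Exposing the cluster name through an output that depends on the deleter
# means callers can only attach their own node pools once the default
# pool is gone.
output "cluster_name" {
  value      = "${google_container_cluster.cluster.name}"
  depends_on = ["null_resource.default_node_pool_deleter"]
}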


burdiyan commented Nov 22, 2017

We are also struggling with this issue. We finally had to delete default-pool manually, and it seems it is not tracked by Terraform at all: the next plan run does not complain about any discrepancies.

@danawillow (Contributor)

The null_resource is the way to go for this, at least for now. In this case, Terraform follows the exact same behavior as gcloud and the console: first you create the cluster, then you delete the default node pool.

@burdiyan Yup, if node pools aren't set in the cluster schema, then Terraform won't show a diff.


Stono commented Dec 28, 2017

Can anyone give me some sample code of their implementation with null_resource, as I have this exact problem to fix too?

@mattdodge

Here's an example of a container cluster, a separately managed node pool (how it should work, IMO), and then a null_resource that deletes the default pool once the cluster has been created.

resource "google_container_cluster" "cluster" {
  name = "my-cluster"
  zone = "us-west1-a"
  initial_node_count = 1
}

resource "google_container_node_pool" "pool" {
  name = "my-cluster-nodes"
  node_count = "3"
  zone = "us-west1-a"
  cluster = "${google_container_cluster.cluster.name}"
  node_config {
    machine_type = "n1-standard-1"
  }
  # Delete the default node pool before spinning this one up
  depends_on = ["null_resource.default_cluster_deleter"]
}

resource "null_resource" "default_cluster_deleter" {
  provisioner "local-exec" {
    command = <<EOF
      gcloud container node-pools \
        --project my-project \
        --quiet \
        delete default-pool \
        --cluster ${google_container_cluster.cluster.name}
EOF
  }
}

I would recommend using the --project flag in the gcloud command in the local provisioner to make sure that you aren't accidentally running against some other project. The --quiet flag bypasses the prompt to delete the node pool. For whatever reason, I could only get it to work by putting the --cluster argument after the delete command, so pardon the ugliness. The depends_on in the new node pool makes sure that the default one gets deleted first; otherwise you will see an error about trying to perform two operations on the cluster (deleting and creating) at the same time.

@danawillow (Contributor)

Thanks @mattdodge! A quick note that with #937 (which made it into our most recent release), the depends_on shouldn't be necessary anymore (though it also doesn't hurt to keep it).

@mattdodge

Oh, interesting, glad to see that made it in. I think I would still have to tell Terraform that the null resource is doing something to the cluster though, unless there's some super crazy magic going on behind the scenes. Is that right?

I'm running Terraform 0.11.2 and version 1.5.0 of the Google provider, and I needed to include it. I would rather mark the null resource as affecting the cluster than use the fake depends_on, if that were possible.
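There is no way to declare that reverse dependency directly, but one related option (not mentioned in the thread, so treat it as a hedged suggestion) is the triggers argument on null_resource: it ties the deleter to the cluster's lifecycle so the deletion re-runs if the cluster is ever recreated. It does not replace the depends_on on the new node pool, which is still what orders the delete before the create. A sketch:

resource "null_resource" "default_cluster_deleter" {
  # Re-create (and therefore re-run) this resource whenever the cluster is
  # replaced, so the default pool is deleted again for the new cluster.
  triggers {
    cluster_id = "${google_container_cluster.cluster.id}"
  }

  provisioner "local-exec" {
    command = "gcloud container node-pools delete default-pool --project my-project --quiet --cluster ${google_container_cluster.cluster.name}"
  }
}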

@danawillow (Contributor)

Oh nope you're totally right, sorry to mislead.


geekflyer commented Feb 24, 2018

Here's a small enhancement to @mattdodge's workaround: when using the node_pool config in google_container_cluster, one can initialize the cluster with an empty node pool (it still creates a node pool, but the pool itself has 0 nodes). This makes deletion of the default node pool a bit faster: no intermediate VM is created, just the useless empty node pool.

To do so, instead of:

resource "google_container_cluster" "cluster" {
  name = "my-cluster"
  zone = "us-west1-a"
  initial_node_count = 1
}

use:

resource "google_container_cluster" "cluster" {
  name = "my-cluster"
  zone = "us-west1-a"
    node_pool = [{
    name = "default-pool"
    node_count= 0
  }]
}


Stono commented Feb 24, 2018 via email

@geekflyer

@Stono

Well, you don't want to update that inline node_pool config, but you can just delete it. In my case, when I delete the node_pool config after the initial deployment, Terraform doesn't detect any changes and just ignores it:

resource "google_container_cluster" "cluster" {
  name = "my-cluster"
  zone = "us-west1-a"
}


ghost commented Nov 19, 2018

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

ghost locked and limited conversation to collaborators Nov 19, 2018