
separately managed node pool creation fails with autoscaling #7494

Closed

jba opened this issue Oct 12, 2020 · 5 comments

jba commented Oct 12, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.

Terraform Version

Terraform v0.12.28

  • provider.google v3.42.0
  • provider.google-beta v3.5.0

Affected Resource(s)

  • google_container_cluster
  • google_container_node_pool

Terraform Configuration Files

resource "google_container_cluster" "pkgsite" {
  project  = var.project
  name     = "exp-pkgsite5"
  location = "us-central1-a"

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }
}

resource "google_container_node_pool" "pkgsite" {
  project  = var.project
  name     = "default-pool"
  cluster  = google_container_cluster.pkgsite.name
  location = "us-central1-a"
  autoscaling {
    min_node_count = 3
    max_node_count = 20
  }

  node_config {
    preemptible  = true
    machine_type = "e2-standard-2"

    metadata = {
      disable-legacy-endpoints = "true"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}

(This differs from the "separately managed node pool" example at https://www.terraform.io/docs/providers/google/r/container_cluster.html only in that the node_count = 1 line has been replaced with an autoscaling block.)
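One variation worth noting (an assumption about the provider, not something confirmed in this thread): google_container_node_pool also accepts an initial_node_count argument, which sets the pool's starting size (per zone) at creation time, independently of the autoscaler's min/max bounds. A sketch of the node pool above with an explicit starting size, under that assumption:

```hcl
# Sketch (assumption): give the autoscaled pool an explicit starting size.
# initial_node_count is a documented google_container_node_pool argument,
# but using it as a fix for this issue is untested speculation.
resource "google_container_node_pool" "pkgsite" {
  project  = var.project
  name     = "default-pool"
  cluster  = google_container_cluster.pkgsite.name
  location = "us-central1-a"

  # Starting node count per zone; the autoscaler adjusts it afterwards
  # within the min/max bounds below.
  initial_node_count = 3

  autoscaling {
    min_node_count = 3
    max_node_count = 20
  }

  # node_config omitted for brevity; it would match the block shown above.
}
```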

Expected Behavior

A properly configured node pool, with nodes created and managed by the autoscaler (between 3 and 20).

Actual Behavior

GCP reports a valid cluster and node pool (both in the UI and via gcloud), but kubectl get nodes reports no nodes, and pods never start (they remain in the Pending state).

Steps to Reproduce

  1. terraform apply
  2. gcloud --project $PROJECT container clusters get-credentials exp-pkgsite5
  3. kubectl get nodes
@ghost ghost added the bug label Oct 12, 2020
@edwardmedia edwardmedia self-assigned this Oct 12, 2020
edwardmedia (Contributor) commented

@jba can you share the debug log?

edwardmedia (Contributor) commented

@jba can you repro the issue?

jba (Author) commented Oct 15, 2020

I got pulled off this project temporarily. I will try to get you a debug log by Monday.

edwardmedia (Contributor) commented Oct 19, 2020

Closing this issue. @jba Please reopen it once you have the debug log. Thanks

ghost commented Nov 19, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked as resolved and limited conversation to collaborators Nov 19, 2020
@github-actions github-actions bot added the service/container and forward/review labels Jan 14, 2025