Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.
If an issue is assigned to the modular-magician user, it is either in the process of being autogenerated or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to hashibot, a community member has claimed the issue already.
Terraform Version
Terraform v0.12.28
provider.google v3.42.0
provider.google-beta v3.5.0
Affected Resource(s)
google_container_cluster
google_container_node_pool
Terraform Configuration Files
resource "google_container_cluster" "pkgsite" {
  project  = var.project
  name     = "exp-pkgsite5"
  location = "us-central1-a"

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  master_auth {
    username = ""
    password = ""

    client_certificate_config {
      issue_client_certificate = false
    }
  }
}

resource "google_container_node_pool" "pkgsite" {
  project  = var.project
  name     = "default-pool"
  cluster  = google_container_cluster.pkgsite.name
  location = "us-central1-a"

  autoscaling {
    min_node_count = 3
    max_node_count = 20
  }

  node_config {
    preemptible  = true
    machine_type = "e2-standard-2"

    metadata = {
      disable-legacy-endpoints = "true"
    }

    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}
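One hedged variant that may be worth testing (my assumption, not something confirmed in this issue): GKE's cluster autoscaler resizes an existing pool between the min/max bounds, but it does not by itself guarantee the pool starts with registered nodes, so giving the node pool an explicit starting size could change the behavior. The initial_node_count value below is hypothetical; all other fields mirror the configuration above.

```hcl
resource "google_container_node_pool" "pkgsite" {
  project  = var.project
  name     = "default-pool"
  cluster  = google_container_cluster.pkgsite.name
  location = "us-central1-a"

  # Hypothetical addition: give the pool a concrete per-zone starting
  # size; the autoscaler then adjusts within the min/max bounds below.
  initial_node_count = 3

  autoscaling {
    min_node_count = 3
    max_node_count = 20
  }

  # node_config block unchanged from the configuration above.
}
```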
(The original configuration differs from the "separately managed node pool" example at https://www.terraform.io/docs/providers/google/r/container_cluster.html only in that the node_count = 1 line has been replaced with an autoscaling block.)

Expected Behavior
A properly configured node pool.

Actual Behavior
GCP reports a valid cluster and node pool (both in the UI and via gcloud), but kubectl get nodes says there are no nodes, and pods never start (they remain in the Pending state).

Steps to Reproduce
terraform apply
gcloud --project $PROJECT container clusters get-credentials exp-pkgsite5
kubectl get nodes

ghost locked this issue as resolved and limited conversation to collaborators on Nov 19, 2020.
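For anyone debugging a similar symptom, a hedged sketch of diagnostic commands (the cluster, pool, and zone names are taken from the configuration in this issue; $PROJECT is assumed to be set as in the repro steps). These compare what GKE reports for the pool against the VMs and nodes that actually exist:

```shell
# What does GKE think the pool's node count and status are?
gcloud container node-pools describe default-pool \
  --cluster exp-pkgsite5 --zone us-central1-a --project "$PROJECT"

# Were any underlying VM instances created for the pool at all?
gcloud compute instances list --project "$PROJECT" \
  --filter="name~exp-pkgsite5"

# Do nodes exist but fail to register with the control plane?
kubectl get nodes -o wide
kubectl get events --all-namespaces \
  --sort-by=.metadata.creationTimestamp
```

If the instances list is empty, the pool was created without provisioning VMs; if instances exist but kubectl get nodes is empty, the nodes are failing to register, which points at a different class of problem (networking, scopes, or node startup).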