
Regional K8 Cluster Created with Non-Functional Node Pools #2380

Closed

nathanwilk7 opened this issue Oct 31, 2018 · 2 comments

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

$ terraform -v
Terraform v0.11.10
+ provider.google-beta v1.19.0

Affected Resource(s)

  • google_container_cluster
  • google_container_node_pool

Terraform Configuration Files

provider "google-beta" {
  project = "my-project"
  region = "us-east1"
}

resource "google_container_cluster" "cluster" {
  name = "my-cluster"
  project = "my-project"
  region = "us-east1"
  description = "Multi-zonal cluster with node pools"
  provider = "google-beta"
  initial_node_count = 2
}

resource "google_container_node_pool" "my-node-pool" {
  name       = "my-node-pool"
  project    = "my-project"
  region     = "us-east1"
  cluster    = "my-cluster"
  provider = "google-beta"
  node_count = 2
  node_config {
    machine_type = "n1-standard-4"
  }
  depends_on = ["google_container_cluster.cluster"]
}
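
As an aside, the same node pool can reference the cluster by interpolation instead of a hard-coded name, which lets Terraform infer the dependency without an explicit depends_on. A minimal sketch, assuming the resource names above (Terraform 0.11 syntax):

resource "google_container_node_pool" "my-node-pool" {
  name       = "my-node-pool"
  project    = "my-project"
  region     = "us-east1"
  provider   = "google-beta"

  # Interpolating the cluster name creates an implicit dependency on the
  # cluster resource, so an explicit depends_on is no longer needed.
  cluster    = "${google_container_cluster.cluster.name}"
  node_count = 2

  node_config {
    machine_type = "n1-standard-4"
  }
}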

Debug Output

https://gist.github.com/nathanwilk7/cffc7658cd96a2e319476d1e10e326f4

Expected Behavior

After creating a Kubernetes cluster and attaching a node pool whose nodes have 4 vCPUs each, a pod requesting 4 vCPUs should be scheduled onto that node pool and run.

Actual Behavior

After creating a Kubernetes cluster and attaching a node pool whose nodes have 4 vCPUs each, a pod requesting 4 vCPUs fails to schedule with this error:

FailedScheduling  1s (x5 over 8s)  default-scheduler  0/12 nodes are available: 12 Insufficient cpu.

This suggests to me that there is an issue with the way the node pool was associated with the cluster. The node pool exists and appears to have been created without issue. It's possible this is a Kubernetes issue rather than a Terraform issue, but we were able to create the same setup successfully via the Google Cloud web console. Therefore, I believe this is either a mistake in my configuration or a bug.
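
For anyone hitting the same symptom, one quick check is to compare each node's CPU capacity against its allocatable CPU, since the scheduler only considers allocatable. A sketch of the commands (the node names are whatever GKE generated for the pools):

# Per-node CPU capacity vs. allocatable CPU; on GKE, allocatable is lower
# than capacity because CPU is reserved for the kubelet and system daemons.
kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU_CAPACITY:.status.capacity.cpu,CPU_ALLOCATABLE:.status.allocatable.cpu'

# Detailed view of a single node; replace <node-name> with a real node name.
kubectl describe node <node-name>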

Steps to Reproduce

  1. terraform init
  2. terraform apply
  3. Create a file called pod.yaml with the content below
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
    resources:
      requests:
        cpu: "4"
  4. kubectl apply -f pod.yaml
  5. kubectl describe po/nginx
...
  Warning  FailedScheduling  1s (x5 over 8s)  default-scheduler  0/12 nodes are available: 12 Insufficient cpu.
  6. gcloud container node-pools list --cluster my-cluster --region us-east1
NAME          MACHINE_TYPE   DISK_SIZE_GB  NODE_VERSION
default-pool  n1-standard-1  100           1.9.7-gke.6
my-node-pool  n1-standard-4  100           1.9.7-gke.6

Important Factoids

References

I've also tried this config but had similar results.

@ghost ghost added the bug label Oct 31, 2018
@nathanwilk7 (Author)

Closing: this issue was due to my mistaken belief that a pod requesting 4 CPUs could be scheduled on a 4-vCPU node in Kubernetes.
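
For context: GKE reserves part of each node's CPU for the kubelet and system daemons, so an n1-standard-4 node's allocatable CPU is slightly below 4, and a pod requesting a full 4 CPUs can never fit. A request just under a full node will schedule; a sketch of the adjusted pod spec (3500m is illustrative, the exact headroom depends on the node's allocatable value):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
    resources:
      requests:
        # Request slightly less than a full node so the pod fits within the
        # node's allocatable CPU (3500m is illustrative, not a GKE constant).
        cpu: "3500m"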

@ghost commented Mar 29, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 hashibot-feedback@hashicorp.com. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 29, 2020