
google_compute_autoscaler: Error 400: Required field 'autoscaler' not specified #15056

Closed
mattes opened this issue Jun 5, 2017 · 7 comments · Fixed by #15101

Comments

@mattes

mattes commented Jun 5, 2017

Terraform Version

Terraform v0.9.6

Affected Resource(s)

  • google_compute_autoscaler

Terraform Configuration Files

resource "google_compute_autoscaler" "primary" {
  name   = "primary"
  zone   = "${google_container_cluster.primary.zone}"
  target = "${google_compute_instance_group_manager.primary.self_link}"

  autoscaling_policy = {
    max_replicas    = 5
    min_replicas    = 4
    cooldown_period = 60

    cpu_utilization {
      target = 0.75
    }
  }
}

Output

google_compute_autoscaler.primary: Error updating Autoscaler: googleapi: Error 400: Required field 'autoscaler' not specified, required
@danielcompton
Contributor

That autoscaler looks almost identical to my working config. Can you share more of your config? I suspect there is something else going on here.

@mattes
Author

mattes commented Jun 5, 2017

resource "google_container_cluster" "primary" {
  name               = "primary"
  zone               = "us-central1-a"
  initial_node_count = 3

  master_auth {
    username = "${var.cluster_master_auth_username}"
    password = "${var.cluster_master_auth_password}"
  }

  node_config {
    machine_type = "n1-standard-1"
    disk_size_gb = 100
  }
}


resource "google_compute_instance_group_manager" "primary" {
  name               = "${var.google_compute_instance_group_manager_primary_name}"
  zone               = "${google_container_cluster.primary.zone}"
  instance_template  = "https://www.googleapis.com/compute/v1/projects/${var.google_project_id}/global/instanceTemplates/${replace(var.google_compute_instance_group_manager_primary_name, "/-grp$/", "")}"
  base_instance_name = "${replace(var.google_compute_instance_group_manager_primary_name, "/-grp$/", "")}"

  target_size = 3
  named_port {
    name = "http-to-https-port"
    port = 31000
  }
}

If it's my config, maybe the target_size is causing problems?

@mattes
Author

mattes commented Jun 5, 2017

Just tested without target_size. Same error.

@rileykarson
Contributor

Hi @mattes,
The error isn't in your config - it's because Terraform is trying to perform an update, but that functionality is broken right now. You can work around this for now by changing the name to force a new resource, or by manually running terraform destroy with your resource as a -target parameter, followed by terraform apply.

Note that both approaches will involve destroying and recreating the resource; if you have other resources that depend on it, you may want to edit it manually in the Cloud Console or with the gcloud CLI until updating works properly again.
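
For illustration, a minimal sketch of the destroy/recreate workaround, assuming the resource address google_compute_autoscaler.primary from the config above (adjust the address to your own configuration):

# Destroy just the autoscaler (resources that depend on it will also be destroyed):
terraform destroy -target=google_compute_autoscaler.primary

# Recreate it on the next apply:
terraform apply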

@danielcompton
Contributor

For completeness, you can also use

lifecycle {
  create_before_destroy = true
}

although I'm not sure how well that will work with downstream resources.
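
For illustration only, a sketch of where that block would sit in the reporter's config; as noted above, it is not certain how well this interacts with the fixed name or with downstream resources:

resource "google_compute_autoscaler" "primary" {
  name   = "primary"
  zone   = "${google_container_cluster.primary.zone}"
  target = "${google_compute_instance_group_manager.primary.self_link}"

  # Force replacement instead of an in-place update; the new autoscaler
  # is created before the old one is destroyed.
  lifecycle {
    create_before_destroy = true
  }

  # ... autoscaling_policy block as in the original report ...
}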

@matschaffer
Contributor

For anyone who lands here like I did: it looks like a proper fix for this went out in Terraform 0.9.7.
