
Terraform timeout on AWS Launch Config change when using create_before_destroy #5415

Closed
bryanvaz opened this issue Mar 2, 2016 · 8 comments


@bryanvaz

bryanvaz commented Mar 2, 2016

Hi, I'm not sure if it is an AWS region issue, but in the ap-southeast-1 region Terraform will time out when changing an EC2 Launch Configuration if the following lifecycle block is included (there is no timeout when it is omitted and the config changes).

lifecycle {
  create_before_destroy = true
}

LC config:

##############################
#   anchor launch config (PIER-m3-medium)

resource "aws_launch_configuration" "yard_m3_medium_pier" {
    associate_public_ip_address = true
    enable_monitoring = false

    name = "${var.cluster_name}-pier-m3-medium-LC"
    image_id = "${var.weave_ami}"
    instance_type = "m3.medium"
    spot_price = "0.012"
    iam_instance_profile = "${aws_iam_instance_profile.fleetyards_role.name}"
    key_name = "${var.ssh_key_name}"
    security_groups = ["${aws_security_group.yard_sg.id}"]

    user_data = "${file("fleetyard-user-data.sh")}"

    root_block_device {
      volume_size = "${var.weave_ebs_root_size}"
      volume_type = "standard"
      delete_on_termination = true
    }
    ebs_block_device {
      device_name = "${var.weave_ebs_second_name}"
      volume_size = "${var.weave_ebs_second_size}"
      volume_type = "standard"
      delete_on_termination = true
      snapshot_id = "${var.weave_ebs_second_snap}"
    }

    lifecycle {
      create_before_destroy = true
    }
}
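
(For reference, the pattern usually recommended when combining create_before_destroy with a launch configuration is to drop the fixed name and use name_prefix, so the replacement LC can be created while the old one still exists; a fixed name can make the create call fail and, with the retry masking discussed later in this thread, surface only as a timeout. A minimal sketch of that change, keeping the variables from the config above and deriving the prefix from the original name as an assumption:

resource "aws_launch_configuration" "yard_m3_medium_pier" {
    # name_prefix lets AWS generate a unique name for the replacement LC,
    # avoiding a name collision with the LC that is about to be destroyed.
    # (name and name_prefix cannot both be set.)
    name_prefix = "${var.cluster_name}-pier-m3-medium-"

    image_id      = "${var.weave_ami}"
    instance_type = "m3.medium"
    # ... remaining arguments unchanged ...

    lifecycle {
      create_before_destroy = true
    }
}
)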

Error applying plan:

1 error(s) occurred:

  • aws_launch_configuration.yard_m3_medium_pier: Error creating launch configuration: timeout while waiting for state to become '[success]'

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

@ming535

ming535 commented Mar 9, 2016

I have the same error.

@jamescarr

I am also hitting this, even on a simple LC creation, in 0.6.12.

@jamescarr

For reference, here is my launch config:

resource "aws_launch_configuration" "production-worker_node" {
  lifecycle { create_before_destroy = true }
  name_prefix = "production-worker_node-"
  ephemeral_block_device = {
      device_name = "/dev/xvdb"
      virtual_name = "ephemeral0"
  }

  image_id = "${var.worker_node_image}"
  instance_type = "m3.2xlarge"
  key_name = "xxxxxxxxxxxxx"
  associate_public_ip_address = true
  security_groups = ["${var.sg_worker}"]
  iam_instance_profile = "${aws_iam_instance_profile.worker_profile.name}"
  user_data = <<EOF
{
  "zapier_environment":"production",
  "zapier_role":"worker_node"
}
EOF

}

@jamescarr

Debug output, in case it helps. I see no errors, just retry attempts.

2016/03/09 11:05:19 [DEBUG] terraform-provider-aws: 2016/03/09 11:05:19 [DEBUG] autoscaling create launch configuration: {
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   AssociatePublicIpAddress: true,
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   BlockDeviceMappings: [{
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:       DeviceName: "/dev/xvdb",
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:       VirtualName: "ephemeral0"
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:     }],
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   EbsOptimized: false,
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   IamInstanceProfile: "production-worker_profile",
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   ImageId: "ami-xxxxxxxxx",
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   InstanceMonitoring: {
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:     Enabled: true
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   },
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   InstanceType: "m3.2xlarge",
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   KeyName: "failsafe",
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   LaunchConfigurationName: "produciton-worker_node-sazjn72s35a4zayxz7tynqzpyy",
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   SecurityGroups: ["sg-xxxxxxxxxx"],
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws:   UserData: "ewogICJ6YXBpZXJfZW52aXJvbm1lbnQiOiJwcm9kdWN0aW9uIiwKICAiemFwaWVyX3JvbGUiOiJ3b3JrZXJfbm9kZSIKfQo="
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws: }
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws: 2016/03/09 11:05:19 [DEBUG] Waiting for state to become: [success]
2016/03/09 11:05:19 [DEBUG] terraform-provider-aws: 2016/03/09 11:05:19 [TRACE] Waiting 500ms before next try
  ephemeral_block_device.3043895074.virtual_name: "" => "ephemeral0"
  iam_instance_profile:                           "" => "production-worker_profile"
  image_id:                                       "" => "ami-xxxxxxxxxxxx"
  instance_type:                                  "" => "m3.2xlarge"
  key_name:                                       "" => "failsafe"
  name:                                           "" => "<computed>"
  name_prefix:                                    "" => "produciton-worker_node-"
  root_block_device.#:                            "" => "<computed>"
  security_groups.#:                              "" => "1"
  security_groups.2013537064:                     "" => "sg-xxxxxxxxxxx"
  user_data:                                      "" => "0fc8af54fcc329889c3058df117cedc75e16809f"
2016/03/09 11:05:20 [DEBUG] terraform-provider-aws: 2016/03/09 11:05:20 [TRACE] Waiting 500ms before next try
2016/03/09 11:05:21 [DEBUG] terraform-provider-aws: 2016/03/09 11:05:21 [TRACE] Waiting 500ms before next try
2016/03/09 11:05:22 [DEBUG] terraform-provider-aws: 2016/03/09 11:05:22 [TRACE] Waiting 800ms before next try
2016/03/09 11:05:23 [DEBUG] terraform-provider-aws: 2016/03/09 11:05:23 [TRACE] Waiting 1.6s before next try
2016/03/09 11:05:24 [DEBUG] vertex aws_autoscaling_group.worker_node, waiting for: aws_launch_configuration.production-worker_node
2016/03/09 11:05:24 [DEBUG] vertex root, waiting for: provider.aws (close)
2016/03/09 11:05:24 [DEBUG] vertex provider.aws (close), waiting for: aws_launch_configuration.production-worker_node
2016/03/09 11:05:25 [DEBUG] terraform-provider-aws: 2016/03/09 11:05:25 [TRACE] Waiting 3.2s before next try
2016/03/09 11:05:29 [DEBUG] vertex root, waiting for: provider.aws (close)
2016/03/09 11:05:29 [DEBUG] vertex aws_autoscaling_group.worker_node, waiting for: aws_launch_configuration.production-worker_node
2016/03/09 11:05:29 [DEBUG] terraform-provider-aws: 2016/03/09 11:05:29 [TRACE] Waiting 6.4s before next try
2016/03/09 11:05:29 [DEBUG] vertex provider.aws (close), waiting for: aws_launch_configuration.production-worker_node
2016/03/09 11:05:34 [DEBUG] vertex root, waiting for: provider.aws (close)
2016/03/09 11:05:34 [DEBUG] vertex aws_autoscaling_group.worker_node, waiting for: aws_launch_configuration.production-worker_node
2016/03/09 11:05:34 [DEBUG] vertex provider.aws (close), waiting for: aws_launch_configuration.production-worker_node
2016/03/09 11:05:36 [DEBUG] terraform-provider-aws: 2016/03/09 11:05:36 [TRACE] Waiting 10s before next try
2016/03/09 11:05:39 [DEBUG] vertex root, waiting for: provider.aws (close)
2016/03/09 11:05:39 [DEBUG] vertex aws_autoscaling_group.worker_node, waiting for: aws_launch_configuration.production-worker_node
2016/03/09 11:05:39 [DEBUG] vertex provider.aws (close), waiting for: aws_launch_configuration.production-worker_node
2016/03/09 11:05:44 [DEBUG] vertex root, waiting for: provider.aws (close)
2016/03/09 11:05:44 [DEBUG] vertex aws_autoscaling_group.worker_node, waiting for: aws_launch_configuration.production-worker_node
2016/03/09 11:05:44 [DEBUG] vertex provider.aws (close), waiting for: aws_launch_configuration.production-worker_node
2016/03/09 11:05:46 [DEBUG] terraform-provider-aws: 2016/03/09 11:05:46 [TRACE] Waiting 10s before next try
2016/03/09 11:05:49 [DEBUG] vertex root, waiting for: provider.aws (close)
2016/03/09 11:05:49 [DEBUG] vertex aws_autoscaling_group.worker_node, waiting for: aws_launch_configuration.production-worker_node
2016/03/09 11:05:49 [DEBUG] vertex provider.aws (close), waiting for: aws_launch_configuration.production-worker_node

@jamescarr

My issue was actually due to specifying a non-existent AMI.

@jamescarr

In my situation at least, it would be helpful to have an error that indicates the provided AMI doesn't exist. I'm not sure why there wasn't an underlying AWS error here.
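
(As a side note, in Terraform versions later than the 0.6.x discussed here, one way to avoid passing a non-existent AMI ID to a launch configuration is to resolve it through the aws_ami data source, which fails at refresh time if nothing matches. A rough sketch, with hypothetical owner and name-filter values:

data "aws_ami" "worker_node" {
  most_recent = true
  owners      = ["self"]              # hypothetical: account that owns the worker AMIs

  filter {
    name   = "name"
    values = ["production-worker-*"]  # hypothetical AMI name pattern
  }
}

resource "aws_launch_configuration" "production-worker_node" {
  # Using the looked-up AMI ID ensures the image actually exists in this region.
  image_id = "${data.aws_ami.worker_node.id}"
  # ... rest of the launch configuration as above ...
}
)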

@catsby

catsby (Contributor) commented Mar 17, 2016

Hey @jamescarr – there's normally an underlying AWS error; we had another issue with retries recently that has since been patched. That issue was preventing the real error from surfacing. The latest v0.6.13 should have things sorted out!

Ref:

@catsby closed this as completed Mar 17, 2016
@ghost

ghost commented Apr 27, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators Apr 27, 2020