
Display "value of 'count' cannot be computed" when passing list or map with computed value to the module #10857

Closed
gongo opened this issue Dec 20, 2016 · 38 comments

gongo commented Dec 20, 2016

Hi.

When passing a list or map with a computed value to a module, terraform plan displays the message shown in the title.

```
$ terraform plan
Error configuring: 1 error(s) occurred:

* aws_sns_topic_subscription.event-test: value of 'count' cannot be computed
```

Terraform Version

```
$ terraform -v
Terraform v0.8.1
```

This error has occurred since v0.8.0.

Affected Resource(s)

(Maybe) Terraform's core

Terraform Configuration Files

https://gist.github.com/gongo/362975d478a9f4b85b3a213ddcc4d0cf

Debug Output

https://gist.github.com/gongo/362975d478a9f4b85b3a213ddcc4d0cf#gistcomment-1952568

Panic Output

No panic.

Expected Behavior

No error occurs.

Actual Behavior

Error occurs 😱

Steps to Reproduce

  1. Use a Terraform configuration like the one above.
  2. Run terraform plan.
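
A minimal sketch of the failing pattern, reconstructed from the workaround diffs in the next comment (the Lambda endpoint value is assumed):

```hcl
# main.tf -- the module receives ARNs that are only known after apply
module "sns_subscription_to_lambda" {
  source = "./module"

  topic_arns = [
    "${aws_sns_topic.test1.arn}",
    "${aws_sns_topic.test2.arn}",
  ]
}

# module/sns_topic_subscription.tf -- count is derived from the computed list
variable "topic_arns" {
  type = "list"
}

resource "aws_sns_topic_subscription" "event-test" {
  # length() of a computed value is what triggers the error
  count = "${length(var.topic_arns)}"

  topic_arn = "${element(var.topic_arns, count.index)}"
  protocol  = "lambda"
  endpoint  = "arn:aws:lambda:ap-northeast-1:012345678901:function:test-event" # assumed
}
```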
gongo (Author) commented Dec 20, 2016

Workaround 1

Pass the count directly.

```diff
--- a/main.tf
+++ b/main.tf
@@ -23,4 +23,6 @@ module "sns_subscription_to_lambda" {
     "${aws_sns_topic.test1.arn}",
     "${aws_sns_topic.test2.arn}",
   ]
+
+  topic_arn_count = 2
 }
diff --git a/module/sns_topic_subscription.tf b/module/sns_topic_subscription.tf
index 366d76b..2a5de2c 100644
--- a/module/sns_topic_subscription.tf
+++ b/module/sns_topic_subscription.tf
@@ -2,8 +2,12 @@ variable "topic_arns" {
   type = "list"
 }

+variable "topic_arn_count" {
+  type = "string"
+}
+
 resource "aws_sns_topic_subscription" "event-test" {
-  count = "${length(var.topic_arns)}"
+  count = "${var.topic_arn_count}"

   topic_arn = "${element(var.topic_arns, count.index)}"
   protocol  = "lambda"
```

Workaround 2

Do not pass computed values.

```diff
--- a/main.tf
+++ b/main.tf
@@ -20,7 +20,7 @@ module "sns_subscription_to_lambda" {
   source = "./module"

   topic_arns = [
-    "${aws_sns_topic.test1.arn}",
-    "${aws_sns_topic.test2.arn}",
+    "arn:aws:sns:ap-northeast-1:012345678901:event1",
+    "arn:aws:sns:ap-northeast-1:012345678901:event2",
   ]
 }
```

@mitchellh (Contributor)

This is the correct behavior, not because you're passing in a list, but because you're calling the function length on a computed value. We're hoping to relax this constraint over time, but at the moment that is not allowed, and it appears you've found the workarounds that work:

Workaround 1: You didn't use length
Workaround 2: You passed static values (non-computed)

gongo (Author) commented Dec 20, 2016

@mitchellh

Thanks for the reply! I understand.

We're hoping to relax this constraint over time

I expect it 😄

pikeas (Contributor) commented Jan 4, 2017

Just bit by this. Is there another issue to follow for progress on when this is supported?

Also, this error message is confusing. Could it be updated to value of "count" parameter cannot depend on a computed value (link to docs or this GH issue)?

fgimian commented Jan 9, 2017

Likewise, just got caught out by this today too. Really hoping this is being tracked somewhere 😄

Huge thanks everyone
Fotis

@tata2000

Computed counts were very useful for performing conditional updates in a module.
Very sad to see this gone in the new version.

pdecat (Contributor) commented Jan 18, 2017

Thanks for the workarounds.

Dupe of #1497 I believe.

iroller (Contributor) commented Jan 25, 2017

We're also using computed count for conditionals. I can't see how to properly upgrade to a newer Terraform version without breaking that functionality. Both workarounds make you switch from a dynamic to a static count value.
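
For context, the conditional pattern being lost here is a 0/1 count driven by an expression; a rough sketch in pre-0.12 syntax (the resource type and variable name are illustrative, not from this thread):

```hcl
# "if" via count: 1 creates the resource, 0 skips it.
# Before this restriction, the switch could itself be a computed value.
variable "create_logging_bucket" {
  default = 1
}

resource "aws_s3_bucket" "logs" {
  count  = "${var.create_logging_bucket}"
  bucket = "example-logs-bucket"
}
```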

@johnrengelman (Contributor)

So, I find myself doing something like the following (especially when testing modules):

```hcl
// module/foo.tf
variable "ids" {
  type = "list"
}

resource "some_resource" "foo" {
  count = "${length(var.ids)}"
}

// project.tf
module "foo" {
  ids = ["${some_resource.bar.id}", "${some_resource.baz.id}"]
}
```

In this case, I'm using length but couldn't the length value be computed statically even though each element is computed?

hauleth (Contributor) commented Jun 23, 2017

This is strange, but I have a module that creates a list of aws_instances and then passes their IDs as an output value. That list is then passed to another module. Everything is fine when I define the instance like this:

```hcl
resource "aws_instance" "client" {
  ami                  = "${var.ami}"
  instance_type        = "${var.instance_type}"
  subnet_id            = "${element(var.private_subnets, count.index)}"
  iam_instance_profile = "${aws_iam_instance_profile.client.name}"

  key_name = "${var.key_name}"

  count = 5

  vpc_security_group_ids = [
    "${var.security_group_ids}",
    "${aws_security_group.all_egress.id}",
  ]

  tags {
    App  = "client"
    Name = "client-${count.index}"
  }

  // user_data = "${file("${path.module}/files/client.cloud-config.yml")}"
}
```

However, as soon as I uncomment the user_data value, everything goes south and I get an error:

```
* module.cluster-lb-test.aws_elb_attachment.instances: aws_elb_attachment.instances: value of 'count' cannot be computed
```

@apparentlymart (Contributor)

Hi @hauleth! Sorry for the confusing behavior there.

What's going on here is that changing user_data is a "forces new resource" change, which means that the current EC2 instance must be deleted and replaced with a new one. At that point, Terraform doesn't know the ids of the new instances (since they've not been created yet), and due to the limitations described in this issue, count cannot be populated.

Eventually something like #4149 will make this smoother, but for now the workaround is to manually target that EC2 instance for replacement first, thus allowing Terraform to complete the instance replacement before trying to deal with the count attribute:

```
terraform plan -target=aws_instance.client -out=tfplan
# review the plan; it should contain only the aws_instance and its dependencies
terraform apply tfplan
terraform plan -out=tfplan
# review the plan; it should now contain anything else that needs to be updated as a result
terraform apply tfplan
```

In principle Terraform should be able to tell how many instances there are even though it doesn't yet have their ids, but currently that doesn't work due to some limitations of the interpolation language. Hopefully we can make that work better in future too, which may avoid the need for a two-step apply process in this specific situation.

hafizullah commented Aug 29, 2017

Just got bitten by this issue as well with TF 0.10.2.

I am trying to create routes for multiple route tables, and I am calculating the count based on the number of route tables in the list.

in4mer commented Sep 12, 2017

I'm now dealing with this issue, after stumbling through several others and trying workaround after workaround along the way.

I'm passing a list of strings (with a hard-coded number of elements) into a module. I'm interpolating into those strings in the module definition, but not in a way that changes the argument count. I'm trying to use count on the passed array from within the module, and I'm now seeing this issue.

It appears that this issue is much deeper; passing a simple count parameter doesn't actually fix it. Something is tainted further down in the data structure, somehow. I'm using null_data_source to construct two different NDSs, and thereby lists, of the two argument types that can be passed into the module:

```hcl
data "null_data_source" "rule_splitter" {
  count  = "${var.cidr_rule_count + var.src_rule_count}"
  inputs = {
    rule_type = "${element(split(",", element(compact(var.rule_set), count.index)), 0)}"
    rule      = "${element(compact(var.rule_set), count.index)}"
  }
}

data "null_data_source" "cidr_rule_set" {
#  count   = "${length(matchkeys(data.null_data_source.rule_splitter.*.inputs.rule,
#                                data.null_data_source.rule_splitter.*.inputs.rule_type,
#                                list("cidr_block")))}"
  count  = "${var.cidr_rule_count}"
  inputs = {
    d = "${element(matchkeys(data.null_data_source.rule_splitter.*.inputs.rule,
                             data.null_data_source.rule_splitter.*.inputs.rule_type,
                             list("cidr_block")), count.index)}"
  }
}

data "null_data_source" "sgsrc_rule_set" {
#  count   = "${length(matchkeys(data.null_data_source.rule_splitter.*.inputs.rule,
#                                data.null_data_source.rule_splitter.*.inputs.rule_type,
#                                list("source_sg")))}"
  count  = "${var.src_rule_count}"
  inputs = {
    d = "${element(matchkeys(data.null_data_source.rule_splitter.*.inputs.rule,
                             data.null_data_source.rule_splitter.*.inputs.rule_type,
                             list("source_sg")), count.index)}"
  }
}
```

If I pass a fixed count into the main NDS "rule_splitter", that one works. But the next failure is at the next NDS, which tries to construct its count using length(data.nds.rule_splitter.*.inputs.d); that should be a known result based on the original var.rule_count (formerly length(var.rule_set)). Even when explicitly setting count from var.rule_count, it still breaks downstream (even though the count is known).

Now I've completely fallen down this rabbit hole, and I'm to the point where subsequent list compilation is failing for a plethora of crazy reasons. I'm abandoning this attempt, even though it worked for a while.

@zoltrain

I've also just been bitten by this.

Use case: prefix lists. I want to restrict access to an S3 endpoint based on not just the prefix list target in a security group, but also a network ACL. I don't think there's any way of knowing the CIDR block count ahead of time, so it needs to be calculated.

I want to add a network rule per CIDR block to allow access to S3. I can't think of a way to get a length value without having to use a computed value. Even accessing a prefix list via a data type still doesn't help.

The example in the docs just accesses this list value directly by index, which is not very helpful if you don't know how many rules you need to create.

E.g. the S3 prefix list in eu-west-2 returns three disparate CIDR blocks, and you need them all, as the packages domain for the yum repos is bound to the last CIDR block in that list.

https://www.terraform.io/docs/providers/aws/d/prefix_list.html

Can anyone think of a workaround for this use case?

@tata2000

@zoltrain
Check if you can run with only a single target; if you can, run it with the target that creates the CIDRs and then run it in full.

@seanjfellows

I don't understand why this ticket is marked closed. The broken behavior still exists in Terraform 0.11.2

@HighwayofLife

There is another discussion in an open ticket regarding this issue... #12570

@eschwartz

The problem I see with the "explicit count variable" workaround is that it introduces room for human error: var.resource_count might not match length(var.resource_list). In that case, terraform may appear to succeed, but not actually deploy the configuration you expected.

To mitigate this concern, I propose...

Workaround 3:

Verify that resource_count == length(resource_list)

```hcl
variable "topic_arns" {
  type = "list"
}

variable "topic_arns_count" {
}

resource "aws_sns_topic_subscription" "event-test" {
  # Explicitly define count (not computed)
  count = "${var.topic_arns_count}"

  topic_arn = "${element(var.topic_arns, count.index)}"
  protocol  = "lambda"
  endpoint  = "arn:aws:lambda:ap-northeast-1:012345678901:function:test-event"
}

# Verify that the count matches the list
resource "null_resource" "verify_list_count" {
  provisioner "local-exec" {
    command = <<SH
if [ ${var.topic_arns_count} -ne ${length(var.topic_arns)} ]; then
  echo "var.topic_arns_count must match the actual length of var.topic_arns";
  exit 1;
fi
SH
  }

  # Rerun this script, if the input values change
  triggers {
    topic_arns_count_computed = "${length(var.topic_arns)}"
    topic_arns_count_provided = "${var.topic_arns_count}"
  }
}
```

At least this way, you'll get a useful error message, instead of silently-broken behavior.

@seanjfellows

@eschwartz I appreciate your efforts but I'm pretty sure we're at the point where we will be writing a script to emit a .tf file (with no variables) instead of resorting to trying to use the Terraform language to express this. :-/

MuriloDalRi pushed a commit to alphagov/prometheus-aws-configuration-beta that referenced this issue Jun 13, 2018
Count is hardcoded due to hashicorp/terraform#10857, meaning that we cannot have a count based on a computed value on the first deploy. `3` represents one record for the aws_acm_certificate.monitoring_cert domain and two for its subject_alternative_names.
Zambozo commented Jun 14, 2018

Just hit this issue myself:

```hcl
resource "aws_security_group_rule" "allow_all_bastion" {
  count                    = "${length(var.security_group_ids)}"
  type                     = "ingress"
  from_port                = 0
  to_port                  = 0
  protocol                 = "-1"
  source_security_group_id = "${element(var.security_group_ids, count.index)}"
  security_group_id        = "${aws_security_group.bastion_ssh_sg.id}"
}
```

```hcl
  security_group_ids = [
    "${module.vpc.blqblq}",
    "${module.vpc.blqblq2}",
    "${module.vpc.blqblq3}",
    "${module.vpc.blqblq4}",
    "${module.vpc.blqblq4}",
    "${module.vpc.blqblq5}",
  ]
```

AbelGuti commented Jun 16, 2018

Terraform v0.11.7
In this version, for some reason, this only works for some attribute references. For example, it works for these two:

  • ${aws_db_instance.mysql.address}
  • ${aws_elasticache_cluster.redis.port}

But not for this one

  • ${aws_elasticache_cluster.redis.address}

¯\_(ツ)_/¯

@joe-bowman

None of the workarounds here are practical.

I have two modules: one to create a VPC, which is called twice, and a second to peer the two VPCs, including adding the appropriate routes. The second is passed the vpc_id, from which I can interpolate the subnets, and from there the route tables. I want to add routes to both tables, and this works when the VPCs already exist, but not when I'm running the entire state from scratch.

As the number of subnets is defined in the same plan/apply operation, Terraform does know how many subnets and route tables exist, yet we still get the error.

@mitchellh stated above that this may be relaxed; do we have any idea of the timescale for this? It massively limits the power of modules in its current state.

@luisamador (Contributor)

I've just hit the same issue :(

kerin pushed a commit to ministryofjustice/analytics-platform-ops that referenced this issue Jul 2, 2018
see: hashicorp/terraform#10857

`length()` cannot be used on variables that aren’t known ahead of running terraform plan. To work around this, a `num_subnets` variable and output value has been added to the `aws_vpc` module, which is then used to calculate the `count` attribute of `module.efs_volume.aws_efs_mount_target.mount_target`.
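
The pattern described in that commit message can be sketched as follows (the internal wiring of the modules is assumed, not taken from the commit):

```hcl
# aws_vpc module: the subnet count comes in as a plain variable, so it is
# known at plan time, and is re-exported for downstream modules.
variable "num_subnets" {}

output "num_subnets" {
  value = "${var.num_subnets}"
}

# efs_volume module: count uses the known number instead of
# length() of the computed subnet id list.
resource "aws_efs_mount_target" "mount_target" {
  count          = "${var.num_subnets}"
  file_system_id = "${aws_efs_file_system.efs.id}" # assumed
  subnet_id      = "${element(var.subnet_ids, count.index)}"
}
```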
xoen pushed a commit to ministryofjustice/analytics-platform-ops that referenced this issue Jul 4, 2018
see: hashicorp/terraform#10857
@Daniel-Houston

I hit this issue today as well. Just commenting to vote for adding this support

ghost commented Jul 24, 2018

Versions

```
$ terraform --version
Terraform v0.11.7
+ provider.aws v1.28.0
+ provider.mysql v1.1.0
+ provider.null v1.0.0
+ provider.random v1.3.1
```

Description

I have just hit this issue with the following code-base:

  • An abstract layer (a module from Terraform's perspective), which we internally call a stack -- stack-vpc.

  • The above abstract layer uses another layer of abstraction, which we internally call molecules (another module, from Terraform's perspective).

  • stack-vpc includes:

    • molecule-vpc (creates the VPC)
    • molecule-subnets (creates subnets in the VPC)
    • molecule-route-table (creates the route table and route-table associations)

As you may have guessed, the molecule-subnets module exposes subnet_ids, and these subnet_ids are passed to molecule-route-table.

Pretty much the same use case as @hafizullah had.

kerin pushed a commit to ministryofjustice/analytics-platform-ops that referenced this issue Jul 26, 2018
see: hashicorp/terraform#10857
Talanor commented Aug 3, 2018

Adding my vote to the pool. I hit this issue today as well.

kerin pushed a commit to ministryofjustice/analytics-platform-ops that referenced this issue Aug 7, 2018
see: hashicorp/terraform#10857
kerin added a commit to ministryofjustice/analytics-platform-ops that referenced this issue Aug 7, 2018
* Replace per-env terraform config with workspaces

Terraform’s new(ish) workspace feature allows us to define terraform resource configuration once, and apply it to multiple conceptual environments, each with a different remote state behind them.

This means that applying a change to two environments changes from:

```
cd infra/terraform/environments/dev
terraform apply
cd ../alpha
terraform apply
```

to:

```
cd infra/terraform/platform
terraform workspace select dev
terraform apply -var-file=dev.tfvars
terraform workspace select alpha
terraform apply -var-file=alpha.tfvars
```

Crucially, the use of workspaces means that the same collection of terraform resources is used for both environments, so we no longer need to copy and paste changes between the two environments.

**Important**: enabling workspaces changes the remote state store behind an environment, so simply running `terraform apply` on a workspace-enabled version will create duplicates of all resources, name clashes from the existing state, dogs and cats living together, and more. So as and when this PR is merged, we must manually move the existing state to the new workspace-enabled state store using `terraform state` commands.

* Add softnas num instances and volume size platform variables

* Define softnas vars for alpha and dev workspaces

* Linted terraform

* Added new Auth0 thumbprints to alpha and dev vars

* Bumped AWS provider version

* Added airflow db user/pass to variables

* Added workaround for length() on a calculated value for efs_volume

see: hashicorp/terraform#10857

`length()` cannot be used on variables that aren’t known ahead of running terraform plan. To work around this, a `num_subnets` variable and output value has been added to the `aws_vpc` module, which is then used to calculate the `count` attribute of `module.efs_volume.aws_efs_mount_target.mount_target`.

* Changed platform state S3 key to ‘platform’

Terraform will prefix the S3 key with the workspace name, so the main `key` attribute should be generic and not environment-specific.

* Fixed bungled rebase

An error while resolving conflicts led to the reappearance of $var.env and incorrect module paths - this commit reinstates those changes

* Add S3 workspace_key_prefix to main remote state backend

Terraform workspaces add a prefix to S3 keys, `env:` by default. To avoid ambiguity and confusion with the `global` S3 state, this has been renamed to `platform:` and “platform” removed from the `key`. This means that remote state objects will have keys like:

`/platform:/$workspace_name/terraform.tfstate`

instead of the previous:

`/env:/$workspace_name/platform/terraform.tfstate`

* Added SAML config for workspace-test env

* Removed unused `env` var

* Make sure Circle uses a specific Terraform version

* Reinstated original alpha/dev env terraform resources

We should only remove these resources once the migration to workspaces is complete, so we can run both versions in parallel

* Added tvfars for workspace-test env

* formatted terraform files

* Update subnet tags to match kops scheme

Tags on subnets have drifted from those applied by kops, causing unnecessary and possibly incorrect changes to be applied when terraform is run - this commit gets them back in sync.

This also refactors tags to separate out common tags, and make subnet-specific tags more readable.

* Add temporary terraform state import script

Terraform does not have a command to _copy_ resources from one state object to another, only _move_. The ability to copy resources is a useful tool when testing to move to terraform workspaces, so this bash script has been added.

This should be for *temporary use only*, as two state objects managing the same AWS resources is obviously quite dangerous.

* Remove unneeded state import script

* Add Airflow values to alpha workspace tfvars

* Remove old terraform environment definitions

The introduction of workspaces means there is one set of resource environments for all environments, so the existing per-env definitions are no longer required

* Update README to document terraform workspaces

* Remove unused workspace-test.tfvars

* Reformat CLI examples in docs for readability
@nunofernandes

Also bitten by this... Will Terraform 0.12 solve this issue?

eilw pushed a commit to nsbno/app-infrastructure that referenced this issue Aug 30, 2018
Set default value of lists to empty list, to avoid having to pass an
empty list when creating a role without custom policies.

Workaround for an issue in Terraform:
hashicorp/terraform#10857

Fix by passing the number of policies as a variable.
eilw pushed a commit to nsbno/app-infrastructure that referenced this issue Aug 30, 2018
jdn-za commented Sep 12, 2018

hit this today again

@GMartinez-Sisti

I've been having this issue for a few months now, always resorting to hardcoding the intended value.

I'm avoiding generating .tf files with scripts, to keep the repo readable; otherwise you have to document everything elsewhere.

@yuriydee

Hitting this issue as well. I'm trying to count IAM policies that I'm creating, and it fails when starting from scratch. It works fine when the policies are already created though, just like others mentioned above.

Any updates on a fix?

@mildwonkey (Contributor)

Hi! I'm sorry to everyone running into this problem. We are aware of it, there are several related open issues, and you can follow the main discussions in #14677 and #17421.

I am going to lock this particular issue to consolidate the conversation - please check out the issues I've linked and add your 👍 to the main comments.

@hashicorp hashicorp locked and limited conversation to collaborators Sep 19, 2018
@apparentlymart apparentlymart added this to the v0.12.0 milestone Oct 31, 2018