Implicit Reference Ordering with Data Sources in 0.13 #25961
Comments
Thanks @bflad! I don't think this was documented as working in 0.12, since references in core are always evaluated through the state. During Apply this is obvious, since resources are written to the state as they are created. During Plan we have a temporary working state, in which partial resources are written in order to be evaluated. During Refresh, only resources that already exist are written to the state, which means that even if new resource configurations contain static values, they cannot be evaluated at that time. This is what causes the data source read to be deferred until apply in 0.12.

In 0.13 we now have the ability to plan data sources (in preparation for the future handling of Refresh during plan as well). The above failure in 0.13 is happening during Plan, where we do have the static config values available in the state for evaluation. This means that when checking the config of the data source, the referenced value is already known, so the read is no longer deferred.

Since this is also the behavior of resources within the configuration, I'm not sure yet how we want to go about handling the change. This better aligns the behavior with the interpolation in other contexts (resources, providers, locals, etc.), where the config can be treated as if it were evaluated as a whole, so I'm not sure about creating an exception for data sources during Plan (or if that's possible yet). For now we can consider the use of depends_on to be the expected way to enforce this ordering.
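To make the suggested workaround concrete, here is a minimal sketch of the depends_on form. The resource and data source names come from the reproduction in this issue; the surrounding configuration is assumed:

```hcl
data "aws_ssm_document" "test" {
  name = aws_ssm_document.test.name

  # Explicitly defers the read until after the managed resource has been
  # applied, restoring the ordering observed in 0.12.
  depends_on = [aws_ssm_document.test]
}
```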
There's definitely some mulling to be done here, because we have both some historical tradeoffs and some forward-looking tradeoffs to consider.

On a purely historical note: this idea of using the "known-ness" of the data resource configuration to decide whether to defer it was always a "best we can do" tradeoff back in the original data source implementation in 0.7, because our design principles at the time said that we would not do any network requests during planning. That forced moving the data source reads into the refresh phase, which in turn meant relying on this imprecise heuristic to decide whether to defer to the apply phase.

We've since changed that "no network requests during planning" principle, in recognition of the fact that accurate planning sometimes just requires making network requests, and our forthcoming planned merger of the refresh and plan walks brings that to its logical conclusion. That means we have an opportunity to revisit the original design tradeoffs for data sources, with some different (looser?) design constraints.

I can imagine a few sorts of static analysis we could do against the diff once we have the refresh and plan walks merged -- in principle during the plan walk we could see that the managed resource has a pending change and take that as a signal to defer the data source read. Therefore that sort of static analysis would, in a lot of cases, just degenerate to something simpler to implement and simpler to explain: treating any expression reference to a managed resource as sufficient reason to defer the read until the apply phase.

That would upset some other use-cases that are possible today by relying on certain resource arguments being known at plan time, but I think those cases are less common and users can work around them by factoring out the known-value expression into a local value and referring to that in both places, rather than having the data resource refer to the managed resource:

```hcl
locals {
  ssm_document_name = "bflad-testing"
}

resource "aws_ssm_document" "test" {
  name = local.ssm_document_name
  # etc...
}

data "aws_ssm_document" "test" {
  # Refer to the local value if for some reason you _don't_ want
  # this data resource to be deferred until after the managed
  # resource is applied.
  name = local.ssm_document_name
}
```

Such a change won't be possible in the 0.13.x series because it would be breaking for some less-common cases, but perhaps we can do something like this in 0.14.
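For contrast, a sketch of the direct-reference form that the proposed rule described above would affect (same names as in the thread): under that rule, the mere reference to the managed resource would defer the read to the apply phase, even though the value is statically known at plan time.

```hcl
data "aws_ssm_document" "test" {
  # Under the proposed rule, this reference to a managed resource would by
  # itself defer the read until after aws_ssm_document.test is applied.
  name = aws_ssm_document.test.name
}
```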
Just got asked internally about this behavior for provider testing, so commenting here to highlight the challenges on our side.

From the provider testing perspective, this represents the behavior of Terraform core prior to 0.13 (at least from my knowledge/testing in 0.10, 0.11, and 0.12). Effectively, Terraform providers cannot use the new Terraform Plugin SDK binary testing framework against 0.13.0 and later without significant code changes, which puts us in a bind.

I'll work with the other official provider teams to gather the number of affected tests/configurations for those who can test with the binary testing framework. Understandably, changing this behavior potentially represents a breaking change. I'm just worried we may want to expedite that to remove this confusion in the Terraform provider ecosystem, unless we want to say this is a regression and treat it as a bug fix.
Terraform Version
Terraform Configuration Files
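A plausible minimal reconstruction of the configuration under discussion (the exact original was not captured here; the document_type and content values are illustrative assumptions, while the names and the commented depends_on follow the thread):

```hcl
resource "aws_ssm_document" "test" {
  name          = "bflad-testing"
  document_type = "Command" # illustrative
  content       = "{}"      # illustrative placeholder
}

data "aws_ssm_document" "test" {
  # Implicit reference that ordered the read after creation in 0.12:
  name = aws_ssm_document.test.name

  # Uncommenting this restores the 0.12 ordering under 0.13:
  # depends_on = [aws_ssm_document.test]
}
```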
Debug Output
Please ask if you need this.
Expected Behavior
Using implicit attribute references ordered operations correctly in Terraform 0.12.
Actual Behavior
On terraform apply with no existing state/resources, the apply fails. The error seems to imply that Terraform is now ordering the data source read before the resource creation. If we uncomment the depends_on configuration, it works properly.
Steps to Reproduce
In an account without the named SSM Document:
1. terraform init
2. terraform apply
Additional Context
Implicit attribute references have been the usual way to order these operations, rather than depends_on explicit ordering. This change would be an anti-pattern comparatively.