AWS Credentials when working with s3 Remote State #5839
This is limited specifically to S3 remote state; working with Terraform and AWS otherwise is fine.
I believe PR #5270 may address this issue.
I believe you might be right. I'll work on testing it.
I've just tried v0.6.16, which included #5270, with our code base, which uses S3 for remote state, and it actually doesn't fix this issue. If I set the required access key, secret access key, and region variables it doesn't work. I haven't tried the …
I hit the same problem. Somehow Terraform is not using …
Same here. The following always ends with an error:

resource "terraform_remote_state" "project" {
  backend = "s3"
  config {
    bucket     = "${var.project_state_bucket}"
    key        = "${var.project_state_key}"
    region     = "${var.project_aws_region}"
    access_key = "${var.project_aws_access_key}"
    secret_key = "${var.project_aws_secret_key}"
  }
}

At the same time I was able to list and download the contents of this S3 bucket using … Workaround in my case: …
I've resorted to just using …
@jwadolowski: I'm curious, which version of Terraform are you running? The shared_credentials_file and profile options seem new to me.
@marcboudreau I'm running v0.6.16. Both …
@jwadolowski Thanks. I'm running 0.6.15 and it's not in. I found the code change that adds support for it.
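For reference, a minimal sketch of how those two options appear in an S3 remote state config of that era; the bucket, key, and profile values here are hypothetical:

resource "terraform_remote_state" "project" {
  backend = "s3"
  config {
    bucket                  = "my-state-bucket"            # hypothetical
    key                     = "project/terraform.tfstate"  # hypothetical
    region                  = "us-east-1"
    profile                 = "my-profile"                 # entry name in the credentials file
    shared_credentials_file = "/home/me/.aws/credentials"
  }
}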
This is still broken in 0.6.16 as far as I can tell. After rotating my keys and supplying new ones in the config, I get these errors: …
If you're getting: …
Triple-check your bucket names in your remote config. I, for instance, had the …
I began receiving this error, on 0.6.16, only after rotating my AWS access keys.
It appears to be related to how we are pulling the remote config: …
Once I changed this out to use profile instead, it worked fine: …
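The before/after snippets did not survive above; a sketch of the kind of change being described, using the 0.6-era terraform remote config command and hypothetical values:

# Before: inline keys end up stored in the state file (breaks after key rotation)
terraform remote config -backend=s3 \
    -backend-config="bucket=my-state-bucket" \
    -backend-config="key=terraform.tfstate" \
    -backend-config="region=us-east-1" \
    -backend-config="access_key=AKIA..." \
    -backend-config="secret_key=..."

# After: a profile lookup, resolved at run time from ~/.aws/credentials
terraform remote config -backend=s3 \
    -backend-config="bucket=my-state-bucket" \
    -backend-config="key=terraform.tfstate" \
    -backend-config="region=us-east-1" \
    -backend-config="profile=my-profile"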
In my post above, switching to profile worked, but only after manually deleting the offending key and secret lines from the state file. Terraform needs to be more flexible about pulling and applying remote state.
I have also experienced this problem, but discovered that an easier workaround is to run the terraform remote config command a second time. When you run it the second time, it automatically updates the access_key and secret_key fields.
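A sketch of that re-run workaround, with hypothetical backend values; running the identical command again rewrites the cached access_key/secret_key fields in the local copy of the state:

terraform remote config -backend=s3 \
    -backend-config="bucket=my-state-bucket" \
    -backend-config="key=terraform.tfstate" \
    -backend-config="region=us-east-1"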
I'm also getting this issue, but I have an added layer of complexity in that I have Multi-Factor Auth enabled too.
I just hit this bug. Quite silly that this is actually taking so long to get resolved.
I'm also getting the same name error as @flyinprogrammer after rotating keys.
Version 0.8.1
I've got around this by some combination of: …
However, when I remove the key/secret from my config explicitly, it doesn't work. My .aws/credentials file has the same key/secret. This all resulted from rotating my user's key/secret.
Is there any update on this bug? I just ran into it today, on v0.8.6. I created remote S3 state, and now whenever I do a plan I get an error saying no credential sources were found. I originally had just a default profile in my AWS credentials file, but recently added a development and a production profile with no default. I reverted back to my original default profile, but am still getting this error.
Confirmed the bug is still present in v0.8.8. Worked around by explicitly setting profile.
Still seeing this error in 0.9.1.
Confirmed the bug is still present in v0.9.1; it works if I replace the AWS credentials in my config.tfvars.

Old config.tfvars (not working):
➜ terraform init -backend-config=config.tfvars
Downloading modules (if any)...
Get: git::ssh://git@github.com/myaccount/terraform-aws-modules.git?ref=v0.0.3
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.

➜ terraform plan
Error loading state: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
status code: 403, request id: XXXXXXXXX

New config.tfvars (working fine; delete the …):
➜ terraform init -backend-config=config.tfvars
Downloading modules (if any)...
Get: git::ssh://git@github.com/myaccount/terraform-aws-modules.git?ref=v0.0.3
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.

➜ terraform plan
var.subnets
List of subnets DB should be available at. It might be one subnet.
Enter a value:
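The two config.tfvars files themselves were not captured above; a plausible shape for the working version, with hypothetical values and the access_key/secret_key lines removed, is:

# config.tfvars, passed via: terraform init -backend-config=config.tfvars
bucket  = "my-state-bucket"
key     = "terraform.tfstate"
region  = "us-east-1"
profile = "my-profile"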
Having this issue with 0.9.2. Maybe I have wrong expectations for how this should work. Typically I have been passing in the ACCESS and SECRET on the command line using -var, which I reference in things like my aws provider section. I tried configuring an S3 backend in a terraform backend block; however, I get an error saying that terraform.backend: configuration cannot contain interpolations. If I leave access_key and secret_key out of the backend block, I would assume it would leverage what is configured by the aws provider block, but it does not and I get an S3 access error. It's also not clear to me what the functional difference is between using a terraform backend and terraform_remote_state to configure the S3 backend. Obviously using the resource would let me reference different S3 state files within my Terraform project. Other than that, is there any reason to use one over the other?
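Backend blocks are resolved before variables, which is why the interpolation error appears; the usual 0.9-era workaround is partial configuration, leaving the credentials out of the block and supplying them at init time. A sketch with hypothetical values:

terraform {
  backend "s3" {
    bucket = "my-state-bucket"   # static values only; no "${var.*}" interpolation here
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}

# Credentials supplied out-of-band at init time:
terraform init \
    -backend-config="access_key=AKIA..." \
    -backend-config="secret_key=..."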
@nwhite-sf terraform_remote_state is now a data source rather than a resource, so you would use it to pull in things defined in a state file elsewhere for use in the config/module you are working in. I seem to be hitting the issue mentioned in this thread in 0.9.2, though. If I set the AWS provider to use a profile, and my Terraform config defines a backend that uses the same profile for remote state, I continually get an access denied message. I can confirm the profile in question has admin access to S3, so I'm not sure why this is happening yet, but it definitely feels like a bug right now.
I've fixed this by removing all …
Ran into this error today and thought it was related to this bug... it wasn't. I had shown a dev how to use AWS credentials earlier in the week and had bogus AWS credentials set in my environment variables. These appear to take precedence over what is defined in the state file. So be aware: even if you have a profile defined in the Terraform remote config, environment variables will override it.
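A quick way to check for the stale-environment-variable case described above:

# Credentials in the environment win over those stored in the remote config,
# so list and clear any stale ones before running terraform:
env | grep AWS_
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN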
Just noting this is related to #13589, right?
Confirmed on 0.10.1... if I pull while specifying a different provider in provider "aws" it fails; if I rename my provider for this project to default, it works. This means I can only work on one at a time.
If you change profile in the config block of the remote provider it works, e.g. data "terraform_remote_state" "aws_account" { …
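The example was cut off above; a completed sketch of what that data source plausibly looked like (0.10-era syntax, names hypothetical):

data "terraform_remote_state" "aws_account" {
  backend = "s3"
  config {
    bucket  = "my-state-bucket"               # hypothetical
    key     = "aws-account/terraform.tfstate" # hypothetical
    region  = "us-east-1"
    profile = "my-profile"                    # the profile change that made it work
  }
}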
Can confirm that this behavior is present in 0.10 and that adding …
How is this still a bug in 2018?
I ran into a similar problem today with: …
I verified my credentials, checked that I was still active in AWS IAM, checked the IAM policy, checked the git repo in CodeCommit, ran a … I still cannot run any TF plan or apply!

Edit: I solved my problem; this was user error. It turns out I was unaware the Terraform backend was configured to use a "provider" with an alias and profile I did not recognize, and in order to run any plan/apply, every resource needed to declare that specific provider name and alias or it would fail with this error.
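For anyone hitting the same user error, a sketch of the aliased-provider pattern being described, with hypothetical names; pre-0.12 syntax takes the provider reference as a string:

provider "aws" {
  alias   = "ops"           # hypothetical alias
  profile = "ops-profile"   # hypothetical profile
  region  = "us-east-1"
}

resource "aws_s3_bucket" "example" {
  provider = "aws.ops"      # each resource must name the aliased provider explicitly
  bucket   = "example-bucket"
}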
May not be helpful, and sorry for creeping on an old issue, but I got around this by moving my .terraform directory out and rerunning an init. This came up: …
It appears the …

EDIT: Full .terraform/terraform.tfstate backend block: …
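The block itself didn't make it into the comment; in 0.9+ the cached backend config in .terraform/terraform.tfstate is JSON of roughly this shape (values hypothetical):

{
  "backend": {
    "type": "s3",
    "config": {
      "bucket": "my-state-bucket",
      "key": "terraform.tfstate",
      "region": "us-east-1",
      "profile": "my-profile"
    }
  }
}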
Check to make sure your environment is using the correct AWS account. You may have multiple accounts in your .aws/credentials file.
Can confirm that this behaviour is present in 0.11.7 and that adding profile (and region) to the backend config works.
On 0.11.7, and it didn't work until my config block looked like this (obvious variables):

terraform {
  backend "s3" {
    shared_credentials_file = "/location/of/credentials/file"
    profile                 = "profile-name-in-credentials-file"
    bucket                  = "bucket-name"
    key                     = "whatever-your-keyname-is"
    region                  = "us-east-1"
    dynamodb_table          = "terraform.state"
    encrypt                 = "true"
  }
}

Omit the "dynamodb_table" line if you're not using that integration in your backend solution.
I solved this issue by creating environment variables: …
This error only appears when you try to init. If you want to plan or apply, you have to pass your credentials as variables, like this: …
Note: see the official documentation. Have a nice coding day! 🚀
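The exact commands were lost above; presumably these are the standard AWS SDK environment variables for init, plus -var flags for plan/apply as described (all values hypothetical):

# Environment variables picked up by the S3 backend during init:
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
terraform init

# Passing credentials as input variables for plan/apply:
terraform plan  -var "access_key=AKIA..." -var "secret_key=..."
terraform apply -var "access_key=AKIA..." -var "secret_key=..."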
Setting AWS_PROFILE during init doesn't work for me. I get the following error: … I don't have a credentials file in the .aws folder. Instead I have an assumed role, something like this in my config file: … Not sure what the alternate solution is. Can anybody suggest an alternate way to create the s3 backend until the main issue gets fixed?
@sksinghvi, I don't think that Terraform will use your .aws\config file, but you should be able to get your setup (assumed role) to work by using a credentials file in your .aws folder, something like: …
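The suggested example was cut off; a typical assumed-role layout in the shared credentials file, with hypothetical account ID, role, and profile names, looks something like:

# ~/.aws/credentials
[deploy]
role_arn       = arn:aws:iam::123456789012:role/deploy-role
source_profile = default

[default]
aws_access_key_id     = AKIA...
aws_secret_access_key = ...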
@sjpalf: I don't have aws_access_*. One more thing I forgot to mention: if I don't create the backend and just use the default workspace and local state, it authenticates and works.
Got this working by doing "aws-okta exec okta-sandbox -- terraform init".
Ran into this; had messed up my profile name on …
Hello! 🤖 This issue relates to an older version of Terraform that is no longer in active development, and because the area of Terraform it relates to has changed significantly since the issue was opened, we suspect that the issue is either fixed or that the circumstances around it have changed enough that we'd need an updated issue report in order to reproduce and address it. If you're still seeing this or a similar issue in the latest version of Terraform, please do feel free to open a new bug report! Please be sure to include all of the information requested in the template, even if it might seem redundant with the information already shared in this issue, because the internal details relating to this problem are likely to be different in the current version of Terraform. Thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I ran into an interesting problem when working with a Terraform project that used an S3 remote tfstate file.
The setup:
I have been working on a Terraform project on my personal laptop; I set it up locally with a remote S3 state file. AWS credentials are loaded into the project using a .tfvars file, for simplicity (this file is not committed to git).
During the course of the project, it was determined that we would need to move where we were running Terraform from to a server with firewall access to the AWS instances (for provisioning). I moved the Terraform project and, as a test, ran terraform plan; after that I get the following error: …

I checked and made sure that my tfvars file was in place. It looks like: …

And also my Terraform file that is important for this conversation: …
Things I checked at this point: …

… terraform plan, but this obviously does not have the right state. If I try to re-set up the remote via terraform remote … it throws the above error.

Through trial and error, this is what we found to be the problem: a file has to exist in ~/.aws/credentials that has a [default] credential that is valid. This credential does NOT have to be for the environment that Terraform is working on; in fact, the key I used is for a completely separate AWS account. When I add that file to the new server, the S3 remote state suddenly works. If I invalidate the key that is in that [default] profile, I get the following: …
Note: this key has no access to the S3 bucket we are using. If there are any questions about this, please let me know. Thanks!
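A sketch of the file that made the difference in this setup; the key values are hypothetical, and per the finding above they can even belong to an unrelated AWS account, as long as they are valid:

# ~/.aws/credentials
[default]
aws_access_key_id     = AKIA...   # any valid key, even from another account
aws_secret_access_key = ...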