
AWS Credentials when working with s3 Remote State #5839

Closed
realdavidops opened this issue Mar 24, 2016 · 46 comments

@realdavidops

I ran into an interesting problem when working with a Terraform project that used an S3 remote tfstate file.

The setup:
I have been working on a Terraform project on my personal laptop; I set it up locally with an S3 remote state file. AWS credentials are loaded into the project using a .tfvars file, for simplicity (this file is not committed to git).

During the course of the project, it was determined that we would need to move where we run Terraform to a server with firewall access to the AWS instances (for provisioning). I moved the Terraform project and, as a test, ran terraform plan. After that I got the following error:

Unable to determine AWS credentials. Set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
(error was: NoCredentialProviders: no valid providers in chain. Deprecated. 
    For verbose messaging see aws.Config.CredentialsChainVerboseErrors)

I checked and made sure that my tfvars file was in place. It looks like:

# AWS Access Key
ak = "[my actual access key]"
# AWS Secret Key
sk = "[my actual secret key]"
# AWS Region
region = "eu-central-1"
# Environment Name
enviro = "eu3"

And here is the part of my Terraform configuration that matters for this conversation:

# Variables for AWS
variable "ak" {
  type = "string"
}
variable "sk" {
  type = "string"
}
variable "region" {
  type = "string"
  default = "eu-central-1"
}

# Variables for Environment
variable "enviro" {
  type = "string"
  default = "eu3"
}

# Set up AWS access to environment
provider "aws" {
    access_key = "${var.ak}"
    secret_key = "${var.sk}"
    region = "${var.region}"
}

# Setup storage of terraform statefile in s3.
# You should change stuff here if you are working on a different environment,
# especially if you are working with two separate environments in one region.
resource "terraform_remote_state" "ops" {
    backend = "s3"
    config {
        bucket = "eu3-terraform-ops"
        key = "terraform.tfstate"
        region = "${var.region}"
    }
}

Things I checked at this point:

  1. My Access Key/Secret Key are both valid
  2. I am using the same version of Terraform: Terraform v0.6.12
  3. It also does not work with the latest Terraform, v0.6.14
  4. I am able to access the API endpoints via the network
  5. Removing the remote provider and deleting the .terraform file does allow me to run terraform plan, but this obviously does not have the right state. If I try to re-set up the remote via terraform remote... it throws the above error.

Through trial and error this is what we found to be the problem: A file has to exist in ~/.aws/credentials that has a [default] credential that is valid. This credential does NOT have to be for the environment that terraform is working on, and actually the key I used is for a completely separate AWS account. When I add that file to the new server, suddenly the s3 remote state is working. If I invalidate the key that is in that [default] profile, I get the following:

Error reloading remote state: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
    status code: 403, request id: C871F77837CFC156

Note: this key has no access to the s3 bucket we are using. If there are any questions about this please let me know. Thanks
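For illustration, a minimal ~/.aws/credentials file of the kind described above might look like this (placeholder values, not real keys):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY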

@realdavidops
Author

This is limited specifically to s3 remote state, working with terraform and AWS is fine.

@kendawg2

I believe PR #5270 may address this issue.

@realdavidops
Author

I believe you might be right. I'll work on testing it.

@simonvanderveldt

I've just tried v0.6.16, which includes #5270, with our codebase that uses S3 for remote state, and it doesn't actually fix this issue. If I set the required access key, secret access key, and region variables it doesn't work.
When setting them via the AWS environment variables, it does work.
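For example, something along these lines (placeholder values):

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=eu-central-1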

I haven't tried the ~/.aws/credentials file workaround mentioned in the initial report, so can't comment on that.

@rarkins

rarkins commented Jun 23, 2016

I hit the same problem. Somehow Terraform is not using ~/.aws/credentials properly; e.g. aws s3 ls ... will work and then terraform apply will not. Moving the credentials into the default profile worked.

@jwadolowski

Same here. The following terraform_remote_state:

resource "terraform_remote_state" "project" {
  backend = "s3"

  config {
    bucket     = "${var.project_state_bucket}"
    key        = "${var.project_state_key}"
    region     = "${var.project_aws_region}"
    access_key = "${var.project_aws_access_key}"
    secret_key = "${var.project_aws_secret_key}"
  }
}

always ends with

Error applying plan:

1 error(s) occurred:

* terraform_remote_state.project: AccessDenied: Access Denied
    status code: 403, request id: DD5189A46X0BX0BA

At the same time I was able to list and download the contents of this S3 bucket using the aws s3 command.

Workaround in my case:

  • ~/.aws/config
[default]
aws_access_key_id = <ACCESS_KEY_1>
region = eu-west-1
aws_secret_access_key = <SECRET_KEY_1>

[project]
aws_access_key_id = <ACCESS_KEY_2>
region = eu-west-1
aws_secret_access_key = <SECRET_KEY_2>
  • project.tf
resource "terraform_remote_state" "project" {
  backend = "s3"

  config {
    bucket                  = "${var.project_state_bucket}"
    key                     = "${var.project_state_key}"
    region                  = "${var.project_aws_region}"
    shared_credentials_file = "~/.aws/config"
    profile                 = "project"
  }
}

@rarkins

rarkins commented Jun 23, 2016

I've resorted to just using aws s3 cp for terraform.tfstate before and after the terraform commands, and removing the Terraform remote state configuration altogether. For a "single state" project, am I missing out on anything?
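For reference, that manual push/pull approach is roughly the following (bucket name and key are placeholders):

aws s3 cp s3://my-state-bucket/terraform.tfstate terraform.tfstate
terraform apply
aws s3 cp terraform.tfstate s3://my-state-bucket/terraform.tfstate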

@marcboudreau

@jwadolowski: I'm curious, which version of terraform are you running? The shared_credentials_file and profile options seem new to me.

@jwadolowski

@marcboudreau I'm running v0.6.16. Both profile and shared_credentials_file are fairly new, however fully documented here.

@marcboudreau

@jwadolowski Thanks. I'm running 0.6.15 and it's not in. I found the code change that adds support for it.

@flyinprogrammer

This is still broken in 0.6.16 as far as I can tell:

After rotating my keys, and supplying new ones into the config, I get these errors:

* terraform_remote_state.r53: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.

@akatrevorjay

If you're getting:

Error reloading remote state: SignatureDoesNotMatch: The request signature we calculated does not match the signature y
ou provided. Check your key and signing method.

Triple-check the bucket names in your remote config. I, for instance, had the s3:// prefix on mine, which Terraform really should strip, à la the aws s3 client, which actually requires it (sigh).

@veqryn

veqryn commented Aug 15, 2016

I began receiving this error, on 0.6.16, only after rotating my AWS access keys.
My guess is that terraform is expecting the keys to match what I was using a week ago.
I've tried removing the .terraform directory and .tfstate files from my computer, and re-pulling them with the remote state config command, but nothing is helping.

@veqryn

veqryn commented Aug 15, 2016

It appears to be related to how we are pulling the remote config:

terraform remote config -backend=s3 -backend-config="<bucket>" -backend-config="region=<region>" -backend-config="access_key=<aws_access_key>" -backend-config="secret_key=<aws_secret>" -backend-config="key=<path>" -state="terraform.tfstate"

Once I changed this out to use profile instead, it worked fine:

terraform remote config -backend=s3 -backend-config="<bucket>" -backend-config="region=<region>" -backend-config="profile=<profile>" -backend-config="key=<path>" -state="terraform.tfstate"

@veqryn

veqryn commented Sep 20, 2016

In my post above, switching to profile worked, but only after manually deleting the offending key and secret lines from the state file.
However, if you change the name of your profile, or a colleague names them differently, it will stop working again until you manually update the state file.

Terraform needs to be more flexible about pulling and applying remote state.
Maybe provide a way to change and delete remote state backend-config parameters?
Maybe just overwrite them on pulling down, with whatever was provided just now?
API Keys are regularly rotated, and profile names are different from one person to the next, so maybe don't even include them in the remote state...

@sjpalf

sjpalf commented Oct 5, 2016

I have also experienced this problem, but discovered that an easier workaround is to run the terraform remote config command a second time. When you run it the second time, it automatically updates the access_key and secret_key fields.

@willis7

willis7 commented Oct 25, 2016

I'm also getting this issue, but I have an added layer of complexity in that I have Multi Factor Auth enabled too.

@gerr1t
Contributor

gerr1t commented Dec 18, 2016

I just hit this bug. Quite silly that this is actually taking so long to get resolved.

@adamhathcock

I'm also getting the same error as @flyinprogrammer after rotating keys:

Failed to read state: Error reloading remote state: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.

Version 0.8.1

@adamhathcock

I've got around this by some combination of:

  • putting my key/secret explicitly in the config
  • wiping local state and pulling remote twice as suggested above
  • reconfiguring the terraform remote config command to have a key/secret in the command line.

However, when I remove the key/secret from my config explicitly, it doesn't work, even though my .aws/credentials file has the same key/secret.

This all resulted from rotating my user's key/secret.

@ethangunderson

Is there any update on this bug? I just ran into it today, on v0.8.6. I created remote S3 state, and now whenever I do a plan I get an error saying no credential sources were found.

I originally had just a default profile in my AWS credentials file, but recently added development and production profiles with no default. I reverted to my original default profile, but am still getting this error.

@mars64

mars64 commented Mar 8, 2017

Confirmed the bug is still present in v0.8.8. Worked around by explicitly setting profile.

@kevgliss

Still seeing this error in 0.9.1

@adilnaimi

adilnaimi commented Mar 29, 2017

Confirmed the bug is still present in v0.9.1; it works if I replace the AWS profile with a key and secret.

my config.tfvars (not working)

profile = "my-profile"
region = "us-east-1"
lock_table = "stage-terraform-remote-state-locks"
encrypt = "true"
bucket = "stage-terraform-remote-state-storage"
key = "some-path/terraform.tfstate"
encrypt = "true"
kms_key_id = "xxxxxxx"

terraform init -backend-config=config.tfvars
Downloading modules (if any)...
Get: git::ssh://git@github.com/myaccount/terraform-aws-modules.git?ref=v0.0.3
Initializing the backend...


Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.

terraform plan
Error loading state: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
	status code: 403, request id: XXXXXXXXX

new config.tfvars (working fine)

Delete the .terraform directory and replace profile = "my-profile" with access_key and secret_key:

access_key = "my-key"
secret_key = "my-secret-key"
region = "us-east-1"
lock_table = "stage-terraform-remote-state-locks"
encrypt = "true"
bucket = "stage-terraform-remote-state-storage"
key = "some-path/terraform.tfstate"
encrypt = "true"
kms_key_id = "xxxxxxx"

terraform init -backend-config=config.tfvars
Downloading modules (if any)...
Get: git::ssh://git@github.com/myaccount/terraform-aws-modules.git?ref=v0.0.3
Initializing the backend...


Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.

terraform plan
var.subnets
  List of subnets DB should be available at. It might be one subnet.

  Enter a value:

@nwhite-sf

Having this issue with 0.9.2. Maybe I have the wrong expectations for how this should work. Typically I have been passing in the access key and secret key on the command line using -var, and referencing them in things like my aws provider section. I tried configuring an S3 backend in a terraform backend block; however, I get an error saying that terraform.backend: configuration cannot contain interpolations.

If I leave access_key and secret_key out of the backend block, I would assume it would leverage what is configured by the aws provider block, but it does not, and I get an S3 access error.

It's also not clear to me what the functional difference is between using terraform backend and terraform_remote_state to configure the S3 backend. Obviously using the resource would let me reference different S3 state files within my Terraform project. Other than that, is there any reason to use one over the other?

@sysadmiral

@nwhite-sf terraform_remote_state is now a data source rather than a resource, so you would use it to pull in things defined in a state file elsewhere for use in the config/module you are working in.
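To illustrate, a data-source usage might look like this (pre-0.12 syntax as used at the time; the names are made up, and it assumes the referenced state file defines a subnet_id output):

data "terraform_remote_state" "network" {
  backend = "s3"

  config {
    bucket  = "example-state-bucket"
    key     = "network/terraform.tfstate"
    region  = "us-west-2"
    profile = "example-profile"
  }
}

resource "aws_instance" "example" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
  subnet_id     = "${data.terraform_remote_state.network.subnet_id}"
}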

I seem to be hitting the issue mentioned in this thread in 0.9.2, though. If I set the AWS provider to use a profile, and my Terraform config defines a backend that uses the same profile for remote state, I continually get an access denied message. I can confirm the profile in question has admin access to S3, so I'm not sure why this is happening yet, but it definitely feels like a bug right now.

@pioneerit

I fixed this by removing all profile settings from the Terraform code and just letting the AWS SDK work with my AWS_PROFILE environment variable.
More details in the SharedCredentialsProvider section: https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/
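For example (the profile name is a placeholder):

export AWS_PROFILE=my-profile
terraform init
terraform plan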

@thedrewster-u51

Ran into this error today and thought it was related to this bug... it wasn't. I showed a dev how to use AWS credentials earlier in the week and had bogus AWS credentials set in my environment variables. These appear to take precedence over what is defined in the state file. So be aware that if you have a profile defined in the terraform remote config, environment variables will override it.
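A quick, illustrative way to check for stale credentials in your shell before blaming the backend config:

env | grep AWS_
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN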

@combinatorist

Just noting this is related to #13589, right?

@gaffo

gaffo commented Aug 18, 2017

Confirmed on 0.10.1... if I pull while specifying a different provider in provider "aws" it fails; if I rename my provider for this project to default it works. This means I can only work on one at a time.

@gaffo

gaffo commented Aug 18, 2017

If you change profile in the config block of the remote provider, it works, e.g.:

data "terraform_remote_state" "aws_account" {
backend = "s3"
config {
bucket = "bucket"
key = "key"
region = "us-west-2"
profile = "PROFILE_HERE"
}
}

@bgdnlp

bgdnlp commented Oct 12, 2017

Can confirm that this behavior is present in 0.10 and that adding profile (and region) to the backend config works.
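A minimal sketch of such a backend block (bucket, profile, and region are placeholders):

terraform {
  backend "s3" {
    bucket  = "example-state-bucket"
    key     = "terraform.tfstate"
    region  = "us-west-2"
    profile = "example-profile"
  }
}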

@yoaquim

yoaquim commented Apr 14, 2018

How is this still a bug in 2018?

@swoodford

swoodford commented Apr 17, 2018

I ran into a similar problem today with:

Error: Error refreshing state: 1 error(s) occurred:

* module.aws.default: NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors

I verified my credentials, checked that I was still active in AWS IAM, checked the IAM policy, checked the git repo in CodeCommit, ran terraform init --upgrade, tested pushing code to CodeCommit, tested the AWS profile with aws s3 ls, updated Terraform to v0.11.6 with provider.aws v1.14.1, updated awscli to 1.15.0, and verified that the profile name and region are correct in the backend "s3" config.

I still cannot run any TF plan or apply!

Edit: I solved my problem; this was user error. It turns out the Terraform backend was configured to use a "provider" with an alias and profile I did not recognize, and in order to run any plan/apply, every resource needed to declare that specific provider name and alias or it would fail with this error.
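For context, a provider alias setup like the one described looks roughly like this (names are invented for illustration):

provider "aws" {
  alias   = "ops"
  profile = "ops-profile"
  region  = "us-east-1"
}

resource "aws_s3_bucket" "example" {
  provider = "aws.ops"
  bucket   = "example-bucket"
}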

@ghost

ghost commented Jun 29, 2018

May not be helpful, and sorry for creeping on an old issue, but I got around this by moving my .terraform directory out and rerunning an init. This came up:

[admin@test aws]$ diff .terraform/terraform.tfstate us-west-2/.terraform/terraform.tfstate
3,4c3,4
<     "serial": 5,
<     "lineage": "<hash>",
---
>     "serial": 1,
>     "lineage": "<hash>",
11a12
>             "profile": "poc",

It appears the profile flag was never getting added to my .terraform/terraform.tfstate. (This is not the primary tfstate used for your infra, BTW.) I found this out when new team members were able to proceed without issue, but my local environment never worked no matter how many flags, options, etc. I passed. Hope this helps others.

EDIT: Full .terraform/terraform.tfstate backend block:

...
    "backend": {
        "type": "s3",
        "config": {
            "bucket": "<bucket_name>",
            "dynamodb_table": "<table_name>",
            "encrypt": true,
            "key": "<tf/path/to/terraform.tfstat>",
            "profile": "<dev/stage/prod/etc",
            "region": "<region_name>"
        },
        "hash": <some_numeric_hash>
    },
...

@tkjef

tkjef commented Jul 13, 2018

Check to make sure your environment is using the correct AWS account. You may have multiple accounts in your .aws/credentials.
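One quick way to verify which identity and account your current credentials resolve to (requires the AWS CLI):

aws sts get-caller-identity
aws configure list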

@rkt2spc

rkt2spc commented Jul 18, 2018

Can confirm that this behaviour is present in 0.11.7 and that adding profile (and region) to the backend config works.

@dallasgraves

dallasgraves commented Aug 1, 2018

On 0.11.7, and it didn't work until my config block looked like this (placeholder values):

  backend "s3" {
    shared_credentials_file = "/location/of/credentials/file"
    profile = "profile-name-in-credentials-file"
    bucket = "bucket-name"
    key = "whatever-your-keyname-is"
    region = "us-east-1"
    dynamodb_table = "terraform.state"
    encrypt = "true"
  }
}

omit "dynamodb_table" line if you're not using that integration in your backend solution

@rimiti

rimiti commented Nov 28, 2018

I solved this issue by creating environment variables:

export AWS_DEFAULT_REGION=xxxxx
export AWS_ACCESS_KEY_ID=xxxxx
export AWS_SECRET_ACCESS_KEY=xxxxx

This error only appears when you try to init. If you want to plan or apply, you have to pass your credentials as variables, like this:

terraform validate -var "aws_access_key=$AWS_ACCESS_KEY_ID" -var "aws_secret_key=$AWS_SECRET_ACCESS_KEY" -var "aws_region=$AWS_DEFAULT_REGION"

Note: Official documentation

Have a nice coding day! 🚀

@sksinghvi

sksinghvi commented Mar 17, 2019

Setting AWS_PROFILE during init doesn't work for me. I get the following error:
Error configuring the backend "s3": NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors

I don't have a credentials file in my .aws folder. Instead, I have an assumed role configured something like this in my config file:
[profile okta-sandbox-proxy]
credential_process = aws-okta exec okta-sandbox -- cmd /c C:\tools\proxy-aws-okta-result.bat
region = us-west-2
[profile okta-sandbox]
aws_saml_url = home/amazon_aws/XXXXXXX/xxxx
role_arn = arn:aws:iam::XXXXXXXXXX:role/VA-Role-Devops-L3
region = us-west-2

Not sure what the alternative is. Can anybody suggest an alternative way to create the S3 backend until the main issue gets fixed?

@sjpalf

sjpalf commented Mar 18, 2019

@sksinghvi, I don't think that Terraform will use your .aws\config file, but you should be able to get your setup (assumed role) to work by using a credentials file in your .aws folder, something like:

[okta-sandbox-proxy]
aws_access_key_id = xxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxx
region = us-west-2
[okta-sandbox]
role_arn = arn:aws:iam::XXXXXXXXXX:role/VA-Role-Devops-L3
source_profile = okta-sandbox-proxy
region = us-west-2

@sksinghvi

@sjpalf: I don't have aws_access_*. One more thing I forgot to mention: if I don't create the backend and just use the default workspace and local state, it authenticates and works.

@sksinghvi

sksinghvi commented Mar 18, 2019

Got this working by doing "aws-okta exec okta-sandbox -- terraform init".
Thanks @sjpalf for looking into it.

@pkoch

pkoch commented Jun 3, 2019

Ran into this, had messed up my profile name on ~/.aws/credentials.

@hashibot
Contributor

Hello! 🤖

This issue relates to an older version of Terraform that is no longer in active development, and because the area of Terraform it relates to has changed significantly since the issue was opened we suspect that the issue is either fixed or that the circumstances around it have changed enough that we'd need an updated issue report in order to reproduce and address it.

If you're still seeing this or a similar issue in the latest version of Terraform, please do feel free to open a new bug report! Please be sure to include all of the information requested in the template, even if it might seem redundant with the information already shared in this issue, because the internal details relating to this problem are likely to be different in the current version of Terraform.

Thanks!

@ghost

ghost commented Sep 27, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Sep 27, 2019