
terraform workspace list doesn't show workspaces, but new creates them just fine #17508

Closed
redbaron opened this issue Mar 6, 2018 · 18 comments

@redbaron

redbaron commented Mar 6, 2018

Terraform Version

Terraform v0.11.3

Terraform Configuration Files

terraform {
  backend "s3" {
    bucket = "xxx-terraform" 
    key = "pubweb.tfstate"
    region = "eu-west-2"
    workspace_key_prefix = "pubweb/env:/"
  }
}

Actual Behavior

terraform workspace new qa reports that the workspace was created and switched to, but terraform workspace list does not show it.
Steps to Reproduce

Exact command I am running

docker run --rm -v $(pwd):/code -w /code \
  -e http_proxy=$http_proxy -e https_proxy=$https_proxy \
  -e TF_IN_AUTOMATION=1 \
  -e AWS_DEFAULT_REGION=eu-west-2 -e AWS_REGION=eu-west-2 \
  -e AWS_ACCESS_KEY_ID=ACCESS_KEY -e AWS_SECRET_ACCESS_KEY=SECRET_KEY \
  --entrypoint= hashicorp/terraform:0.11.3 \
  /bin/sh -c 'cp certs/* /usr/local/share/ca-certificates && update-ca-certificates && terraform init -input=false && terraform workspace new qa && terraform workspace list'

Additional Context

The AWS user that runs Terraform has a policy as per the Terraform docs:

  statement {
    actions   = ["s3:ListBucket"]
    resources = ["xxx-terraform"]
  }

  statement {
    actions = ["s3:GetObject", "s3:PutObject"]

    resources = [
      "xxx-terraform/pubweb/env:/qa*",
    ]
  }
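
For completeness, wrapped in a full policy document those statements look roughly like this (the data source name and the arn:aws:s3::: resource forms are illustrative; the snippet above shows only the bare statements):

data "aws_iam_policy_document" "terraform_state" {
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::xxx-terraform"]
  }

  statement {
    actions = ["s3:GetObject", "s3:PutObject"]

    resources = [
      "arn:aws:s3:::xxx-terraform/pubweb/env:/qa*",
    ]
  }
}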

References

Possibly a duplicate of #16383.

@redbaron
Author

redbaron commented Mar 6, 2018

/cc @jbardin

@jbardin
Member

jbardin commented Mar 6, 2018

Hi @redbaron,

Running the command you provided through Docker, with the policy shown, worked correctly here.

Initializing the backend...

Terraform has been successfully initialized!
Created and switched to workspace "qa"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.
  default
* qa

Is it possible to include the full trace log output from the command? This should show the AWS API calls, which may help us locate the problem.
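
Setting TF_LOG=trace and capturing stderr should do it, for example (the log file name is just a suggestion):

TF_LOG=trace terraform workspace list 2> workspace-list.log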

jbardin added the waiting-response and backend/s3 labels on Mar 6, 2018
@mcg

mcg commented Mar 13, 2018

Following along here, as 0.11.3 did not fix the original issue (#16383) for us either.

@bakaie

bakaie commented Apr 20, 2018

Having the same issue with no Docker involved; just trying this on my laptop.

terraform workspace new prod
Created and switched to workspace "prod"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

terraform workspace list
  default

If I check the .terraform/environment file, I see prod. Running terraform apply works and stores the state in S3 as it should.

Create a new workspace:

terraform workspace new dev
Created and switched to workspace "dev"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

terraform workspace list
  default

Now checking the environment file, I only see dev.

Running terraform workspace select prod does not work; I have to run the new command again to manage the state.

My backend config inside main.tf:

terraform {
  backend "s3" {
    bucket               = "some-bucket"
    dynamodb_table       = "lock-table"
    region               = "us-east-1"
    encrypt              = "true"
    profile              = "profile-name"
    workspace_key_prefix = "some/place/in/s3"
  }
}

Terraform v0.11.6
provider.aws v1.15.0

@bakaie

bakaie commented Apr 20, 2018

Did some playing around, because this was working at one point. I removed the workspace_key_prefix line from my config and it is working as expected now.

terraform workspace list
  default
* prod

Looking at the S3 bucket when I had the workspace_key_prefix, the folder structure was:
some/place/in/s3/prod.tfstate

There was nothing else in the bucket.

I deleted everything from the bucket and the .terraform folder from the system, did a terraform init with the prefix removed, and created the prod workspace. S3 now shows a new file in the root of the bucket, prod.tfstate, and then the folder structure env:/prod/prod.tfstate.

@bakaie

bakaie commented Apr 20, 2018

I thought maybe it was failing because I did not specify a key in my config and was entering it via the prompt after running init.

I also tried adding env: to the front of the prefix to see if forcing the path helped; no luck there.

It looks like as long as workspace_key_prefix is in the config, it does not work correctly.

@bakaie

bakaie commented Apr 20, 2018

Found a workaround for now: if you place a path in the key field, it works. So, based on the example above:

terraform {
  backend "s3" {
    bucket = "xxx-terraform" 
    key = "pubweb.tfstate"
    region = "eu-west-2"
    workspace_key_prefix = "pubweb/env:/"
  }
}

If you change it to the following:

terraform {
  backend "s3" {
    bucket = "pubweb/env:/xxx-terraform" 
    key = "pubweb.tfstate"
    region = "eu-west-2"
  }
}

It will store things under env:/pubweb/env:/xxx-terraform in the S3 bucket.

I will be using this for now until the prefix is working.

@ooglek

ooglek commented Jul 17, 2018

This is still a problem: using workspace_key_prefix causes Terraform to be unable to list the states stored in S3. We're running into it as well.

Additionally, the terraform_remote_state data source does not use the config passed by the -backend-config flag, so it is difficult to point it at the same location as the configured backend when using a centralized configuration. 😞
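
For example, to read the state written by the backend at the top of this thread, the data source has to repeat the whole backend config by hand. A rough sketch, using the values from the original report and 0.11-style syntax (the workspace argument here follows the data source docs and may differ on older releases):

data "terraform_remote_state" "pubweb" {
  backend   = "s3"
  workspace = "qa"

  config {
    bucket               = "xxx-terraform"
    key                  = "pubweb.tfstate"
    region               = "eu-west-2"
    workspace_key_prefix = "pubweb/env:/"
  }
}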

@oceanlewis

Still an issue as of Terraform 0.11.8, with AWS Provider version 1.40.

@cgspohn

cgspohn commented Oct 24, 2018

Having the same issue, using Terraform 0.11.9 and AWS provider 1.40.0.

@jbardin
Member

jbardin commented Oct 26, 2018

@davidarmstronglewis, @cgspohn,

There hasn't been a reproducible example provided here yet, so unfortunately I'm not sure what the actual issue is. If either of you have an example, I'll be happy to take a look.

@cgspohn

cgspohn commented Oct 26, 2018

@jbardin Thanks for your response, here is what we are seeing.

Remote setup:

terraform {
  backend "s3" {
    workspace_key_prefix = "component/worker/"
    profile              = "terraform"
    bucket               = "some-bucket-state-mgmt"
    region               = "us-east-1"
    key                  = "terraform.tfstate"
    encrypt              = "true"
    dynamodb_table       = "some-state-lock"
  }
}

Then for the provider we have this:

provider "aws" {
  region = "us-east-1"
  profile = "production"   # this is actually variable, based on the workspace.
  version = "~> 1.40"
}

When we get started, we do terraform init and then terraform workspace new dev1. We apply in dev1 and all works well; we see the object in S3 at component/worker/dev1/terraform.tfstate. Then we do terraform workspace new prod, for example, apply, etc.; all goes well, and we see the object component/worker/prod/terraform.tfstate created as well. But here comes the issue: if we do terraform workspace list we get the following output:

$ terraform workspace list
  default

$ terraform workspace show
dev1
$ terraform -v
Terraform v0.11.9
+ provider.archive v1.1.0
+ provider.aws v1.40.0

The workspaces are not listed; instead we see default and a blank line, which seems odd. The show command correctly shows the workspace we are in. The workaround to change to an existing workspace is to run terraform workspace new prod, which works, and the state is OK as it should be.

@bakaie

bakaie commented Oct 26, 2018

Yes, this is basically the same info I put in this issue six months ago. Removing the workspace_key_prefix from the backend config fixes the issue of the workspaces not showing up when you do a list or select. The problem is caused by using workspaces with a workspace_key_prefix set up.

My workaround was to put the prefix as part of the key path, and this fixed the issue for me. It would be nice to be able to use the key prefix, as that feature looks to have been created for workspaces.
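
For reference, the documented layout is <workspace_key_prefix>/<workspace>/<key> for non-default workspaces, while the default workspace just uses the key on its own. With my config that should give roughly the following (the key name is illustrative, since I entered mine at the init prompt):

terraform {
  backend "s3" {
    bucket               = "some-bucket"
    key                  = "terraform.tfstate"
    region               = "us-east-1"
    workspace_key_prefix = "some/place/in/s3"
  }
}

# default workspace -> s3://some-bucket/terraform.tfstate
# workspace "prod"  -> s3://some-bucket/some/place/in/s3/prod/terraform.tfstate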

@averigin

The initial example in this thread (plus a number of others) has a workspace_key_prefix with a trailing slash. Terraform adds a slash to the end of this when listing the objects in the S3 bucket:

prefix := b.workspaceKeyPrefix + "/"

This means that the S3 API call is using a prefix ending with 2 slashes, so the response lists nothing. (If you export TF_LOG=debug before running terraform workspace list you can see the S3 API calls Terraform is making.)

For me, removing the trailing slash made terraform workspace list work as expected (running Terraform 0.11.11).
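
For the config at the top of this thread, that means something like this (same settings, just without the trailing slash on the prefix):

terraform {
  backend "s3" {
    bucket               = "xxx-terraform"
    key                  = "pubweb.tfstate"
    region               = "eu-west-2"
    workspace_key_prefix = "pubweb/env:"
  }
}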

There's a simple fix here for someone familiar with Go.

ghost removed the waiting-response label on Feb 22, 2019
@jbardin
Member

jbardin commented Feb 22, 2019

Excellent catch, @averigin; I don't know how I missed that piece! It doesn't explain the entire story, since each of these examples worked with our infrastructure, but it very well could be what is breaking the config for some users.

@jbardin
Member

jbardin commented Feb 23, 2019

OK, I'm going to mark this as fixed for 0.12, but we can revisit it of course if there continues to be a problem.

After diving into the extra slash issue, it turns out there were a few oddities in the key parsing with workspace_key_prefix that could have unexpected effects. In order to limit other issues around slashes being added, removed, and split, I added validation on workspace_key_prefix to disallow leading and trailing slashes entirely. This change will be listed under "BACKWARDS INCOMPATIBILITIES / NOTES:" in the 0.12 release.
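
Roughly, something like this (values are only illustrative):

workspace_key_prefix = "pubweb/env:"     # accepted
workspace_key_prefix = "pubweb/env:/"    # rejected by the new validation (trailing slash)
workspace_key_prefix = "/pubweb/env:"    # rejected by the new validation (leading slash)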

jbardin closed this as completed on Feb 23, 2019
@mbowiewilson

mbowiewilson commented Aug 14, 2019

@jbardin I've been having a similar issue using 0.12 with one of my applications, but I think I have a clue about what has been going on. My workspace_key_prefix was terraform, which holds Terraform state files for many of my team's applications. I moved the state files for this particular application to a clean S3 path (e.g. terraform_thisapplication) and updated the workspace_key_prefix accordingly. After that, everything worked as expected with terraform workspace list and terraform workspace select.

I suspect the issue I and others were having is related to having too many files (or too many Terraform state files) under the workspace_key_prefix. I'm not sure if it matters, but most of the state files under terraform in my S3 bucket are not associated with workspaces.

@ghost

ghost commented Aug 15, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

ghost locked and limited conversation to collaborators on Aug 15, 2019