
provider/archive: Converting to datasource. #8492

Merged (7 commits, Oct 25, 2016)

Conversation

@BSick7 (Contributor) commented Aug 26, 2016

Since the output file is always regenerated, you get flapping between runs.

It makes sense to convert this to a data source that always generates a new archive file, but only shows as new if the SHA of the output file differs.

@kwilczynski (Contributor) commented:

@BSick7 nice!

This also takes care of the clean-up issue I've been discussing with @stack72.

outputDirectory := path.Dir(outputPath)
if outputDirectory != "" {
	if _, err := os.Stat(outputDirectory); err != nil {
		if err := os.MkdirAll(outputDirectory, 0777); err != nil {
@kwilczynski (Contributor) commented on the diff:

I would prefer '0755' to be here.

@BSick7 (Contributor Author) replied:

Good stuff.
I should probably do 0644 actually.

@kwilczynski (Contributor) replied:

If you do, you won't be able to enter the directory created. :)
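For context on this exchange: a directory needs the execute bit to be traversable, so 0644 (rw-r--r--) would leave the created directory impossible to enter, while 0755 (rwxr-xr-x) lets the owner write and everyone traverse. A minimal sketch of the agreed change, based on the snippet quoted above (the helper name is hypothetical, not the PR's exact diff):

package main

import (
	"fmt"
	"os"
	"path"
)

// ensureOutputDirectory illustrates the review outcome: create the
// archive's parent directory with 0755 rather than 0777 (wider than
// necessary) or 0644 (missing the execute bit, so unenterable).
func ensureOutputDirectory(outputPath string) error {
	outputDirectory := path.Dir(outputPath)
	if outputDirectory == "" {
		return nil
	}
	if _, err := os.Stat(outputDirectory); err != nil {
		if err := os.MkdirAll(outputDirectory, 0755); err != nil {
			return fmt.Errorf("could not create directory %q: %s", outputDirectory, err)
		}
	}
	return nil
}

func main() {
	if err := ensureOutputDirectory("out/example.zip"); err != nil {
		fmt.Println(err)
	}
}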

@BSick7 (Contributor Author) commented Aug 26, 2016

@kwilczynski Done!

@BSick7 (Contributor Author) commented Aug 29, 2016

@apparentlymart Rebased to fix the build; ready for review.

@Ninir (Contributor) commented Aug 31, 2016

@BSick7 For informative purposes, could you explain a bit how you avoided the flapping between runs in the code? (Was the flapping caused by the zip SHA sum always changing?)
Trying to learn Go and get deeper into Terraform... thanks!

@BSick7 (Contributor Author) commented Aug 31, 2016

@Ninir The original resource code had an Exists function that returned true only if the archive file already existed. Since the file never survives the current plan in Atlas, it always appeared as if the resource needed to be created.

Since it's now a data source, the archive file is always generated. The SHA is set as the ID at the end of the read; this is what Terraform uses to determine whether the data source appears new.
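For readers following along, a minimal sketch of the pattern just described, using hypothetical attribute names (output_path, output_sha) rather than the PR's exact schema:

package archive

import (
	"crypto/sha1"
	"encoding/hex"
	"io/ioutil"

	"github.com/hashicorp/terraform/helper/schema"
)

func dataSourceFileRead(d *schema.ResourceData, meta interface{}) error {
	outputPath := d.Get("output_path").(string)

	// ... archive generation elided: the zip is always (re)written here ...

	data, err := ioutil.ReadFile(outputPath)
	if err != nil {
		return err
	}
	checksum := sha1.Sum(data)
	sha := hex.EncodeToString(checksum[:])
	d.Set("output_sha", sha)

	// Setting the content SHA as the ID is what keeps the data source
	// from appearing new unless the archive contents actually changed.
	d.SetId(sha)
	return nil
}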

@Ninir (Contributor) commented Aug 31, 2016

@BSick7 Ok got it, thank you for the explanation! 👍

@jen20 (Contributor) commented Sep 3, 2016

Hi @BSick7! This looks like a nice improvement - I'm going to pull it locally and verify the tests all still pass before merging. Thanks for the pull request!

@@ -4,20 +4,15 @@ import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"github.com/hashicorp/terraform/helper/schema"
@jen20 (Contributor) commented on the diff:

This needs goimports running. I'll do this when I pull locally.
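For reference, goimports would typically separate the standard-library imports from the third-party one with a blank line, like so:

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"

	"github.com/hashicorp/terraform/helper/schema"
)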

@jen20 (Contributor) commented Sep 3, 2016

Hi @BSick7! I've pulled this into a branch in our repository (archive-data-source) and fixed up a few minor things. One thing I have noticed is that the tests are failing:

=== RUN   TestAccArchiveFile_Basic
--- FAIL: TestAccArchiveFile_Basic (0.02s)
        testing.go:265: Step 0 error: Check failed: Check 2/2 error: Not found: archive_file.foo
        testing.go:329: Error destroying resource! WARNING: Dangling resources
            may exist. The full state and error is shown below.

            Error: Check failed: Check 1/1 error: found file expected to be deleted: zip_file_acc_test.zip

            State: <no state>

This makes sense since we never delete the file - we should determine whether that is correct behaviour or not and update the destroy check if so.

@apparentlymart (Contributor) commented:

Data sources don't have a Delete action since they are assumed to not have side-effects. It feels a bit weird to create something persistent in a data source, honestly... and the fact that it can't get deleted is one example of that weirdness.

Perhaps we could make this feel better as a data source by making the file it creates be explicitly a temporary file (e.g. in $TEMPDIR) and clean it up automatically on Terraform exit, rather than putting it in a place that makes it feel persistent and leaving it there. It being temporary feels more natural in a different way, too: local resources are not likely to persist from one Terraform run to another if Terraform isn't always run on the same machine, and so making it a data source and temporary makes it explicit that the file won't persist and needs to be re-generated on each run.

@BSick7 (Contributor Author) commented Sep 5, 2016

I like the idea of using $TEMPDIR. Does this construct currently exist?

@apparentlymart (Contributor) replied:

It looks like we use ioutil.TempFile in some places, which solves part of the problem. The challenge here would be doing the cleanup step.

When I originally suggested this I was thinking about the explicit provider "Close" step that we ostensibly support, which could in principle allow the archive provider to keep a list of all the files it created and then delete them once it is closed. However, digging deeper into the code I see that ResourceProviderCloser seems to be a bit of a mirage: as far as I can tell, no provider actually implements this interface because helper/schema itself doesn't seem to implement it.

So this doesn't seem as straightforward as it did when I originally suggested it. We could try to add support for a CloseFunc callback to the ResourceProvider implementation in helper/schema, which would act as the close counterpart to ConfigureFunc... though honestly I'm not sure if that happens late enough for what we want here, since Terraform will probably execute the "close provider" action as soon as all of the resources from the provider have been dealt with, and not necessarily wait until all of the dependencies of the resources have been dealt with.

So I think perhaps my suggestion can't be done right now without some core Terraform work. We could potentially just start using ioutil.TempFile and accept that these files don't get cleaned by Terraform, presuming that some process outside of Terraform will clean it up... but that's not great if the archive files created will tend to be quite large in size, which I think is the case for bundles for AWS Lambda.
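For illustration, a minimal sketch of the ioutil.TempFile approach discussed above, with the caveat just mentioned that nothing ever deletes the file (the helper name is hypothetical):

package archive

import (
	"io/ioutil"
)

// createTempArchive writes the zip into the system temp directory instead
// of a user-chosen path, making its non-persistence explicit. Note that
// nothing cleans this file up on Terraform exit, which is exactly the
// limitation described above.
func createTempArchive() (string, error) {
	f, err := ioutil.TempFile("", "terraform-archive-")
	if err != nil {
		return "", err
	}
	defer f.Close()
	// ... write zip contents to f ...
	return f.Name(), nil
}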

@BSick7 (Contributor Author) commented Sep 9, 2016

@jen20 @apparentlymart
ran goimports

I'm still a little confused by the dangling resource issue.
If a datasource doesn't have a Delete, why would the test expect to delete?
Should I drop the resource shim?

@apparentlymart (Contributor) replied:

@BSick7 I think the issue is that the CheckDestroy function on the test is checking to see if the file has been deleted, presumably because that made sense for the managed resource form of this, but there's no longer any code that actually deletes it.
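A hypothetical sketch of what the updated check could look like, given that data sources have no Delete (the function name is assumed; the file name comes from the failing test output above):

package archive

import (
	"os"

	"github.com/hashicorp/terraform/terraform"
)

// testAccArchiveFileCheckDestroy no longer asserts the file was deleted,
// since nothing deletes it now; it just removes the artifact itself so
// later test runs start from a clean slate.
func testAccArchiveFileCheckDestroy(s *terraform.State) error {
	os.Remove("zip_file_acc_test.zip") // best-effort cleanup; errors ignored
	return nil
}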

@BSick7 (Contributor Author) commented Oct 3, 2016

Now fixes #8501 by adding output_base64sha256 exported attribute.
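For context, a base64-encoded SHA-256 matches the format AWS Lambda's source_code_hash argument expects. A minimal sketch of how such a value is computed (illustrative, not the PR's exact code):

package archive

import (
	"crypto/sha256"
	"encoding/base64"
	"io/ioutil"
)

// outputBase64Sha256 computes the SHA-256 of the generated archive and
// base64-encodes it, the format expected by consumers such as
// aws_lambda_function's source_code_hash.
func outputBase64Sha256(outputPath string) (string, error) {
	data, err := ioutil.ReadFile(outputPath)
	if err != nil {
		return "", err
	}
	checksum := sha256.Sum256(data)
	return base64.StdEncoding.EncodeToString(checksum[:]), nil
}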

@coen-hyde commented Oct 4, 2016

It would be nice if this archive data source didn't require writing the file to disk. Instead, there could be a source output that could be fed into other resources, e.g. aws_lambda_function.

EDIT: Actually, I just realized why this is a bad idea. We probably don't want people creating massive archives and then having those stored in the state file.

@Ninir (Contributor) commented Oct 12, 2016

Can't wait to have this; so many issues will be resolved! Thank you @BSick7 :)

@BSick7 (Contributor Author) commented Oct 13, 2016

@jen20 I believe I have fixed this.
Any chance you could verify my acceptance test again?

@jen20 (Contributor) commented Oct 20, 2016

Thanks @BSick7! One of us will review this over the next day or two for inclusion in the next version of Terraform.

@stack72 (Contributor) commented Oct 25, 2016

Hi @BSick7

This works now as expected :)

% make testacc TEST=./builtin/providers/archive
==> Checking that code complies with gofmt requirements...
go generate $(go list ./... | grep -v /terraform/vendor/)
2016/10/25 15:58:29 Generated command/internal_plugin_list.go
TF_ACC=1 go test ./builtin/providers/archive -v  -timeout 120m
=== RUN   TestAccArchiveFile_Basic
--- PASS: TestAccArchiveFile_Basic (0.06s)
=== RUN   TestProvider
--- PASS: TestProvider (0.00s)
=== RUN   TestZipArchiver_Content
--- PASS: TestZipArchiver_Content (0.00s)
=== RUN   TestZipArchiver_File
--- PASS: TestZipArchiver_File (0.00s)
=== RUN   TestZipArchiver_Dir
--- PASS: TestZipArchiver_Dir (0.00s)
PASS
ok      github.com/hashicorp/terraform/builtin/providers/archive    0.077s

Thanks for the work here!

@stack72 merged commit 65523fa into hashicorp:master on Oct 25, 2016
@ghost commented Apr 15, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators Apr 15, 2020