
terraform test: Experimental JUnit XML reporting #34291

Merged 2 commits into main on Jan 15, 2024

Conversation

apparentlymart (Contributor)

This adds an experimental new option -junit-xml=FILENAME for the terraform test command. When specified, it writes a JUnit XML report to the specified filename once the test run is complete, while continuing to report test progress in the UI in the usual way.

I had a little extra time and energy at the end of the day today, just before a long weekend, and so I took a stab at a quick implementation of one possible answer to #34264.

This is only experimental for now because it remains to be seen if this particular mapping to the JUnit XML schema is actually useful in real software -- this format is woefully underdocumented and implemented slightly differently by each consumer -- and so we might change this significantly before stabilizing it, or remove it altogether if it turns out that there's no useful mapping to JUnit XML here. Hopefully those who are interested in #34264 will try this experiment (with a future alpha release, once this is merged) against their favorite JUnit XML-consuming software and report back whether the report is presented in a helpful way.

It's a de-facto convention for JUnit XML to be reported separately to a file, rather than replacing the normal test run output, since tools that consume this format tend to present its results in a separate and less prominent place than the output of the command itself. This option is designed to follow that convention for consistency with various other software that produces this format.

The implementation here is intentionally pretty minimal and simplistic just as a starting point for gathering feedback. The main priority is that it be easy to evolve this based on feedback and to remove it altogether if we decide not to stabilize this at all. If this does become stabilized, it might deserve being factored out into a separate package so that we can minimize the amount of logic embedded directly inside the views package, and it will certainly need some unit tests to represent what we've committed to supporting in future versions.
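
For example, a quick sketch of the intended usage (the report below is illustrative, with made-up file and run names; whether this element mapping is actually useful is exactly what the experiment is meant to find out):

    terraform test -junit-xml=report.xml

The report maps each test file to a <testsuite> element and each run block to a <testcase> element, roughly like:

    <testsuites>
      <testsuite name="tests/example.tftest.hcl" tests="2" failures="0" errors="0" skipped="0">
        <testcase name="first_run" />
        <testcase name="second_run" />
      </testsuite>
    </testsuites>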

liamcervante (Member) left a comment

I think the implementation here looks good. I'm just worried we're not capturing all the failure cases.

It's possible for a whole file to fail via some of the validation checks we do - maybe this is equivalent to a compile-time error or something in traditional languages?

totalTests := len(file.Runs)
totalFails := 0
totalErrs := 0
for _, run := range file.Runs {
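    // (excerpt truncated; the loop body classifies each run's final
    // status, incrementing totalFails or totalErrs, and the totals end
    // up as attributes on the corresponding <testsuite> element)
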
liamcervante (Member):

It is possible for all the run blocks in a test file to be skipped while the overall file still errors, since we perform some validation over the test file itself. I think in that case there would be no indication that the test failed?

apparentlymart (Contributor, Author):

Can you share some details on how I could recreate the situation you're describing to test with?

So far I've only found cases where the configuration is statically invalid, which prevents the test run from even starting and therefore doesn't produce any test report at all, and that behavior seems consistent with how test harnesses in other languages capable of static validation behave. But if you can think of an example that would be a dynamic failure reported at the whole-file level then I'd like to support that, I just don't know how to create it to test it. 😀

liamcervante (Member):

I was thinking of this function: https://github.com/hashicorp/terraform/blob/main/internal/configs/test_file.go#L147. But it actually only returns warnings at the moment, so I think it's okay for the time being 👍 Something to revisit if we ever add dynamic validation that can actually fail!

errorsName := xml.Name{Local: "errors"}

enc.EncodeToken(xml.StartElement{Name: suitesName})
for _, file := range suite.Files {
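    // (excerpt truncated; inside this loop each test file is encoded as
    // one <testsuite> element within the enclosing <testsuites> wrapper)
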
liamcervante (Member):

The test file itself can have error diagnostics attached to it, and I think they're not being exposed anywhere at the moment. Is there somewhere we can surface those?

internal/command/views/test.go (outdated review thread, resolved)
apparentlymart (Contributor, Author):
If any of the .tf files or .tftest.hcl files are statically invalid to the extent that the test run can't even start, we never reach the "conclusion" event and therefore no file is generated at all.

In my experience, that's the typical treatment of "compile-time" errors in test harnesses in other languages that have a distinction between static and dynamic errors, because of course the test program can't even compile if it isn't valid. Therefore I think that's an acceptable compromise here, too.

Back when I was working on JUnit XML for the previous incarnation of the testing experiment, I found that CI tools which support this format tend to treat the absence of the file as a total test failure, with the static errors captured and reported as part of the main run output rather than as part of the test result report. I've not re-tested this with any JUnit-supporting tools yet, because I no longer have any of them set up to test with, but hopefully feedback on the experimental version in the alpha will tell us whether that assumption still holds for newer tools like Azure DevOps.

Previously we were just ignoring skipped runs completely, but JUnit XML
has some elements/attributes for describing those and so we'll populate
those too now.
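
For reference, common JUnit XML dialects record a skipped test both as a count on the suite and as a child element on the test case, so the shape being populated here is roughly (names illustrative):

    <testsuite name="tests/example.tftest.hcl" tests="1" failures="0" errors="0" skipped="1">
      <testcase name="skipped_run">
        <skipped />
      </testcase>
    </testsuite>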
apparentlymart (Contributor, Author) commented Jan 6, 2024

I think this should now be complete enough to make for a reasonable experimental inclusion in an alpha release so we can have the folks who want to use it test it with the CI/CD software they use. If this approach seems promising based on that feedback, we can flesh it out further, add tests for it, etc.

apparentlymart merged commit 4a0d06c into main on Jan 15, 2024
6 checks passed
Reminder for the merging maintainer: if this is a user-visible change, please update the changelog on the appropriate release branch.

ckiefer commented Jan 24, 2024

This is precisely what I've been searching for. We are interested in reporting test outputs in GitLab, which, at present, only accommodates test reports in the JUnit report format. For more information, you can refer to their documentation: https://docs.gitlab.com/ee/ci/testing/unit_test_reports.html.
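
For reference, GitLab would pick such a report up through the artifacts:reports:junit key in .gitlab-ci.yml; a minimal sketch (the job name is made up, and it assumes a runner with an experiments-enabled Terraform build available):

    terraform-test:
      script:
        - terraform test -junit-xml=test-out-junit.xml
      artifacts:
        when: always
        reports:
          junit: test-out-junit.xml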

I built terraform using 'go install' on branch main from https://github.com/hashicorp/terraform. This results in:

ckiefer@machine:~/git-repos/terraform$ /home/ckiefer/GO/bin/terraform -v
Terraform v1.8.0-dev
on linux_amd64

When I run it in the folder where I have the tests, I get:

ckiefer@machine:~/git-repos/terraform-google-bucket$ /home/ckiefer/GO/bin/terraform test -junit-xml=test-out-junit.xml

Error: JUnit XML output is not available

│ The -junit-xml option is currently experimental and therefore available only in alpha releases of Terraform CLI.

@apparentlymart Could you please advise how to proceed from here in order to successfully generate an output in JUnit format?

Many thanks for the valuable work done by everybody.

Mechanolatry commented:
Error: JUnit XML output is not available
│ The -junit-xml option is currently experimental and therefore available only in alpha releases of Terraform CLI.

@ckiefer - Use the following to ensure the experimental features are enabled in your build:

go install -ldflags="-X 'main.experimentsAllowed=yes'"

This is per the header comment in the /experiments.go file.
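
Putting the steps together, an experiments-enabled build would look roughly like this (install paths vary with your GOBIN/GOPATH settings):

    git clone https://github.com/hashicorp/terraform.git
    cd terraform
    go install -ldflags="-X 'main.experimentsAllowed=yes'"
    ~/go/bin/terraform version    # should report a -dev version
    cd /path/to/your/module       # then run the tests from your module directory
    ~/go/bin/terraform test -junit-xml=test-out-junit.xml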

apparentlymart (Contributor, Author) commented Jan 26, 2024

Hi all,

@ExtelligenceIT is correct that you can produce an experiments-enabled build for yourself manually that way.

Experiments are always enabled in our alpha release builds, so my plan for this change had been to wait until the first v1.8.0-alpha release and then invite folks to test and give feedback based on that. We typically hold experiment-feedback discussions in our community forum rather than on GitHub, just because it's easier to keep all of the feedback and discussion in one spot while things are (sometimes) changing quickly. Once we reach the first alpha (which should appear once the v1.7.x patch releases settle down), I'll start a topic in the community forum and link it from #34264 so we can talk about it all in one place.

I would not suggest trying to discuss the change right here in this PR just because our closed-PR-locking bot will lock discussion here soon (since the change has been merged), and it'll be frustrating to have an ongoing discussion cut off like that.

apparentlymart deleted the f-test-junit-xml branch on January 26, 2024 at 17:14
ckiefer commented Feb 5, 2024

Thanks a lot to both of you, @ExtelligenceIT and @apparentlymart

I was able to test this experimental feature, and I also found a bug that I'd like to log and discuss. Just let me know when / where I can do that.

ckiefer commented Feb 6, 2024

Hi @apparentlymart

I see that the pre-release v1.8.0-alpha20240131 is out: https://github.com/hashicorp/terraform/releases/tag/v1.8.0-alpha20240131

Where in the community can I log any issues regarding the -junit-xml=FILENAME feature?

alisdair (Contributor) commented Feb 6, 2024

Hi @ckiefer, thanks for trying out the alpha! Please file any issues you find here on GitHub.

github-actions bot commented Mar 8, 2024

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators on Mar 8, 2024