
feat: add output directory functionality to library and plugin #106

Merged: 5 commits merged into develop from aj/outputDir on Dec 8, 2020

Conversation

AnanyaJha (Contributor) commented Dec 4, 2020

What does this PR do?

Adds output directory functionality to the library and adds support in the plugin.

What issues does this PR fix or reference?

@W-8320773@

@AnanyaJha AnanyaJha changed the title Add output directory functionality to library and support in plugin feat: add output directory functionality to library and plugin Dec 4, 2020
@AnanyaJha AnanyaJha requested a review from lcampos December 4, 2020 18:30
@AnanyaJha AnanyaJha marked this pull request as ready for review December 7, 2020 21:20
@AnanyaJha AnanyaJha requested a review from a team as a code owner December 7, 2020 21:20
```typescript
ensureFileExists(file.path);
const writeStream = fs.createWriteStream(file.path);
writeStream.write(file.content);
return streamPromise(writeStream);
```
AnanyaJha (Contributor, Author) commented Dec 7, 2020

@lcampos @brpowell Thoughts on using createWriteStream here vs. the promises API and writeFile? Bryan mentioned we don't necessarily need to use streams here since the content is already in memory, which makes sense. I'd thought the stream could still be useful in case we're writing a large amount of data to a file (loosely based on what we did for logs earlier in this library and this util), but I'm not familiar enough with whether we actually get much of a benefit in that scenario, so I'm open to changing it too!
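For comparison, here is a minimal sketch of the two options under discussion — fs.promises.writeFile vs. createWriteStream — assuming the content is already a complete string in memory. The target path and function names are illustrative, not the library's actual code.

```typescript
import { promises as fsPromises, createWriteStream, mkdirSync } from 'fs';
import { join, dirname } from 'path';
import { tmpdir } from 'os';

// Illustrative target path, not the library's real output location.
const target = join(tmpdir(), 'apex-results', 'results.json');

// Option A: fs.promises.writeFile — simplest when the whole payload
// is already a string in memory.
async function writeWithPromises(content: string): Promise<void> {
  mkdirSync(dirname(target), { recursive: true });
  await fsPromises.writeFile(target, content, 'utf8');
}

// Option B: createWriteStream — the PR's current approach; a single
// write() call pushes the whole buffer through the stream at once.
function writeWithStream(content: string): Promise<void> {
  mkdirSync(dirname(target), { recursive: true });
  return new Promise((resolve, reject) => {
    const ws = createWriteStream(target);
    ws.on('error', reject);
    ws.on('finish', resolve);
    ws.write(content);
    ws.end();
  });
}
```

With the formatted result fully in memory, Option A is simpler; Option B only starts paying off once the producer emits chunks incrementally.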


There could be a benefit in using streams to write the test result output, but the current implementation isn't taking advantage of the construct. Currently we have the TestResult in memory as a JavaScript object, we pass it to the JUnitReporter and call format, which builds a string of the entire file in memory from the same information. We create a stream here and call write once, which writes the whole file in one chunk. The way we want to use the stream is to break that string into much smaller chunks and push them to disk one by one as they become available.

Think of it as if you were watching Netflix, and when you clicked on an episode you had to wait for the whole episode to be fetched from the server before it gets pushed to the video player for you to watch. You'd be using a streaming service that doesn't actually stream. Every little bit that gets loaded from the server, we want pushed to the TV as soon as it's ready.

Ideally a reporter could implement Readable and we could do something like this:

```typescript
const testResult: TestResult = getTestResult();
pipeline(new JUnitReporter(testResult), createWriteStream('results.xml'));
```

and under the hood, the reporter would be doing something like this while pipeline "talks" to it:

```typescript
// pipeline kicks off
this.push('<?xml version="1.0" encoding="UTF-8"?>\n');

// pipeline is ready for more info
this.push('<testsuites>\n');
this.push(`${tab}<testsuite name="force.apex" `);
this.push(`timestamp="${new Date(summary.testStartTime).toISOString()}" `);

// ok STOP - I'm still busy writing to the disk
// ...
// ...
// ok continue :)
this.push(`timestamp="${new Date(summary.testStartTime).toISOString()}" `);
```

I don't think it's worth trying to do this in the current PR, which is why I thought we might simplify this by using writeFile for now and file a story for utilizing the stream mechanism. I think it's worth coming back to, because imagine a customer running a mega test suite (which we know is common at this point) that generates a huge set of results - we are more or less doubling the amount of memory consumed by first building the entire file. Hope that makes sense.

Contributor

Let's keep it as it is. We could potentially write large files and using streams is preferred in those scenarios.

Contributor

@brpowell I posted without seeing your reply. If I remember correctly, the writeStream API still has an internal buffer that would benefit the write we are trying to do even when we don't have a stream.

I do agree on logging something to keep track of further enhancements in a different PR.


@lcampos Even before any sort of stream instantiation/utilization occurs, though, we have more or less doubled the memory consumption by eagerly building the string in memory. You are right that there are internal mechanisms in the stream API that control the flow, but it's a bit pointless since the readable source is from memory.
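The internal buffer mentioned above can be observed directly: a Writable buffers up to highWaterMark bytes, and write() returns false once that threshold is exceeded, signaling backpressure to the producer. A minimal sketch — the tiny highWaterMark is only there to make the effect visible:

```typescript
import { Writable } from 'stream';

const received: string[] = [];

// Toy sink with a 4-byte internal buffer.
const slowSink = new Writable({
  highWaterMark: 4,
  write(chunk, _enc, cb) {
    received.push(chunk.toString());
    setImmediate(cb); // defer completion, as a real disk write would
  }
});

// 2 bytes buffered, still under the 4-byte mark: returns true.
const fits = slowSink.write('ab');

// 8 bytes now in flight, over the mark: returns false. A well-behaved
// producer should pause here and wait for the 'drain' event.
const overflows = slowSink.write('cdefgh');
```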

AnanyaJha (Contributor, Author) commented Dec 8, 2020

New behavior:
The library creates a test-run-id.txt file by default, plus the result file associated with the result format a consumer specifies. It will also create any custom files the consumer specifies through fileInfos.

The plugin will create its custom test-result-<runID>.json file by default every time and will add its custom code coverage report when needed. It will also specify the junit or tap result format through the library when needed.
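Based on that description, the output-directory configuration might look roughly like this. The interface and field names below are assumptions inferred from this comment, not the library's published types.

```typescript
// Assumed shapes only — inferred from the PR discussion, not the
// actual @salesforce/apex-node interfaces.
interface FileInfo {
  filename: string;
  content: string | object;
}

interface OutputDirConfig {
  dirPath: string;
  resultFormats?: ('junit' | 'tap')[];
  fileInfos?: FileInfo[];
}

// Example: write a JUnit report plus one custom file to ./test-results.
const config: OutputDirConfig = {
  dirPath: './test-results',
  resultFormats: ['junit'],
  fileInfos: [{ filename: 'my-custom-summary.json', content: { passed: 10 } }]
};
```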

@AnanyaJha AnanyaJha requested a review from lcampos December 8, 2020 15:54
```typescript
      break;
    case 'tap':
      this.logTap(result);
      break;
    case 'junit':
      this.logJUnit(result);
      break;
    case 'json':
```
AnanyaJha (Contributor, Author) commented Dec 8, 2020
Fixes a bug where we weren't handling the case where json is specified as a result format. A test for this scenario is further down.
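A hedged sketch of that fix: the format switch now handles 'json' explicitly instead of letting it fall through. The function and its placeholder return values are illustrative stand-ins for the plugin's actual logTap/logJUnit reporters.

```typescript
// Illustrative stand-in for the plugin's format dispatch; the string
// returns are placeholders for the real reporter calls.
function logResult(format: string, result: object): string {
  switch (format) {
    case 'tap':
      return 'tap output';   // placeholder for this.logTap(result)
    case 'junit':
      return 'junit output'; // placeholder for this.logJUnit(result)
    case 'json':
      // Previously unhandled: serialize the result as JSON.
      return JSON.stringify(result, null, 2);
    default:
      return 'human output';
  }
}
```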

```typescript
const coverageRecords = result.tests.map(record => {
  return record.perTestCoverage;
});
fileMap.push({
```
Contributor

We also need to add another file with the same content and call it test-result-codecoverage.json. We should add a note in the code that we'll eventually remove it; we are just adding it as a way to start moving users to the new test-result-<test-run-id>-codecoverage.json.
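The dual-filename migration described here can be sketched as a small helper; the function name is hypothetical.

```typescript
// Hypothetical helper: emit the coverage payload under both the legacy
// name and the new run-id-based name during the migration period.
// TODO(deprecation): drop the legacy name once users have migrated.
function coverageFileNames(runId: string): string[] {
  return [
    'test-result-codecoverage.json',          // legacy name, kept temporarily
    `test-result-${runId}-codecoverage.json`  // new name
  ];
}
```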

AnanyaJha (Contributor, Author) commented Dec 8, 2020

I added the test-result-codecoverage file directly in the plugin, so CLI users will see both the old file and the new file when code coverage is specified. The CLI has a custom format for that file, and since we intend to remove that behavior in the future anyway, I think it's better if we keep it in the plugin.

@AnanyaJha AnanyaJha merged commit 1bc7414 into develop Dec 8, 2020
@AnanyaJha AnanyaJha deleted the aj/outputDir branch December 8, 2020 21:06
AnanyaJha added a commit that referenced this pull request Dec 10, 2020
AnanyaJha added a commit that referenced this pull request Dec 10, 2020
klewis-sfdc pushed a commit that referenced this pull request Oct 24, 2022
3 participants