feat: add output directory functionality to library and plugin #106
Conversation
ensureFileExists(file.path);
const writeStream = fs.createWriteStream(file.path);
writeStream.write(file.content);
return streamPromise(writeStream);
@lcampos @brpowell thoughts on using createWriteStream here vs the promises API and writeFile? Bryan mentioned we don't necessarily need to use streams here since the content is already in memory, which makes sense. I'd thought the stream could still be useful in case we're writing a large amount of data to a file (loosely based on what we did for logs earlier in this library and this util), but I'm not too familiar with whether we actually get much of a benefit in that scenario, so I'm open to changing it too!
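For comparison, a minimal sketch of the writeFile alternative using Node's fs.promises API; the `writeResultFile` helper name and the demo path are illustrative, not part of the library:

```typescript
// Hypothetical sketch: writing the formatted result with fs.promises.writeFile
// instead of a write stream. `writeResultFile` stands in for the PR's helper.
import { promises as fs } from 'fs';
import * as path from 'path';
import * as os from 'os';

async function writeResultFile(filePath: string, content: string): Promise<void> {
  // mkdir with recursive:true covers the directory part of ensureFileExists
  await fs.mkdir(path.dirname(filePath), { recursive: true });
  // one call replaces createWriteStream + write + streamPromise
  await fs.writeFile(filePath, content, 'utf8');
}

async function demo(): Promise<string> {
  const target = path.join(os.tmpdir(), 'apex-demo', 'results.xml');
  await writeResultFile(target, '<testsuites/>');
  return fs.readFile(target, 'utf8');
}

demo().then(content => console.log(content)); // prints "<testsuites/>"
```

Since the formatted content is already a single in-memory string, this is a strictly simpler one-call equivalent of the stream version above.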
There could be a benefit in using streams to write the test result output, but the current implementation isn't taking advantage of the construct. Currently, we have the TestResult in memory as a JavaScript object, we pass it to the JUnitReporter and call format, which builds a string of the entire file in memory from the same information. We then create a stream here and call write once, which writes the whole file in one chunk. The way we want to use the stream is to break that string into much smaller chunks and push them to disk one by one as they become available. Think of it as watching Netflix: if clicking on an episode meant waiting for the whole episode to be fetched from the server before it got pushed to the video player, you'd be using a streaming service that doesn't actually stream. We want every little bit that gets loaded from the server pushed to the TV as soon as it's ready.
Ideally a reporter could implement Readable and we could do something like this:
const testResult: TestResult = getTestResult()
pipeline(new JUnitReporter(testResult), createWriteStream('results.xml'))
and under the hood, the reporter would be doing something like this while pipeline "talks" to it:
// pipeline kicks off
this.push('<?xml version="1.0" encoding="UTF-8"?>\n')
// pipeline is ready for more info
this.push('<testsuites>\n')
this.push(`${tab}<testsuite name="force.apex" `)
this.push(`timestamp="${new Date(summary.testStartTime).toISOString()}" `)
// ok STOP - I'm still busy writing to the disk
//...
//...
// ok continue :)
this.push(`timestamp="${new Date(summary.testStartTime).toISOString()}" `)
I don't think it's worth trying to do this in the current PR, which is why I thought we could simplify this by using writeFile for now and file a story for utilizing the stream mechanism. I think it's worth coming back to, because imagine a customer running a mega test suite (which we know is common at this point) that generates a huge set of results: we are more or less doubling the amount of memory consumed by first building the entire file. Hope that makes sense.
Let's keep it as it is. We could potentially write large files and using streams is preferred in those scenarios.
@brpowell I posted without seeing your reply. If I remember correctly, the writeStream API still has an internal buffer that would benefit the write we are trying to do even when we don't have a stream.
I do agree on logging something to keep track of further enhancements in a different PR.
@lcampos even before any sort of stream instantiation/utilization occurs, though, we have more or less doubled the memory consumption by eagerly building the string in memory. You are right that there are internal mechanisms in the stream API that control the flow, but it's a bit pointless since the readable source is already in memory.
Force-pushed from e4835a5 to 9148ad5
new behavior: the plugin will create its custom
  break;
case 'tap':
  this.logTap(result);
  break;
case 'junit':
  this.logJUnit(result);
  break;
case 'json':
Fixes a bug where we weren't handling the case where json is specified as a result format. A test for this scenario is added below.
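A minimal sketch of the dispatch with the json branch handled; the `formatResult` function and the placeholder reporter outputs are illustrative, only the case labels mirror the diff:

```typescript
// Hypothetical sketch of the result-format switch, including the json branch
// that was previously missing. Placeholder strings stand in for the reporters.
type ResultFormat = 'human' | 'tap' | 'junit' | 'json';

function formatResult(result: object, format: ResultFormat): string {
  switch (format) {
    case 'tap':
      return 'TAP version 13'; // placeholder for TapReporter output
    case 'junit':
      return '<?xml version="1.0"?>'; // placeholder for JUnitReporter output
    case 'json':
      // the previously unhandled branch: serialize the in-memory result
      return JSON.stringify(result, null, 2);
    default:
      return String(result); // human-readable fallback
  }
}

console.log(formatResult({ outcome: 'Passed' }, 'json'));
```

Without the json case, the request would fall through to the default branch and emit the human-readable output instead of JSON.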
const coverageRecords = result.tests.map(record => {
  return record.perTestCoverage;
});
fileMap.push({
We also need to add another file with the same content and call it test-result-codecoverage.json. We should add a note in the code that we'll eventually remove it; we're just adding it as a way to start moving users to the new test-result-<test-run-id>-codecoverage.json.
I added the test-result-codecoverage file directly in the plugin, so CLI users will see both the old file and the new file when code coverage is specified. The CLI has a custom format for that file, and since we intend to remove that behavior in the future anyway, I think it's better if we keep it in the plugin.
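The dual-file approach discussed here could be sketched like this; the `coverageFiles` helper, the `OutputFile` shape, and the sample run id are all assumptions for illustration:

```typescript
// Hypothetical sketch: emit the legacy fixed-name coverage file alongside the
// new run-id-based file, sharing the same serialized content.
interface OutputFile {
  path: string;
  content: string;
}

function coverageFiles(testRunId: string, coverageRecords: object[]): OutputFile[] {
  const content = JSON.stringify(coverageRecords, null, 2);
  return [
    // TODO: legacy name kept temporarily while users migrate to the new file
    { path: 'test-result-codecoverage.json', content },
    { path: `test-result-${testRunId}-codecoverage.json`, content }
  ];
}

const files = coverageFiles('707xx0000ABC', [
  { apexClassName: 'MyClass', percentage: '96%' }
]);
console.log(files.map(f => f.path).join(', '));
```

Keeping both entries in one helper makes it easy to delete the legacy line later when the migration period ends.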
What does this PR do?
Adds output directory functionality to the library and adds support in the plugin.
What issues does this PR fix or reference?
@W-8320773@