Improve remaining build tests #1828
Conversation
This pull request is being automatically deployed with Vercel (learn more). 🔍 Inspect: https://vercel.com/pikapkg/snowpack/d6vnnyg4s
describe('packageManifest.homepage', () => {
  beforeAll(() => {
    setupBuildTest(__dirname);
This one line does have to be added to the beginning of every test; however, it can be omitted if a different setup is required. Hopefully it’s the best of both worlds! And it keeps the functionality where, if you only want to run a single test locally, only that one test will build.
Yea I like this! However, using the synchronous execSync here probably slows these tests down a ton more than needed. To do this, Node/execa has to block the thread until the command completes. This prevents running multiple tests in parallel, unless Node is spinning up workers for each test file.
In either case, you can return a promise here, so strong +1 for using the async execa command and then awaiting that here, so that Jest can go do other stuff while the test folder sets up.
That’s a good point about running in parallel. I usually try not to pre-emptively optimize something, but it does make more sense to flow with how the tests are running. Will change.
const html = fs.readFileSync(path.join(__dirname, 'build', 'index.html'), 'utf8');
But with that setup script comes a caveat: none of these files exist when the test loads, so you’ll have to read files in beforeAll(() => …
I actually prefer this; touching globally shared test values is just asking for one test to modify them and then create unexpected behavior.
As we move to use the programmatic interface & more "one fixture per test" tests, a separate files object would most likely be created for each test, so this is a good step in that direction.
@@ -21,15 +26,19 @@ describe('buildOptions.baseUrl', () => {

it('import.meta.env works', () => {
  // env is present in index.js
  expect(indexJS).toEqual(
  expect(files['/index.js']).toEqual(
This is a totally-optional test helper for reading files. You pass in an array of files to readFiles([...files], { cwd }), and you get back a map of { [fileName]: "contents" }. I found it helpful for reading a bunch of files at once, but without the annoyance of snapshots.
You’ll notice many tests just skip this helper, especially if the test is simple to begin with. Again, it’s optional to use, but I find it easier to read.
}
});
}
});
Bye bye, build.test.js. We don’t need this anymore, now that the individual tests have been broken out.
It also means we don’t need any global RegEx to strip out all the different line endings for Unix vs Windows, ANSI colors, backslashes, and other weirdness.
Or, if that is needed in an individual test (or is the focus of the test), it can be handled case-by-case rather than trying to manage The One RegEx To Rule Them All™ or experiencing the classic “oh no, why did changing the RegEx for my test break 3 other ones” again.
I'd hate to lose all of these useful helpers to the sands of time... any way that we could add a link to this commit somewhere in case we need to come back and re-add some of these? (maybe in test-utils?)
Yup, just needed to grab the “strip whitespace” regex to handle some mac vs. windows whitespace issues.
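For reference, a normalizer like the “strip whitespace” regex mentioned here could be as simple as the following (a sketch, not the exact test-utils code):

```javascript
// Normalize line endings and trailing whitespace so mac/linux/windows
// build output compares equal in tests.
function stripWhitespace(output) {
  return output
    .replace(/\r\n/g, '\n')      // CRLF -> LF
    .replace(/[ \t]+\n/g, '\n')  // drop trailing spaces/tabs on each line
    .trim();
}
```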
Adding to test-utils (and updating the place where you included it).
I’m going to merge this because it’s non-disruptive to the core codebase, and it will help contributor experience a little bit. Plus, if any contributor was having an issue with a build test, this should help too. Post-merge, I’m happy to incorporate any feedback or additions/changes. But I made sure to test that we got back the intended functionality of “run 1 test, only build that test.”
});

it('mounted ./src', () => {
  const js = path.join(cwd, '_dist_', 'index.js');
nit: since most tests start as copy-paste, I'd actually want to prioritize a pass on all of these to clean them up. Examples:
- js is a vague variable name; we use *Loc in the rest of the codebase to signify a file path
- test names: when tests are this focused, I generally like to see "does X when Y" naming to make it really clear what's being tested (hard to tell from a quick read for some of these)
- cwd isn't really the right term here; we want the test output directory, not the current working directory. Actually, could setupBuildTest return that variable, so that we turn the implicit connection of "oh, when setupBuildTest is done, I know that this other folder on disk now exists" into something more explicit?

Left some comments to take a look at next week as you continue this work!
const distJS = fs.readFileSync(path.join(cwd, '_dist_', 'index.js'), 'utf-8');
const $ = cheerio.load(html);
let files = {};
let $;

describe('buildOptions.baseUrl', () => {
also, the top-level describe name should match the file name so that we keep the current behavior of yarn test base-url. Right now, we have the extra overhead of keeping track of two different names.
I’m game for renaming the describe() names to be clearer in purpose, but I’m against renaming them to match filenames. No matter what they’re named, you still have to specify part of the filename with yarn test base-url. But they’re an opportunity to explain a test’s focus in clearer language, and not just repeat the folder name (Jest shows you both). To that end, I’m not saying that they are human-readable, clear language now, and I’m fine with them changing; that’s just where I’d like them to be.
Whoops, looks like I misunderstood. yarn test base-url seems to work in the existing main branch. I could have sworn I tried this and it didn't work. nvm!
beforeAll(() => {
  setupBuildTest(__dirname);

  files = readFiles(['index.html', 'index.js', '_dist_/index.js'], {cwd});
also, a comment as I'm writing my first test: do we need to maintain an allowlist like this? I'm seeing some unnecessary overhead managing this array and the different files[PATH] in the tests below. Since there's no connection between the two, I could see this getting out of date as well.
A better interface imo would be getFile(path), which looks up the file during the test itself:
it('...', async () => {
  expect(await getFile('/index.js')).toEqual(...);
});
You’re right that the initialization is awkward. But reading file contents is the bulk of these tests, and most tests can’t have an inline expect(await getFile(…)) like in your example. Most tests reuse file contents across tests, so it needs to exist in some shared variable.
In writing tests I found myself even getting confused by having to name every file in a project (how do I name /index.js vs /_dist_/index.js vs /_dist_/components/foo2.js?). I found it much easier to just refer to the filepath itself, both in reading & writing tests (files['/index.html'] is clearer than indexHTML). I think even if you scope variables within tests, you still have the same problem of having to come up with names for all these files, and even though it’s a minor, optional thing, I still find it helpful to establish a good pattern for the most common part of the build tests.
Re-reading files isn’t really a big deal; I’m mostly concerned about the DX.
But that said, WDYT about a glob pattern? e.g. readFiles(['**/*.js', '**/*.html']). Or even shorter: readFiles(someDir, {ignore: […]}) to read all files? That’d fix the allowlist problem and still save having to name dozens of files, which was my main aim.
I'm not ready to give up on solving this with a getter function that easily :) Mainly because the objective overhead cost of keeping that duplicate file allowlist matching the files referenced in the individual tests doesn't beat out the subjective "easier variable naming" problem.
I think your issue is addressable here. For example, I'd be happy with either of these:
// 1.
const fileToTest = await getFile('/index.js');
expect(fileToTest).toInclude(`import __SNOWPACK_ENV__ from './__snowpack__/env.js';`);
expect(fileToTest).toInclude(`import.meta.env = __SNOWPACK_ENV__;`);
// 2. (with the result memoized so that only one lookup is actually used)
expect(await getFile('/index.js')).toInclude(`import __SNOWPACK_ENV__ from './__snowpack__/env.js';`);
expect(await getFile('/index.js')).toInclude(`import.meta.env = __SNOWPACK_ENV__;`);
BUT... if you're still on board with moving towards running these tests through the JS Build API as a part of our v3 release, then I'm fine with the existing files: [] pattern for now, since the helper that I'm proposing would most likely go away when that API arrives.
> BUT... if you're still on board with moving towards running these tests through the JS Build API as a part of our v3 release, then I'm fine with the existing files: [] pattern for now, since the helper that I'm proposing would most likely go away when that API arrives.
Ah yeah, I’d be down for that! I think that’d be a great replacement for the current files: [] pattern.
I think your issue is addressable here. For example, I'd be happy with either of these:
Maybe I didn’t explain clearly… the current situation is more like imagining if #1 had 20+ files to keep track of that needed to be referenced between tests.
#2 could work, but in order to be memoized we’d really need to beef it up, and I’d rather just discard it in favor of the Node API later on.
See https://github.com/snowpackjs/snowpack/pull/1861/files#diff-5097166b779657484196ba597acf4521937f197da2c8803d52480c87d1fb8c75R13 for a reference on the changes… for now I changed it so that it just reads a directory.
also: OH MY GOD IT’S SO MUCH FASTER!!!
Changes
Follow-up to #1728. Though this PR is noisy, it doesn’t change the fundamental nature of our build tests. It simply removes snapshots in test/build/* to explain, in more human language, what our tests are trying to do, and cuts down on the “snapshot fatigue” we’ve been experiencing in this project.

I should note that a couple of tests were removed because they were no longer significant or they overlapped with another test (one example was config-ts-format, because it seemed to overlap 100% with config-ts-file; both simply tested snowpack.config.ts).

Many tests were fixed in the process, such as legacy-mount-script, which wasn’t even testing the legacy mount scripts (now it is 🙈).

A few other tests were renamed to be clearer in purpose, e.g.:
- plugin-optimize -> plugin-hook-optimize and transform-sourcemap -> plugin-hook-transform: these tests were very closely related, but you wouldn’t know it from the old test names
- resolve-imports -> config-alias: this was testing config’s alias functionality
- resource-proxy -> import-assets: even though we sometimes call it “import proxying” internally, almost no one calls importing .css in .js “proxying.” For that reason, this was renamed to be clearer in purpose: it’s importing assets

There are a few other minor changes made in the spirit of cleanup & consistency that don’t affect test coverage, etc.
Testing
Tests should pass. Note: it’s hard to prove, but I believe this work really sped up test/build; however, we won’t see GitHub Actions pass any quicker until we do the same work for test/esinstall and test/create-snowpack-app.
Docs
A note was added to CONTRIBUTING.md, but I’d like to continue this work in test/esinstall before updating that document fully.