Improve Site Editor performance tests #48138
Conversation
Size Change: +1.6 kB (0%) Total Size: 1.34 MB
Flaky tests detected in 3da31c0. 🔍 Workflow run URL: https://github.com/WordPress/gutenberg/actions/runs/4343998768
Force-pushed from cc68370 to 64bbf26
@youknowriad I was able to get the Site Editor perf failure screenshot from CI (see artifacts from this job), and this is what I have so far from the Typing test: [screenshot] I have no idea what's going on there - where that title's from, why the content is empty, etc. Let me know if it rings any bell for you. I can't repro locally, so I'll keep debugging on CI.
Force-pushed from 29775bb to de1a5af
Force-pushed from 070059c to dfe0ac8
FYI I've found the real reason for the performance failures and it's... ugly: https://github.com/WordPress/gutenberg/pull/48094/files#r1112904171
Force-pushed from 0211e8a to c90cc1a
while ( i-- ) {
	await page.close();
	page = await browser.newPage();
it.each( iterations )(
Iterate with `it.each` instead of `while`, as the new page is already being created within the global `afterEach` hook.
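For reference, a minimal sketch of the suggested shape (the sample count, test name, and body are placeholders for illustration, not the actual Gutenberg suite):

```js
// Illustrative only: `iterations` and the test body are stand-ins for the
// real measurements; the global afterEach hook is assumed to recreate `page`.
const iterations = [ 1, 2, 3 ];

describe( 'Typing', () => {
	it.each( iterations )( 'measures typing performance (sample %d)', async () => {
		// No manual page.close()/browser.newPage() needed here.
		await page.keyboard.type( 'typing sample' );
	} );
} );
```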
this is great! also, to double-check: this won't re-use the page, will it? I know we want to recreate the browser page each time. I suspect everything is okay, but I wanted to make sure we verify we aren't accidentally re-using a browser tab with this change.
The global `afterEach` that @WunderBart mentioned should take care of that.
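A hypothetical sketch of what such a global hook might look like (e.g. in a Jest setup file; the real Gutenberg hook may differ):

```js
// Runs after every test: dispose of the used tab and hand the next test a
// fresh one. `browser` and `page` are the globals exposed by jest-puppeteer.
afterEach( async () => {
	await page.close();
	global.page = await browser.newPage();
} );
```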
const resultsFilename = basename( __filename, '.js' ) + '.results.json';
writeFileSync(
	join( __dirname, resultsFilename ),
	JSON.stringify( results, null, 2 )
);
The `results` object contains data for both `Loading` and `Typing` tests, so let's write the file in the `afterAll` hook instead of at the end of the `Typing` test.
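A rough sketch of that suggestion, assuming the spec already imports `writeFileSync`, `basename`/`join` and accumulates both suites' measurements in a shared `results` object:

```js
// Write the combined Loading + Typing measurements once, after the whole
// suite has finished, instead of at the end of the Typing test.
afterAll( () => {
	const resultsFilename = basename( __filename, '.js' ) + '.results.json';
	writeFileSync(
		join( __dirname, resultsFilename ),
		JSON.stringify( results, null, 2 )
	);
} );
```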
is storing this file alongside the sourcecode the optimal choice here? I know I was confused when first storing results files as artifacts because of the directory structure. would we want to move these instead to our relatively-new `__test-results` directory?

we could consider some kind of `TEST_RESULTS_BASE_DIR` env variable to hold the base path if we wanted to avoid doing path-math everywhere: running in CI vs. locally in Docker vs. locally without Docker…
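A sketch of that idea (the `TEST_RESULTS_BASE_DIR` name is the one floated above, not an existing variable): resolve the base directory from the environment and fall back to the spec's own directory for local runs:

```js
const { basename, join } = require( 'path' );

// Fall back to colocating with the spec when the env variable isn't set,
// e.g. when running locally without Docker.
const resultsBaseDir = process.env.TEST_RESULTS_BASE_DIR || __dirname;
const resultsFile = join(
	resultsBaseDir,
	basename( __filename, '.js' ) + '.results.json'
);
```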
Agreed, we should be able to pass the `__test-results` path as an env variable and save directly there. Will give it a try.

edit: OTOH, let's leave that for a follow-up PR. It touches all the other perf tests, and I wanted to make this PR about the Site Editor ones only.
I've drafted the refactor in #48684, but I'm not sure how to handle the "locally without Docker" situation. I think it still makes the most sense (as it's most convenient) to store alongside the sourcecode when running e.g. via `npm run test:performance -- site-editor`.

Let me know if that makes sense to you.
the main thing about colocating with sourcecode is that it leaves its mess distributed around the project. at least when creating a project-root-level `__test-results` directory it's easy to wipe that out in one command.

see for example challenges with the distributed `build`, `build-styles`, `build-modules`, and `build-types` directories in each package directory.
I'll add further remarks in the linked PR.
thanks! ✋
Clearing local storage on a page that we're about to close can throw an error because the current context could be an iframe, not the top-level document. Attempting to clear the LS in an iframe would end up throwing the `Failed to read the 'localStorage' property from 'Window': Access is denied for this document` error.
An `about:blank` page cannot have its LS cleared. It will throw `Evaluation failed: DOMException: Failed to read the 'localStorage' property from 'Window': Access is denied for this document.`
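One conservative way to guard against those cases is to swallow the access error around the clear call. A sketch (the helper name is hypothetical, and this is not necessarily the exact fix applied in this PR):

```js
// Clearing localStorage can throw when the current context is an iframe or
// an about:blank page ("Access is denied for this document"), so ignore it.
async function clearLocalStorageSafely( page ) {
	try {
		await page.evaluate( () => window.localStorage.clear() );
	} catch ( error ) {
		// Access denied in this context; nothing to clear anyway.
	}
}
```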
This reverts commit f86fa08.
Force-pushed from ff3cdc7 to 12c0558
Force-pushed from 12c0558 to 10d7c04
Force-pushed from 7392eb8 to 8486b48
const editorURL = await page.url();

// Number of sample measurements to take.
describe( 'Loading', () => {
Why do we have sub describes and iterations within the same "test"? I find this a bit confusing personally. What benefits do we have from this change?
I don't have a strong opinion here, but it sometimes helps separate the concerns better. Like with the `Loading` block, it's easier to set up the `it.each` iterator below if it's wrapped with the `describe` block. Or in the case of e.g. multiple typing test cases (like with the post editor perf specs - c1, c2), it would be more readable to group them with the `describe( 'Typing' )` block. Having said that, I'm open to flattening it back to a single, top-level `describe`.
I don't have a strong opinion either, but I feel like whatever we do, it should be the same for all of our tests, and there are already other suites using a single parent describe (post editor, classic theme, block theme).
Good point; consistency is king. I'll switch it to a single parent `describe`.
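Roughly, the flattened shape discussed above would look like this (names and sample counts are illustrative only):

```js
const iterations = [ 1, 2, 3 ];

// One top-level describe per suite, with per-sample iterations inside it.
describe( 'Site Editor performance', () => {
	it.each( iterations )( 'loading (sample %d)', async () => {
		// measure loading time...
	} );

	it( 'typing', async () => {
		// measure typing responsiveness...
	} );
} );
```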
LGTM 👍
What?
Refactor the Site Editor performance tests to reduce flakiness and improve readability.
How?
- Clean up the `waitForSelector` calls,
- Write the results file in the `afterAll` hook instead of at the end of the test block,
- Use `test.each` instead of a `while` loop - every test will open on a new page anyway.