
Running WPT #78

Closed
foolip opened this issue May 31, 2021 · 13 comments

Comments
@foolip commented May 31, 2021

Hi @MattiasBuelens!

You asked in #testing on IRC:

> I'm working on a polyfill for web streams, and I'd like to run WPT against it in multiple browsers. Is there a way to make "wpt run" automatically load an extra JavaScript file at the start of each test, so I can inject my polyfill?

Did you figure out how to do this?

@MattiasBuelens (Owner)

No, not yet.

Right now, I'm using wpt-runner to run the tests on Node with jsdom, but that has its limits. For example: for #20, I want to take the browser's built-in streams implementation and polyfill only the newer (not yet natively implemented) features. Or perhaps I find that a browser doesn't yet implement a spec change, and I may want to monkey-patch certain properties and methods ahead of browser updates.

To test this, I need a way to run WPT in actual browsers with the polyfill loaded. It seems that wpt run almost fits the bill, except I can't figure out how to make it load an extra script file before every test. All of the relevant streams tests are already in .any.js format, so it'd suffice if it behaved like an extra // META: script= rule (like this).
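
For reference, a minimal sketch of what such a test header looks like, assuming the polyfill were served at /web-streams-polyfill/polyfill.es2018.js (that path and the test body are made up for illustration; the // META: lines follow the standard .any.js syntax):

```js
// META: global=window,worker
// META: script=/web-streams-polyfill/polyfill.es2018.js

// The server-generated wrapper page loads each // META: script= file
// before the test code below runs, in both window and worker scopes.
test(() => {
  assert_true("ReadableStream" in self, "ReadableStream should exist");
}, "ReadableStream is defined");
```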

Would it be better if I opened an issue over at web-platform-tests/wpt?

@foolip (Author) commented May 31, 2021

Sure, feel free to open an issue at https://github.com/web-platform-tests/wpt/issues.

I would explore one of these things:

  • If all of these tests already load a common helper, edit that helper to first inject your polyfill.
  • Otherwise, add your polyfill to testharnessreport.js (a sketch of this follows below).
  • Or just edit every test individually.

All of these require local modification. If you want something that can be run with ./wpt run out of the box, maybe that's possible to build out once the concept has been proven to work.
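
For the testharnessreport.js option, a minimal sketch of the local modification, assuming the polyfill is served at /web-streams-polyfill/polyfill.es2018.js (that path belongs to this project, not to WPT):

```js
// Prepended locally to resources/testharnessreport.js, which the test
// wrappers load after testharness.js but before the test script itself.
if (typeof importScripts === "function") {
  // Worker scopes: importScripts loads the polyfill synchronously.
  importScripts("/web-streams-polyfill/polyfill.es2018.js");
} else if (typeof document !== "undefined") {
  // Window scope: document.write keeps the load synchronous while the
  // wrapper page is still being parsed.
  document.write('<script src="/web-streams-polyfill/polyfill.es2018.js"><\/script>');
}
```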

@MattiasBuelens (Owner)

I would have preferred an out-of-the-box solution, but I understand that since one doesn't exist yet, it'd be risky to commit to something without knowing whether it would actually solve my problem. So sure, I'll start by hacking the WPT files locally first. 😛

Thanks for the help!

@foolip (Author) commented May 31, 2021

Right, I'm guessing you might discover some new problems by just hacking it together, which are worth knowing about early. For example, if you need to inject your polyfill at a very specific time, that might affect the shape of a solution.

@MattiasBuelens (Owner)

All right, I got something working. 😛

  1. Copy dist/polyfill.es2018.js to test/web-platform-tests/web-streams-polyfill/polyfill.es2018.js
  2. Patch tools/wpt/serve.py (within test/web-platform-tests/):
@@ -81,6 +81,7 @@
             query = "?" + query
         meta = "\n".join(self._get_meta(request))
         script = "\n".join(self._get_script(request))
+        script = self._script_replacement("script", "/web-streams-polyfill/polyfill.es2018.js") + script
         response.content = self.wrapper % {"meta": meta, "script": script, "path": path, "query": query}
         wrap_pipeline(path, request, response)
  3. Run (on Windows):
cd test/web-platform-tests/
python ./wpt run chrome "streams/readable-streams/tee.any.html" --binary "C:\Program Files\Google\Chrome\Application\chrome.exe" --yes --headless --no-pause-after-test

That runs the tests for ReadableStream.tee(). And yes, they're all passing with this polyfill. 😁

I can change this to wpt run chrome streams to run all of the tests, but actually I only want to run a subset of the stream tests. For example, we don't support transferring a polyfilled ReadableStream to a worker, so ideally I want to skip those tests. Similarly, we can't detach ArrayBuffers synchronously from user-land code, so some tests for .respond() and .respondWithNewView() are expected to fail.

Right now, I'm managing this with some custom filters. If I understand the documentation for wpt correctly, the tool can generate a list of "expectations" with wpt update-expectations, which can then be fed back into wpt run? Guess I need to do some more digging. 😅

@MattiasBuelens (Owner)

@foolip Sorry to bother you. Do you happen to know of any example usages of wpt update-expectations, and how to feed that into wpt run? Or perhaps other/better ways to specify which tests are expected to fail and/or should be skipped by wpt run? I've been staring at the Chromium and Firefox source code, but there's just... too much going on there. 😛

@foolip (Author) commented Jul 7, 2021

I'm afraid I don't know how to use that; it's not used in Chromium, AFAIK. I think it's used by Gecko, however, and @jgraham might know how to invoke it.

@jgraham commented Jul 8, 2021

So, I think no one uses the wpt command regularly; Gecko uses the functionality but with a different frontend.

But assuming your setup is pretty standard, wpt run --log-wptreport wptreport.json <product> <path>; wpt update-expectations wptreport.json should work. You might want to create a metadata directory and use --metadata <path> with both commands to ensure the metadata gets used.
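
If that works, wpt update-expectations writes one .ini file per test into the metadata directory, and later wpt run invocations (with the same --metadata) compare results against it. A hypothetical example of what a file like mymeta/streams/readable-streams/tee.any.js.ini can contain (the subtest name and disable reason here are invented; expected: FAIL and disabled: are the standard wptrunner metadata keys):

```ini
[tee.any.html]
  [ReadableStream teeing: reading one branch should not affect the other]
    expected: FAIL

[tee.any.worker.html]
  disabled: polyfilled streams cannot be transferred to a worker
```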

@MattiasBuelens (Owner) commented Jul 10, 2021

Not having much luck... 😕

PS C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests> python ./wpt update-expectations --product chrome streams.log.json --metadata mymeta
Processing log 1/1
Traceback (most recent call last):
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\wpt", line 11, in <module>
    wpt.main()
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wpt\wpt.py", line 179, in main
    rv = script(*args, **kwargs)
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wpt\update.py", line 32, in update_expectations
    updater.run()
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\update.py", line 182, in run
    rv = update_runner.run()
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\base.py", line 63, in run
    rv = step(self.logger).run(step_index, self.state)
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\base.py", line 34, in run
    self.create(state)
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\update.py", line 98, in create
    runner.run()
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\base.py", line 63, in run
    rv = step(self.logger).run(step_index, self.state)
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\base.py", line 34, in run
    self.create(state)
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\update\metadata.py", line 24, in create
    metadata.update_expected(state.paths,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\metadata.py", line 75, in update_expected
    for metadata_path, updated_ini in update_from_logs(id_test_map,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\metadata.py", line 278, in update_from_logs
    for item in update_results(id_test_map, update_properties, full_update,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\metadata.py", line 300, in update_results
    updated_expected = test_file.update(default_expected_by_type, update_properties,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\metadata.py", line 776, in update
    subtest.update(full_update=full_update,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\manifestupdate.py", line 286, in update
    prop_update.update(full_update,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\manifestupdate.py", line 407, in update
    property_tree = self.property_builder(self.node.root.run_info_properties,
  File "C:\Users\Mattias\Documents\Git\web-streams-polyfill\test\web-platform-tests\tools\wptrunner\wptrunner\manifestupdate.py", line 309, in build_conditional_tree
    properties, dependent_props = run_info_properties
ValueError: not enough values to unpack (expected 2, got 0)

I generated streams.log.json with:

python ./wpt run chrome "streams/readable-byte-streams/bad-buffers-and-views.any.html" --binary "C:\Program Files\Google\Chrome\Application\chrome.exe" --webdriver-binary "_venv3\Scripts\chromedriver.exe" --no-pause-after-test --log-wptreport streams.log.json --metadata mymeta

I'm on web-platform-tests/wpt@07a56dc.

Am I doing something wrong, or should I open an issue? 😅

@jgraham commented Jul 12, 2021

You might need to define update_properties for Chrome, cf. https://github.com/web-platform-tests/wpt/blob/07a56dce8775820d3a42d4ac94beb39428f6a585/tools/wptrunner/wptrunner/browsers/firefox.py#L265-L267 and https://github.com/web-platform-tests/wpt/blob/07a56dce8775820d3a42d4ac94beb39428f6a585/tools/wptrunner/wptrunner/browsers/firefox.py#L55. It should be OK to make it just return [], {}, since you presumably don't care about combining the results from multiple run configurations. (I think the factoring here is a little off; the properties used in the update should be configured in the updater, not linked to a specific browser, because really it depends on the target deployment, e.g. whether it's wpt CI or Gecko CI or whatever.)
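
For concreteness, a sketch of what that local patch to tools/wptrunner/wptrunner/browsers/chrome.py could look like, following the firefox.py pattern linked above:

```python
# Register the hook in the module's existing __wptrunner__ mapping,
# next to the entries that are already there...
__wptrunner__["update_properties"] = "update_properties"

# ...and define it. Returning an empty property list and an empty
# dependent-property dict means expectations are not conditioned on any
# run-info values: every run is treated as the same configuration.
def update_properties():
    return [], {}
```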

@MattiasBuelens (Owner)

@jgraham Sorry for not getting back to you sooner. I can confirm that adding a dummy update_properties to chrome.py does the trick! 😄 It correctly generates an expectations file in the metadata directory, and re-running wpt run with --metadata set does indeed compare the new test results against those expectations.

Would you accept a PR for this in upstream WPT? I don't think it'd hurt to copy a generic update_properties function like the one from servo.py over to chrome.py, no? Or is it better to keep it as a local patch for this polyfill only?

@foolip (Author) commented Sep 9, 2021

@MattiasBuelens I would be very happy to see a PR for that against WPT. I'm not sure what the correct fix is, but I've never gotten the expectations stuff to work with ./wpt run, and it sounds like you have! :)

@MattiasBuelens (Owner)

I decided not to use ./wpt run in the end. Instead, I wrote my own test runner on top of Playwright, reusing the local web server from wpt-runner to host the tests. See #96 for more details if you're curious.

Thanks for your assistance anyway! 🙂
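
For reference, the rough shape of such a Playwright-based setup (a sketch only, not the actual #96 implementation; the __wptResults global is an invented convention for handing testharness.js results to the runner):

```js
const { chromium } = require("playwright");

(async () => {
  // Assumes wpt-runner's local server is already hosting the tests.
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("http://localhost:8000/streams/readable-streams/tee.any.html");
  // A patched testharnessreport.js could stash the completed results on a
  // global; waitForFunction polls until that global becomes truthy.
  const handle = await page.waitForFunction(() => self.__wptResults);
  console.log(await handle.jsonValue());
  await browser.close();
})();
```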
