Procedural Test Generation #470
This sounds great. I hope the template syntax can be relatively terse to make instantiating the templates really light-weight. In that example, it didn't really look like using templating saved much typing or effort at all, but maybe I'm misunderstanding something. A couple ideas:
These are just ideas, and probably you've thought through the implications more than I have; I just wanted to mention them. I'm happy you're making progress here. @gdeepti What do you think would work out well for eventually refactoring the SIMD.js tests?
Thanks for the feedback, @littledan!
As a demonstration of generating destructuring binding tests, the example I gave was deliberately minimal. The write-up had already grown longer than I liked, and I was trying to limit its length.
Good point. The only drawback I can see to removing the "end" delimiters is
For this, I would prefer to observe some sort of namespacing, if only to allow
I think we'll need to express frontmatter information in both places, although
When generating files, the two
I think that the current hierarchy is good for discoverability purposes.
The test generation tool has been implemented, reviewed, and landed. We're currently vetting the new tool with tests for destructuring binding, the spread operator, and Annex B "function in block" semantics (all under review at the time of this writing).
The following language features are observable from a number of distinct syntactic forms*:
Testing a given feature following this project's current practices would involve authoring highly-related tests that differed only in the syntactic form. For instance, completely testing 13.3.3.6 Step 6.b alone would require 18 separate test files scattered throughout the project file hierarchy. The semantics behind destructuring binding are very involved, so this 1-to-18 ratio makes for a large surface area for testing. These tests would vary according to a well-defined pattern.
Although writing such tests is a dull task, that cost is paid once and enjoyed many times. Maintaining those tests, on the other hand, is a more concerning prospect. When tests are added in the future, there is nothing to support a contributor in maintaining parity across all the different syntactic forms (or even recognizing that as an expectation).
One solution** would be to maintain files describing the tests abstractly and use a tool to generate the related tests. I spoke about this at the July 2015 TC-39 meeting (slides here), and the committee's conclusion at that time was to "continue exploration."
I've been doing just that, and here is what I think we'll need:
- Test "templates": test files that define one or more insertion points, which can then be expanded with values specified in external "test case" files.
- Some way to control the frontmatter emitted in the generated tests: `es6id`, `description`, `info`, `features`, and `flags`.
- The ability to assert SyntaxErrors, as well!
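The insertion-point mechanism described above can be sketched in a few lines. To be clear, the `/*{ name }*/` region syntax and the `expand` helper here are hypothetical illustrations, not the syntax or API the tool necessarily adopts:

```python
import re

# Hypothetical region-marker syntax: a template's insertion points are
# written as /*{ name }*/ and replaced with values from a "test case" file.
REGION = re.compile(r"/\*\{\s*(\w+)\s*\}\*/")

def expand(template, case_values):
    """Replace each insertion point in `template` with the matching case value."""
    return REGION.sub(lambda m: case_values[m.group(1)], template)

print(expand("var /*{ elems }*/ = /*{ vals }*/;",
             {"elems": "[x = 1]", "vals": "[]"}))
# var [x = 1] = [];
```

Whatever the concrete syntax, the essential property is that the template remains a normal-looking test file and the case file supplies only the varying fragments.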
And although this is not a hard requirement, I think it would be good if the generated test files documented the source files from which they were derived.
Here's how I think the solution should behave: given a "test case" file and a test template, the tool should expand the template's "regions" with the provided values.
There are plenty of implementation details that will need to be decided (e.g. the template syntax and the programming language to implement the tool), but first I'd like to get the general requirements/behavior solidified. Still, it can be hard to visualize this without something concrete, so here's an example of what the files could look like:
- `src/templates/dstr-binding/var.tmpl`
- `src/templates/dstr-binding/func-decl.tmpl`
- `src/cases/dstr-binding/ary-name-init-undef.js`

We could go farther, but I'm suspicious of trying to do too much all at once. Hopefully this strikes the right balance between simplicity and power.
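The contents of those example files aren't reproduced above, so the following sketch only illustrates the shape of the workflow: one case expanded against both templates, with a provenance comment in each generated file. The `/*{ body }*/` region syntax and the header wording are assumptions invented for illustration:

```python
# Illustrative sketch: expand one "case" body against several templates and
# prepend a provenance header so each generated file documents its sources.
# The /*{ body }*/ region syntax and header wording are assumptions.

def generate(case_name, case_body, templates):
    """Yield (template_name, generated_contents) pairs."""
    for tmpl_name, tmpl_body in templates.items():
        header = (
            "// This file was procedurally generated. Sources:\n"
            f"// - template: {tmpl_name}\n"
            f"// - case: {case_name}\n"
        )
        yield tmpl_name, header + tmpl_body.replace("/*{ body }*/", case_body)

templates = {
    "var.tmpl": "var /*{ body }*/;",
    "func-decl.tmpl": "function f(/*{ body }*/) {}\nf();",
}

for name, contents in generate("ary-name-init-undef.js",
                               "[x = undefined] = []", templates):
    print(f"--- generated from {name} ---")
    print(contents)
```

One case file fanning out to many templates is exactly what addresses the 1-to-18 ratio mentioned earlier: the varying fragment is written once, and parity across syntactic forms falls out of the generation step.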
Thanks for sticking with me through all that! I'd welcome feedback from anyone, but (based on how gh-467 is proceeding) I believe these details will only concern Test262 contributors/maintainers (not consumers). So with that in mind, @anba @bterlson @caitp @domenic @goyakin @leobalter @littledan @rwaldron: what do you all think?
* - These are the syntactic forms I had in mind for each language feature:

- `for..in` head, `for..of` head, (`new`), SuperCall, Method (object initializer), Method (class expression), Method (class declaration), Static Method (class expression), Static Method (class declaration), Accessor method (`set`), GeneratorExpression, GeneratorDeclaration, GeneratorMethod (object initializer), GeneratorMethod (class expression), GeneratorMethod (class declaration), Static GeneratorMethod (class expression), Static GeneratorMethod (class declaration), ArrowFunction
- `var` statement, `let` statement, `const` statement, FunctionExpression, FunctionDeclaration, Method (object initializer), Method (class expression), Method (class declaration), Static Method (class expression), Static Method (class declaration), GeneratorExpression, GeneratorDeclaration, GeneratorMethod (object initializer), GeneratorMethod (class expression), GeneratorMethod (class declaration), Static GeneratorMethod (class expression), Static GeneratorMethod (class declaration), ArrowFunction
** - Another solution to this basic problem would be to introduce tooling that could assert test file parity based on file names and meta-data. I proposed such a system last year, but the conversation died out. I believe this is because a tool like this could only work on an informal heuristic, leaving room for error.
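For contrast, a parity-asserting heuristic of the kind described might look like the sketch below. The `<form>-<case>.js` naming convention and the set of required forms are assumptions invented for illustration; as the footnote notes, any such convention is informal, which is where the room for error comes from:

```python
from collections import defaultdict

# Hypothetical naming convention: "<syntactic-form>-<case-id>.js".
FORMS = {"var", "let", "const"}  # the forms every case should cover

def missing_tests(filenames):
    """Report (form, case) pairs that have no corresponding test file."""
    seen = defaultdict(set)
    for name in filenames:
        form, _, rest = name.partition("-")
        seen[rest.removesuffix(".js")].add(form)
    return sorted(
        (form, case)
        for case, forms in seen.items()
        for form in FORMS - forms
    )

files = ["var-ary-init.js", "let-ary-init.js", "var-obj-init.js"]
print(missing_tests(files))
# [('const', 'ary-init'), ('const', 'obj-init'), ('let', 'obj-init')]
```

Note that this checker can only flag missing files; unlike generation from a shared case file, it cannot verify that the existing files actually test the same behavior.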