Fuzz tester that simulates Sierpinski Triangle demo #9952
Conversation
Incremental tests often rely on yielding at a specific point. Using an explicit API should make our tests more resilient to changes in implementation.
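For illustration only, here is a minimal sketch of what "yielding at an explicit point" means for a test. The scheduler and its helpers (`scheduleWork`, `yieldValue`, `flushUntilYield`) are hypothetical stand-ins, not the actual ReactNoop API; the point is that the test stops at a named marker rather than after a hard-coded number of work units.

```js
// Hypothetical sketch: an incremental test that stops at an explicit yield
// marker instead of relying on how the implementation happens to split work.
function createScheduler() {
  const queue = [];   // pending units of work
  const yielded = []; // values recorded by explicit yield calls
  return {
    scheduleWork(fn) {
      queue.push(fn);
    },
    yieldValue(value) {
      yielded.push(value);
    },
    // Flush units of work until `marker` has been yielded (or work runs out).
    flushUntilYield(marker) {
      while (queue.length > 0 && !yielded.includes(marker)) {
        const unit = queue.shift();
        unit();
      }
      return yielded.slice();
    },
  };
}

it('pauses at an explicit yield point', () => {
  const scheduler = createScheduler();
  scheduler.scheduleWork(() => scheduler.yieldValue('A'));
  scheduler.scheduleWork(() => scheduler.yieldValue('B'));
  scheduler.scheduleWork(() => scheduler.yieldValue('C'));
  // The test no longer cares how many units of work ran, only that it
  // stopped once 'B' was reached.
  expect(scheduler.flushUntilYield('B')).toEqual(['A', 'B']);
});
```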
  return {simulate, totalChildren, totalTriangles};
}

it('works', () => {
This test fails, but I want to check it in anyway. Maybe we should use the same approach for incremental bugs as we did for Fiber: put failing tests behind a feature flag and track them with a script?
We could also `xit` them for now to skip them, and when we fix them, immediately switch them back to `it` so they don't regress. This would be almost equivalent, except that we wouldn't get an indication when a skipped test suddenly, unintentionally passes (since you have to explicitly switch it back to `it` to see whether it passed).
Yeah, I've used `xit` for other things, such as the `node.normalize()` case. There are a number of these. I hope we can get a way to detect whether an `xit` test passes in the future.
Please name this something better though.
Sounds good
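As a rough illustration of the helper being wished for above (the `itFailing` name and wrapper are hypothetical, not an existing API in this codebase), a wrapper can invert the result so that a known-failing test breaks the suite the moment it unexpectedly passes:

```js
// Hypothetical wrapper: run a known-broken test and fail loudly if it ever
// starts passing, so it can be promoted back to a regular `it`.
function itFailing(name, body) {
  it(`${name} (expected to fail)`, async () => {
    let failed = false;
    try {
      await body();
    } catch (error) {
      failed = true; // the expected failure happened
    }
    if (!failed) {
      throw new Error(
        `"${name}" passed unexpectedly. Change it back to a regular it().`
      );
    }
  });
}
```

Later versions of Jest added a built-in `test.failing` for the same purpose, but at the time of this discussion a hand-rolled wrapper like this was the usual workaround.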
const totalChildren = leafTriangles.length;
const totalTriangles = triangles.length;

function treeIsConsistent(activeTriangle, counter) {
Minor nit: maybe `assertConsistentTree`, or something that makes it clearer that this throws.
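For illustration, the naming suggestion amounts to making the throw explicit in the name. A minimal sketch, assuming a hypothetical tree shape with `children` and `counter` fields (not the PR's actual data structure):

```js
// Hypothetical sketch: an assert-style helper whose name signals that it
// throws on failure rather than returning a boolean.
function assertConsistentTree(activeTriangle, expectedCounter) {
  // Walk the triangle tree and verify every leaf reflects the expected counter.
  const stack = [activeTriangle];
  while (stack.length > 0) {
    const node = stack.pop();
    if (node.children.length === 0) {
      if (node.counter !== expectedCounter) {
        throw new Error(
          `Leaf out of sync: expected counter ${expectedCounter}, ` +
            `found ${node.counter}.`
        );
      }
    } else {
      stack.push(...node.children);
    }
  }
}
```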
@@ -85,6 +85,8 @@ exports[`ReactDebugFiberPerf does not treat setState from cWM or cWRP as cascadi
⚛ (Committing Changes)
⚛ (Committing Host Effects: 2 Total)
⚛ (Calling Lifecycle Methods: 2 Total)

⚛ (React Tree Reconciliation)
Why did this happen?
`setState` inside `componentWillReceiveProps` schedules a callback if one is not already scheduled. What we should do is not schedule an additional idle callback if we're already performing idle work.
This is an existing issue, though, not one introduced by this PR. It's only visible now because the new version of `ReactNoop.flush()` keeps flushing until there is no more work (until `scheduledDeferredCallback` is null), whereas before it would only flush the current callback.
The fix is simple but outside the scope of this PR.
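As a simplified sketch of the flushing difference described above (not the actual ReactNoop implementation; `renderer.scheduledDeferredCallback` and the deadline object are stand-ins):

```js
// Before: flush only the callback that is currently scheduled. If running it
// schedules another callback, that work is left pending.
function flushCurrentCallbackOnly(renderer) {
  const callback = renderer.scheduledDeferredCallback;
  renderer.scheduledDeferredCallback = null;
  if (callback !== null) {
    callback({timeRemaining: () => Infinity});
  }
}

// After: keep flushing until no more work is scheduled, which also surfaces
// any callback scheduled redundantly while idle work was already in progress.
function flushUntilIdle(renderer) {
  while (renderer.scheduledDeferredCallback !== null) {
    const callback = renderer.scheduledDeferredCallback;
    renderer.scheduledDeferredCallback = null;
    callback({timeRemaining: () => Infinity});
  }
}
```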
The tests only assert that the output of the tree is consistent after each action, and that the final result after all work has flushed is the expected value. They do not assert how work is reused, which means they can't detect starvation issues. However, this also means they are fairly resilient to changes in our incremental algorithm.
The tests currently fail, which we would expect, given the known problems with the progressed work model.
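As a rough sketch of that test structure (every helper name below is a placeholder for illustration, not the PR's actual code), each iteration applies a random action, flushes an arbitrary amount of work, and checks only invariants:

```js
// Hypothetical shape of one fuzz-test run: assert tree consistency after each
// action and the final output at the end, never how work was scheduled or reused.
it('stays consistent under random interleavings', () => {
  for (let step = 0; step < 100; step++) {
    performRandomAction();          // e.g. update a counter, toggle a subtree
    flushRandomAmountOfWork();      // stop at an arbitrary point mid-render
    assertConsistentTree(getActiveTriangle(), getExpectedCounter());
  }
  flushAllRemainingWork();
  expect(getRenderedOutput()).toEqual(getExpectedOutput());
});
```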