
[Flight][Fizz] schedule flushing independently from performing work #28900

Closed
wants to merge 2 commits into from

Conversation

gnoff
Collaborator

@gnoff gnoff commented Apr 24, 2024

Today, work that pings in a microtask in Flight and Fizz ends up being rendered in a new macrotask. This is potentially sub-optimal for cache-locality reasons, but it also has an impact on flushing, because flushes always happen synchronously after rendering. Ideally, tasks that can ping in a microtask would get a chance to render before we flush: we can end up with a more compact wire format for Flight, and we can potentially avoid showing fallbacks for Fizz.

This commit doesn't actually implement rendering in microtasks; that will come in a follow-up change. What this change does is refactor the flushing controls so that flushing can be scheduled in a macrotask separately from the render phase. The approach uses a notion of epochs to allow scheduled work to infer whether it is still valid. For instance, if Float causes a flush to be enqueued but the next task is a ping leading to a render, we don't necessarily want to flush before additional renders are allowed to complete. Additionally, if a flush is already scheduled, we don't want an enqueued flush from Float to trigger an additional flush.
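The epoch idea can be sketched as follows. This is an illustrative simulation, not the actual React implementation: the names (`createRequest`, `enqueueFlush`, `performWork`) and the explicit `macrotasks` queue standing in for the host scheduler are assumptions made for the example.

```javascript
// Stand-in for the host's macrotask queue so the example is deterministic.
const macrotasks = [];
const scheduleWork = (cb) => macrotasks.push(cb);

// Hypothetical request shape: each render or flush bumps `epoch`.
function createRequest() {
  return { epoch: 0, flushed: 0 };
}

// A scheduled flush captures the epoch at scheduling time and bails out
// if a render or another flush has happened since (the epoch moved on).
function enqueueFlush(request) {
  const epochAtScheduling = request.epoch;
  scheduleWork(() => {
    if (request.epoch !== epochAtScheduling) return; // stale: skip this flush
    request.epoch++;
    request.flushed++;
  });
}

// Starting a new work loop invalidates any previously scheduled flush.
function performWork(request) {
  request.epoch++;
}
```

So if Float enqueues a flush but a ping triggers a render before the flush task runs, the flush observes a newer epoch and does nothing, matching the "don't flush before additional renders are allowed to complete" behavior described above.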

The request's flushScheduled property is now more narrowly tailored: it represents out-of-band flushes enqueued through Float, not the general-purpose flushing that is done after every work loop.

The request now has an epoch property, which can be used to determine whether a new work loop or flush has started since a task was scheduled.

In some environments scheduleWork is synchronous. All the logic still makes sense for synchronous work, even if it carries unnecessary overhead (such as checking whether we're in the same epoch, which is necessarily true in sync mode). However, sync mode is mostly useful for legacy renderers like renderToString, and we should probably move away from it for the browser build anyway, so I don't think we ought to concern ourselves with further optimization of the sync case.
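The narrowed flushScheduled flag could look roughly like this. A minimal sketch under assumed names (`flushResources` and the callback parameters are illustrative, not the PR's actual signatures): repeated Float calls within one task enqueue at most one out-of-band flush.

```javascript
// Hypothetical: dedupe out-of-band flushes triggered by Float.
// `scheduleWork` and `flush` are injected so the sketch is self-contained.
function flushResources(request, scheduleWork, flush) {
  if (request.flushScheduled) return; // a Float-initiated flush is already queued
  request.flushScheduled = true;
  scheduleWork(() => {
    request.flushScheduled = false;
    flush(request);
  });
}
```

The flag guards only these Float-initiated flushes; the flush that follows every work loop happens unconditionally and doesn't consult it.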

@facebook-github-bot facebook-github-bot added CLA Signed React Core Team Opened by a member of the React Core Team labels Apr 24, 2024
@react-sizebot

react-sizebot commented Apr 24, 2024

Comparing: 6f18664...1c0a3f2

Critical size changes

Includes critical production bundles, as well as any change greater than 2%:

| Name | +/- | Base | Current | +/- gzip | Base gzip | Current gzip |
| --- | --- | --- | --- | --- | --- | --- |
| oss-stable/react-dom/cjs/react-dom.production.js | = | 497.71 kB | 497.71 kB | = | 88.93 kB | 88.93 kB |
| oss-experimental/react-dom/cjs/react-dom.production.js | = | 504.00 kB | 504.00 kB | = | 89.95 kB | 89.95 kB |
| facebook-www/ReactDOM-prod.classic.js | = | 591.14 kB | 591.14 kB | = | 103.91 kB | 103.91 kB |
| facebook-www/ReactDOM-prod.modern.js | = | 566.95 kB | 566.95 kB | = | 100.12 kB | 100.12 kB |
| test_utils/ReactAllWarnings.js | Deleted | 64.67 kB | 0.00 kB | Deleted | 16.09 kB | 0.00 kB |

Significant size changes

Includes any change greater than 0.2%:

| Name | +/- | Base | Current | +/- gzip | Base gzip | Current gzip |
| --- | --- | --- | --- | --- | --- | --- |
| oss-stable-semver/react-server/cjs/react-server-flight.development.js | +0.57% | 86.93 kB | 87.42 kB | +0.92% | 20.22 kB | 20.40 kB |
| oss-stable/react-server/cjs/react-server-flight.development.js | +0.57% | 86.93 kB | 87.42 kB | +0.92% | 20.22 kB | 20.40 kB |
| oss-experimental/react-server/cjs/react-server-flight.development.js | +0.44% | 113.28 kB | 113.77 kB | +0.71% | 24.82 kB | 24.99 kB |
| oss-experimental/react-dom/cjs/react-dom-server-legacy.browser.production.js | +0.43% | 211.13 kB | 212.04 kB | +0.63% | 38.33 kB | 38.58 kB |
| oss-experimental/react-dom/cjs/react-dom-server-legacy.node.production.js | +0.42% | 216.17 kB | 217.08 kB | +0.83% | 40.21 kB | 40.55 kB |
| oss-experimental/react-dom/cjs/react-dom-server.bun.production.js | +0.42% | 215.98 kB | 216.89 kB | +0.81% | 39.32 kB | 39.64 kB |
| oss-stable-semver/react-server-dom-esm/cjs/react-server-dom-esm-server.node.development.js | +0.40% | 123.70 kB | 124.20 kB | +0.63% | 28.59 kB | 28.77 kB |
| oss-stable/react-server-dom-esm/cjs/react-server-dom-esm-server.node.development.js | +0.40% | 123.70 kB | 124.20 kB | +0.63% | 28.59 kB | 28.77 kB |
| oss-stable-semver/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.browser.development.js | +0.39% | 126.61 kB | 127.11 kB | +0.64% | 29.18 kB | 29.37 kB |
| oss-stable/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.browser.development.js | +0.39% | 126.61 kB | 127.11 kB | +0.64% | 29.18 kB | 29.37 kB |
| oss-stable-semver/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.edge.development.js | +0.39% | 127.51 kB | 128.00 kB | +0.59% | 29.42 kB | 29.59 kB |
| oss-stable/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.edge.development.js | +0.39% | 127.51 kB | 128.00 kB | +0.59% | 29.42 kB | 29.59 kB |
| oss-stable-semver/react-server-dom-webpack/cjs/react-server-dom-webpack-server.browser.development.js | +0.38% | 129.06 kB | 129.56 kB | +0.59% | 29.85 kB | 30.02 kB |
| oss-stable/react-server-dom-webpack/cjs/react-server-dom-webpack-server.browser.development.js | +0.38% | 129.06 kB | 129.56 kB | +0.59% | 29.85 kB | 30.02 kB |
| oss-stable-semver/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js | +0.38% | 129.44 kB | 129.93 kB | +0.60% | 29.91 kB | 30.08 kB |
| oss-stable/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js | +0.38% | 129.44 kB | 129.93 kB | +0.60% | 29.91 kB | 30.08 kB |
| oss-experimental/react-dom/cjs/react-dom-server.browser.production.js | +0.38% | 235.50 kB | 236.40 kB | +0.44% | 41.22 kB | 41.40 kB |
| oss-stable-semver/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.unbundled.development.js | +0.38% | 129.86 kB | 130.36 kB | +0.63% | 29.59 kB | 29.78 kB |
| oss-stable/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.unbundled.development.js | +0.38% | 129.86 kB | 130.36 kB | +0.63% | 29.59 kB | 29.78 kB |
| oss-stable-semver/react-dom/cjs/react-dom-server-legacy.browser.production.js | +0.38% | 196.35 kB | 197.09 kB | +0.11% | 36.25 kB | 36.29 kB |
| oss-stable/react-dom/cjs/react-dom-server-legacy.browser.production.js | +0.38% | 196.38 kB | 197.12 kB | +0.11% | 36.28 kB | 36.32 kB |
| oss-stable-semver/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.unbundled.development.js | +0.37% | 132.15 kB | 132.65 kB | +0.61% | 30.18 kB | 30.36 kB |
| oss-stable/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.unbundled.development.js | +0.37% | 132.15 kB | 132.65 kB | +0.61% | 30.18 kB | 30.36 kB |
| oss-stable-semver/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.development.js | +0.37% | 132.50 kB | 133.00 kB | +0.60% | 30.40 kB | 30.58 kB |
| oss-stable/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.development.js | +0.37% | 132.50 kB | 133.00 kB | +0.60% | 30.40 kB | 30.58 kB |
| oss-stable-semver/react-dom/cjs/react-dom-server-legacy.node.production.js | +0.37% | 200.84 kB | 201.58 kB | +0.27% | 38.01 kB | 38.12 kB |
| oss-stable/react-dom/cjs/react-dom-server-legacy.node.production.js | +0.37% | 200.86 kB | 201.61 kB | +0.27% | 38.04 kB | 38.14 kB |
| oss-stable-semver/react-dom/cjs/react-dom-server.bun.production.js | +0.37% | 199.60 kB | 200.34 kB | +0.35% | 37.08 kB | 37.21 kB |
| oss-stable/react-dom/cjs/react-dom-server.bun.production.js | +0.37% | 199.63 kB | 200.37 kB | +0.35% | 37.10 kB | 37.23 kB |
| oss-stable-semver/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.development.js | +0.37% | 134.79 kB | 135.28 kB | +0.58% | 31.01 kB | 31.19 kB |
| oss-stable/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.development.js | +0.37% | 134.79 kB | 135.28 kB | +0.58% | 31.01 kB | 31.19 kB |
| facebook-www/ReactDOMServer-prod.modern.js | +0.36% | 204.18 kB | 204.93 kB | +0.21% | 37.40 kB | 37.48 kB |
| facebook-www/ReactDOMServer-prod.classic.js | +0.36% | 204.84 kB | 205.58 kB | +0.13% | 37.61 kB | 37.66 kB |
| oss-stable-semver/react-dom/cjs/react-dom-server.browser.production.js | +0.35% | 210.84 kB | 211.57 kB | +0.01% | 38.30 kB | 38.30 kB |
| oss-stable/react-dom/cjs/react-dom-server.browser.production.js | +0.35% | 210.86 kB | 211.60 kB | = | 38.32 kB | 38.33 kB |
| oss-experimental/react-server-dom-esm/cjs/react-server-dom-esm-server.node.development.js | +0.32% | 153.60 kB | 154.09 kB | +0.51% | 33.92 kB | 34.10 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.browser.development.js | +0.32% | 156.31 kB | 156.80 kB | +0.51% | 34.45 kB | 34.63 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.edge.development.js | +0.31% | 157.88 kB | 158.37 kB | +0.48% | 34.92 kB | 35.09 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-server.browser.development.js | +0.31% | 158.76 kB | 159.25 kB | +0.50% | 35.12 kB | 35.30 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.unbundled.development.js | +0.31% | 159.76 kB | 160.26 kB | +0.52% | 34.93 kB | 35.11 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-server.edge.development.js | +0.31% | 159.81 kB | 160.30 kB | +0.46% | 35.42 kB | 35.58 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.unbundled.development.js | +0.31% | 162.05 kB | 162.55 kB | +0.47% | 35.53 kB | 35.69 kB |
| oss-experimental/react-server-dom-turbopack/cjs/react-server-dom-turbopack-server.node.development.js | +0.30% | 162.40 kB | 162.90 kB | +0.48% | 35.76 kB | 35.93 kB |
| oss-experimental/react-server-dom-webpack/cjs/react-server-dom-webpack-server.node.development.js | +0.30% | 164.69 kB | 165.18 kB | +0.48% | 36.36 kB | 36.53 kB |
| test_utils/ReactAllWarnings.js | Deleted | 64.67 kB | 0.00 kB | Deleted | 16.09 kB | 0.00 kB |

Generated by 🚫 dangerJS against 1c0a3f2

Comment on lines +3743 to +3745
<head>
<link rel="preconnect" href="foo" />
</head>
Collaborator Author


This demonstrates a small semantic difference in this PR: the foo preconnect resolves in a microtask, which now runs before the flush. I updated the test to perform an additional preconnect in a new macrotask, and it demonstrates that there is no attempted write to the stream after the render has completed.


@sebmarkbage sebmarkbage left a comment


While I realize that the mathematical chances of a service running long enough to overshoot the safe integer range here are basically nil (on the order of 1000 years), it feels a bit hacky. Is a running number really necessary to express this logic?
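The rough lifespan of such a counter is easy to check. A back-of-envelope calculation (the increment rates below are assumptions for illustration, not measurements):

```javascript
// How long until a per-task counter reaches Number.MAX_SAFE_INTEGER?
const MAX = Number.MAX_SAFE_INTEGER;     // 2^53 - 1 ≈ 9.007e15
const secondsPerYear = 365 * 24 * 3600;  // ≈ 3.154e7

// Years of continuous running at a given increment rate.
const yearsAt = (incrementsPerSecond) =>
  MAX / incrementsPerSecond / secondsPerYear;

// At one increment per microsecond (1e6/s): ~285 years.
// At one increment per millisecond (1e3/s): ~285,000 years.
```

Either way the counter comfortably outlives any single server-rendering request, which is the intuition behind the "basically nil" remark.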

export function startWork(request: Request): void {
request.flushScheduled = request.destination !== null;
if (supportsRequestStorage) {
scheduleWork(() => requestStorage.run(request, performWork, request));
requestStorage.run(request, startPerformingWork, request);
Collaborator


I don’t quite trust this thing here. It assumes a lot about the scheduling correctly following the local storage implementation: 1) that it’s implemented as AsyncLocalStorage at all, which might not be the case; 2) that the scheduling is implemented using something that follows the storage; 3) that the scheduling isn’t batched across requests.

Collaborator Author


Yeah, this makes sense. I also think I should then run the pings inside request storage explicitly too. We're sort of relying on them being carried in an async context implicitly today, but there isn't any harm (other than a little perf) in running them in a scope explicitly.

@gnoff
Collaborator Author

gnoff commented Apr 24, 2024

Is a running number really necessary to express this logic?

Not for this PR, but I expect it to be necessary for the follow-up (which maybe should land together with this one), which is to do some microtask pings synchronously rather than scheduling a new work task. We could loosen our definition of a "microtask ping" to be anything that pings after the main work task but before the next flush task, but this is quite racy, and while it might not have any practical downsides, it doesn't model what I'm trying to model.

If you're wanting to support renders that run longer than 1000 years, I can mod the epoch. There is still a chance of collision, of course, but then it's a rarity of rarities; I was satisfied by the maximum allowed running time. At some point you'd also run out of IDs, though you'd probably go through those a bit slower than task counters 🙃
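A modular epoch, as mentioned, might be sketched like this (the modulus and helper names are assumptions for illustration). Equality checks for staleness keep working across wrap-around, at the cost of the rare collision described above: a stale task would be misjudged as current only if the counter wrapped all the way back to its captured value.

```javascript
// Hypothetical wrapping epoch: bump modulo a fixed bound instead of
// counting up forever toward Number.MAX_SAFE_INTEGER.
const EPOCH_MODULUS = 0x40000000; // 2^30, an arbitrary assumed bound

const nextEpoch = (epoch) => (epoch + 1) % EPOCH_MODULUS;
```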

@gnoff
Collaborator Author

gnoff commented Apr 24, 2024

I pushed an update that models this using a schedule state. I think there are still some timing implications once we move to having microtask pings retry synchronously, because we won't be able to tell the difference between a ping in a microtask from the current work macrotask and a ping in a macrotask that is interleaved between the work macrotask and the flush macrotask.

That said, this may be somewhat arbitrary. It may be rare, and it just shifts a bit what is flushed, but the semantics should still hold. The one case that might be problematic is if it is possible for an IO task to ping before the first flush. We may want to favor flushing the first work loop without allowing any IO to ping first. I don't think this can happen in Node.js, because I believe setImmediate has the right behavior, but I'm not confident about this for edge, because it isn't clear to me that consecutive setTimeout tasks won't be interleaved. Maybe we can use postMessage or some other scheduling mechanism in edge that gives us greater confidence in the ordering.
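One way to sidestep the host's interleaving behavior is to maintain our own FIFO queue on top of a single host tick (postMessage, setImmediate, or similar): once the tick fires, all queued callbacks drain back-to-back with nothing interleaved between them. This is a sketch of the general technique, not the PR's implementation; `createWorkQueue` and its shape are assumptions.

```javascript
// FIFO work queue layered over one injected host primitive
// (`scheduleTick` might be setImmediate, or a postMessage-based tick).
function createWorkQueue(scheduleTick) {
  const queue = [];
  let tickScheduled = false;
  function drain() {
    tickScheduled = false;
    // Everything queued before the tick runs consecutively, so no
    // foreign task (e.g. an IO ping) can land between work and flush.
    while (queue.length) queue.shift()();
  }
  return {
    schedule(cb) {
      queue.push(cb);
      if (!tickScheduled) {
        tickScheduled = true;
        scheduleTick(drain);
      }
    },
  };
}
```

The trade-off is that this forces work and flush into the same drain; it guarantees ordering, but gives up the separate-macrotask scheduling this PR introduces, so it would only suit the cases where strict ordering matters more.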

@gnoff
Collaborator Author

gnoff commented Jun 1, 2024

closing in favor of #29491

@gnoff gnoff closed this Jun 1, 2024
@gnoff gnoff deleted the refactor-performwork branch July 25, 2024 22:47