Proposal: ReactDOMServer render to Stream #6420
Thanks for following up and filing this. I haven't looked at your code in detail, but I am generally positive on this. It sounds like serious users of server rendering (which include none of us on the core team) would all appreciate this feature. We should get feedback from some of those people before settling on an API.

I would prefer to have parallel codepaths rather than degrade the performance of renderToString. It shouldn't require serious changes: I believe that React[Server]ReconcileTransaction and ReactDOMComponent are really the only modules that have knowledge of the server-rendering difference, so there shouldn't be much to split. In particular, none of our "isomorphic" modules know about server rendering because it is a DOM-specific concept (ex: React Native does not support server rendering).
At Yahoo, we do a significant amount of server rendering with React and are extremely interested in getting streaming support in the core. As such, I am more than willing to help out in any way possible. I have done some investigation into render streaming and have also looked at your implementation.

I think the best course of action (and the recommendation by @sebmarkbage) was to move the recursion out of the individual mountComponent implementations. I did some initial work on hoisting the recursion in #6011. Centralizing the recursion as in that PR would make it much easier to support streaming and asynchrony in the future.

It is difficult to commit to working on this further without buy-in from the core team and more information on what is involved in #6170.
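To make "hoisting the recursion" concrete, here is a rough sketch of the general idea (illustration only, not React's actual internals; renderToChunks, the element shape, and the emit callback are all invented for this example). A single driver walks an explicit stack instead of each mount method recursing into its children, which is what makes it possible to pause, resume, or emit output incrementally.

```js
// Illustrative sketch only, not React's actual mountComponent code path.
// Elements here are plain objects ({ tag, children }) or strings for text.
// A central driver walks an explicit stack instead of recursing, so the
// traversal could be paused (e.g. when a stream's buffer is full) and resumed.
function renderToChunks(rootElement, emit) {
  const stack = [{ element: rootElement, childIndex: 0, opened: false }];

  while (stack.length > 0) {
    const frame = stack[stack.length - 1];
    const el = frame.element;

    if (typeof el === 'string') {
      emit(el); // text node: emit it and pop immediately
      stack.pop();
      continue;
    }

    if (!frame.opened) {
      emit('<' + el.tag + '>');
      frame.opened = true;
    }

    if (frame.childIndex < el.children.length) {
      // Push the next child; the loop processes it before returning to this frame.
      stack.push({ element: el.children[frame.childIndex++], childIndex: 0, opened: false });
    } else {
      emit('</' + el.tag + '>');
      stack.pop();
    }
  }
}

// Example: emits "<div>Hello, <b>world</b></div>" one chunk at a time.
renderToChunks(
  { tag: 'div', children: ['Hello, ', { tag: 'b', children: ['world'] }] },
  chunk => process.stdout.write(chunk)
);
```

With the traversal in one place like this, the driver can stop whenever the output buffer is full and pick up where it left off, which is exactly what a streaming renderer needs.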
So great to hear, thanks!
Agreed. Other than this issue, and outreach I will do to
Good feedback, thanks.
I think it's a bit more widespread than the code that has knowledge of server-rendering; the issue is that the I'm currently working on a newer version where
Eek. The implementation is... not my proudest achievement; I had to rewrite it from scratch about 4 or 5 times (callbacks, generators, async/await) because perf was often abysmal once I got it working.
I think I did something like this in my most recent experimental WIP branch (as I said above, that's where most of the interesting stuff is). This experimental branch passes nearly all the tests (it doesn't do checksums yet, but I know how to do that from previous work), but it's about 50-100% slower for last byte than the current renderToString.
Thanks for pointing that out; I didn't know about it at all. Is there any info on what the algorithm rework looks like?
Tentative plan:
Does this sound right? Did I miss something? Other notes:
This would probably be very slow and would likely require a bunch of internal refactors to even make possible.
We barely need this at all now, but I kept it intentionally so we can distinguish React-rendered nodes from nodes that came from elsewhere (from browser extensions, for instance).
Nodes from elsewhere are going to disturb a React render anyway, and thus get blown away when React mounts into the DOM. I don't think there is much value in knowing where the nodes came from (knowing that the nodes differ is sufficient), but there is a disadvantage to being forced to maintain the data-reactid attributes.
That's what I remember, yes.
We talked about this a while ago, and we weren't entirely sure how slow it would be, but I agree it's the biggest risk in the idea. @jimfb argued that it would be at least the same big-O as what is done currently when rendering on top of server markup, but the question is how much worse reading DOM tag names and attributes is than constructing a giant string. I'm worried that it might be MUCH worse, but we will see.
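For a sense of the per-node cost being debated, the DOM-walking approach boils down to checks like the following on every node, versus the checksum approach's single string comparison. This is a simplified sketch, not React's actual validation code; nodeMatches and the expected-node shape are invented for illustration.

```js
// Simplified illustration of the per-node work in the DOM-walking approach.
// `expected` is a made-up plain description of what React thinks it rendered,
// e.g. { tag: 'a', attrs: { href: '/home', class: 'nav' } }.
function nodeMatches(domNode, expected) {
  if (domNode.nodeType !== Node.ELEMENT_NODE) return false;
  if (domNode.tagName.toLowerCase() !== expected.tag) return false;
  for (const name of Object.keys(expected.attrs)) {
    // Reading attributes back out of the DOM is the cost in question.
    if (domNode.getAttribute(name) !== String(expected.attrs[name])) return false;
  }
  return true;
}
```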
This, and slowness, is what worries me most. Any pointers on where to start looking? It would also be helpful to know where the logic lives that checks whether a component is a React.createClass class, a React.Component subclass, or an SFC.
@goatslacker also, thanks for writing this up! 👍

FYI, I'm trying to hack my way through a rough cut of step 1 today. I'm in the middle of it right now, and it's definitely a beast. I've got a few dozen new unit tests passing, but a bunch are not.
src/renderers/shared/reconciler/ReactCompositeComponent.js (e.g. #5884)
I think an important piece of this is building a rough spec + test suite that can be run against external renderers and is also run against React core, to ensure that breakages are properly communicated before being published. This should cover:

This could be done similarly to ECMA's test262 repo, and breakages and additions could be communicated in the GitHub repo that holds these tests. This is really exciting!
Quick update on the potential speed of reconnecting to the DOM by walking the DOM rather than using a checksum (step 1). I have a branch that does this now, although it's by no means complete (it needs work on interactivity, a bunch more unit tests, 3 existing unit tests fail, and the code is not as readable as I'd like). It is complete enough to do a really rough comparison of performance between the two strategies, though, which I started this morning.

My very preliminary and very rough observation is that DOM walking is about 20% slower than checksum in Safari/OSX and about 40% slower in Chrome/OSX. This is obviously not ideal (faster is better!), but it's also not an order of magnitude increase. It's also not a giant amount of absolute time: in my (once again, rough) test on a 635KB render, the difference amounted to 20-40ms. When I tested a 107KB render, I couldn't detect a difference between the two results with ad-hoc testing. It's also worth noting that I haven't done any perf optimization on the branch.

I'm still concerned about the potential for a bad perf regression, particularly given that different browsers can have wildly different perf characteristics. I think the best path forward is to polish up my test, make it automatable via Selenium, and run it enough times to get statistical significance on a variety of platforms. I think good coverage would look something like:
Does that make sense? Do folks agree? Also: How much perf regression on client reconnect to server markup would you accept, if any?
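A rough ad-hoc measurement of the client reconnect step can be as simple as timing the initial render on top of the server markup. The snippet below is a sketch only, not the actual test harness; App and the #root container are placeholders.

```js
// Rough sketch: time how long React takes to (re)connect to server-rendered
// markup, using React 15-era APIs. "App" and "#root" are placeholders.
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

const container = document.getElementById('root'); // already filled with server HTML

const start = performance.now();
ReactDOM.render(React.createElement(App), container, () => {
  // The callback fires after React has mounted on top of the existing markup.
  console.log('reconnect took', (performance.now() - start).toFixed(1), 'ms');
});
```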
Great work! Getting better numbers makes a lot of sense to me.
I think the answer might depend, at least partly, on how much faster we think we can make server rendering with the optimizations we discussed.
Agreed. Even if connecting to server markup on the client is slower, it's conceivable that the perf gains from speeding up server rendering and reducing page weight by removing the data-reactid attributes would more than make up for it.
@aickin Great work! This is amazing!

Obviously we do care about performance in all browsers, but if you have perf metrics for a couple of browsers (e.g. Chrome + Firefox), my intuition is that that's sufficient to understand the likely ramifications of your changes. Obviously feel free to collect more data if you think it's desirable/impactful, but your effort might be better spent optimizing and fixing bugs. Your call. We can always go collect more perf metrics later if such metrics are needed to make a decision.

Personally, I think 20-40% is fine, especially since there are so many benefits to this approach. We can give better error messages. We can remove all the complicated data-id lookup logic. We can remove the data-ids from the markup, making the markup smaller. We can have a faster/optimized/dedicated SSR renderer (I predict at least 2-5x improvements there anyway, which will annihilate the additional 20-40% remount cost). Etc. That said, it's ultimately a team/community decision, and we should still do everything we can to make it even faster and get better numbers!

I think I would also be curious to hear how the performance compares with an initial pass that was rendered without server-side markup (i.e. the createElement path).
I think it will be difficult to get this off the ground if you try to make it better in phase 1. Just getting to a point where the code is decoupled while still being maintainable is going to be hard enough. As a first goal, I'd suggest we aim for feature/implementation parity with what we have today.
Sage advice. I wanted to know that it wasn't going to be like 5x worse to walk the DOM before investing a ton more time in this strategy, and I feel pretty confident that's the case now (Selenium-izing the tests shows between 5-70% worse perf, depending on platform). Back to working on correctness! |
I just opened PR #6618, which attempts to tackle the correctness side of stage 1. Please feel free to comment there. Thanks! |
I've been thinking about walking the DOM to do validation, and whether it's more beneficial to create a virtual DOM based off of the existing DOM instead of just validating. That way we would be able to patch the server-rendered DOM to be equivalent to the client-rendered markup instead of doing a wholesale replacement. If we're already traversing the DOM for validation, is it any slower to create a virtual DOM instead? This is similar to what incremental-dom proposes, but in React's case it would only be done for the initial instantiation on top of existing DOM.
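As a rough illustration of the idea (a hypothetical sketch; this is not how React or incremental-dom actually implements it, and domToTree is a name invented here), the same walk used for validation could instead record what it finds as a lightweight tree, which could then be diffed and patched rather than discarded.

```js
// Hypothetical sketch: build a lightweight tree description from the existing
// server-rendered DOM during a single walk, so it could later be diffed and
// patched against the client-rendered tree instead of being thrown away.
function domToTree(node) {
  if (node.nodeType === Node.TEXT_NODE) {
    return node.nodeValue;
  }
  if (node.nodeType !== Node.ELEMENT_NODE) {
    return null; // ignore comments and other node types in this sketch
  }
  const attrs = {};
  for (const attr of Array.from(node.attributes)) {
    attrs[attr.name] = attr.value;
  }
  return {
    tag: node.tagName.toLowerCase(),
    attrs,
    children: Array.from(node.childNodes).map(domToTree).filter(c => c !== null),
  };
}

// Example: const serverTree = domToTree(document.getElementById('root'));
```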
render to stream and |
@aickin @goatslacker @spicyj @sebmarkbage @lencioni do you have an update on this? It is a feature we are very interested in, and we would like to move it forward. Is there anything we can do to help with the feature?
We are currently working on Fiber, which is a complete reimplementation of React core. Fiber should make it easier to create a separate streaming server renderer that would be focused on performance. This won't be our initial focus, but as Fiber becomes more stable I'm sure there will be a lot more space for such community contributions.
Thanks a lot for the update, @gaearon. We had to migrate to markojs because React isn't feasible for isomorphic rendering, but we will definitely check Fiber out when it comes out.
too bad we don't have this |
So is it correct to assume there is currently no way to have a server render to stream in React v15.x? |
@isaachinman: There is https://github.com/aickin/react-dom-stream and https://github.com/FormidableLabs/rapscallion, but I haven't tried them personally.
React 16 will likely include a streaming renderer that @tomocchino has been working on. We don't plan to add one to React 15. |
@spicyj: Neat. I didn't know @tomocchino was hacking. |
It’s landed. |
Please provide documentation.
It’s in beta. We add documentation when something becomes stable and part of the official release. The 16 release has not happened yet. |
Btw that documentation is currently inaccurate. The APIs are exported directly from react-dom/server:

```js
import { renderToStream } from 'react-dom/server';
// or
import { renderToStaticStream } from 'react-dom/server';
```

Available since 16 beta 3. We'll fix the docs before the release.
tl;dr: I'd like to know how much enthusiasm there is on the core React team for accepting a PR with the ability to render markup to a stream. I don't need a guarantee, of course, but I'd like to get a sense of whether it's something the React team might do before spending a ton of time working on it.
Background
Currently, server rendering is accomplished by ReactDOMServer.renderToString or ReactDOMServer.renderToStaticMarkup, which are synchronous functions that return a string.

The server render methods can be somewhat slow. Large (~500K) web pages can easily take over 100 or 200ms to render, depending on the server. Also, since Node is single threaded and single-core performance has not been improving rapidly, this is not just a problem that Moore's Law will eventually solve. The slowness in turn causes two problems in production websites with large (or somewhat large) HTML pages: time to first byte grows with the size of the page, because nothing can be sent until the entire string has been rendered, and because rendering is synchronous, one large page render blocks the event loop and starves out smaller renders.
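For reference, the current synchronous flow looks roughly like this in a bare Node server (a minimal sketch; App and the document shell are placeholders):

```js
// Minimal sketch of the current synchronous pattern (React 15-era API).
// The whole page is rendered into one string before any bytes are sent,
// and the event loop is blocked for the duration of the render.
// "App" and the document shell below are placeholders.
import http from 'http';
import React from 'react';
import ReactDOMServer from 'react-dom/server';
import App from './App';

http.createServer((req, res) => {
  const html = ReactDOMServer.renderToString(React.createElement(App));
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<!DOCTYPE html><div id="root">' + html + '</div>');
}).listen(3000);
```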
Proposed Solution
The proposal is to add two methods to ReactDOMServer, both of which return a Node Readable stream. In terms of behavior, these methods would return a Readable stream that outputs the same markup as renderToString and renderToStaticMarkup, respectively.

These methods could solve both problems above. A stream can begin emitting content immediately to send to the browser, keeping TTFB constant as components get larger. Streams are also inherently asynchronous, which allows large pages to not starve out smaller ones.
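Usage might look roughly like the following, assuming the renderToStream name that comes up later in this thread (a sketch of the idea, not a finalized signature; App is a placeholder component):

```js
// Sketch of how the proposed streaming API might be used; the method name
// renderToStream is an assumption taken from later in this thread, not a
// finalized API. Markup is piped to the response as it is produced, so the
// first bytes go out immediately and the event loop is free between chunks.
import http from 'http';
import React from 'react';
import ReactDOMServer from 'react-dom/server';
import App from './App'; // placeholder component

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.write('<!DOCTYPE html><div id="root">');

  const stream = ReactDOMServer.renderToStream(React.createElement(App)); // hypothetical
  stream.pipe(res, { end: false });
  stream.on('end', () => {
    res.end('</div>');
  });
}).listen(3000);
```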
A few months back I forked React into a library called react-dom-stream to do this exact thing, and it's been fairly well received (1,100+ stars on GitHub, though modest actual downloads on npm). There's also a wishlist bug thread on this repo about it. However, maintaining a fork is a less-than-desirable position, and I wonder if I can get it integrated into the core code base.
My implementation in react-dom-stream is not ideal; I ended up copying basically every implementation of mountComponent and making the copied versions asynchronous. The benefit of this approach was that it didn't affect renderToString performance, but the obvious drawback was that any code changes or bug fixes would have to be performed in both forks of the code, which is not ideal.

Recently, I figured out a way to share code in a better way between renderToString and renderToStream, but the result was that renderToString slowed down relatively significantly (+30%, plus or minus). I'm trying to figure out if I can tune it more carefully, but it's not looking great yet.

Questions
So I think the questions I'd like to ask the team are whether this is something you would consider accepting into core, and whether you would prefer a parallel streaming codepath or shared code at some cost to renderToString performance.
Thanks for your time, and thanks for all the great work you do on React!

(Tagging @spicyj, @gaearon, and @sebmarkbage, as I was instructed to do on the React IRC channel.)