
Streaming renderToStaticMarkup #3009

Closed
jussi-kalliokoski opened this issue Feb 1, 2015 · 64 comments

Comments

@jussi-kalliokoski

I've been messing around with trying to get my current project to cater to the critical rendering path (i.e. the first ~14kb of data) ASAP. Our landing page is quite heavy in content and takes around 200ms for React.renderToStaticMarkup() to process, which is not the time we want users to wait before they can get something on their screens. Currently I'm doing it so that it res.write()s everything up until the <body> tag first, then the content, finishing up with a bundled JSON dump of the data the server used to render the page and the ~1kb of JS for webpack bootstrapping. This gets me somewhere, as in you get background color and title in the first roundtrip, but no content.
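The manual splitting described above can be sketched roughly like so (a minimal sketch: `serveWithEarlyFlush` and its arguments are hypothetical names, `res` is any writable with `write()`/`end()` such as an Express response, and `render()` stands in for the expensive `React.renderToStaticMarkup()` call):

```javascript
// Hypothetical sketch of the manual early-flush approach described above.
// The head is written (and can be flushed) before the expensive render
// runs, so the first roundtrip carries the background color and title.
var HEAD =
  '<!doctype html><html><head><title>Landing</title>' +
  '<style>body{background:#fafafa}</style></head><body>';

function serveWithEarlyFlush(res, render) {
  res.write(HEAD);                  // goes out before rendering starts
  var body = render();              // expensive React render happens here
  res.end(body + '</body></html>'); // content + closing tags in chunk two
}
```

In a real app a third chunk would carry the JSON data dump and the webpack bootstrap JS mentioned above.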

It would be neat if there was a version of React.renderToStaticMarkup() that returned a node stream, and an option to specify some function that tells the number of bytes to render before pushing stuff downstream, for example:

var chunksWritten = 0;
React.renderToStaticMarkupStream({
    getChunkSize: function () {
        if ( chunksWritten++ === 0 ) {
            // first push the first 13kb
            return 13 * 1024;
        } else {
            // push the rest in one big chunk
            return -1;
        }
    },
}).pipe(res);
@brigand
Contributor

brigand commented Feb 2, 2015

I'm pretty sure this would require a huge rewrite for minimal gain. If you're not already caching the output and serving it from a cdn, you'll get much better results from that.

@zpao
Member

zpao commented Feb 2, 2015

Yea, this would be a pain BUT I don't want to write it off completely. Streaming markup to a browser is definitely a thing that people do and our server-side rendering story isn't great.

It might turn out that it's not realistic with the way React is architected but if somebody wants to investigate, I'd be interested in hearing how it goes.

@jussi-kalliokoski
Author

If you're not already caching the output and serving it from a cdn, you'll get much better results from that.

Unfortunately we're already using caches and hitting a CDN for pretty much everything we can, however the views have too much variation per country, screen size (we are estimating screen sizes based on UA and session on the server side) and user-specific information to benefit from caching.

Would be interesting to see experiments about this at least, even if they never made it to core. :)

@syranide
Contributor

syranide commented Feb 3, 2015

@jussi-kalliokoski @zpao This should be quite simple for now, #1542... but it might not be in the future...?

@zpao
Member

zpao commented Feb 3, 2015

@syranide yea, I was thinking of that PR. But even that has the big assumption that it's all flushed at once. I haven't worked with streaming markup and don't know what might have to change. Though if we're only doing it for renderToStaticMarkup it might not actually matter too much.

@syranide
Contributor

syranide commented Feb 3, 2015

@zpao Well it's built like that, but replace array.push() with something internal and voila...?

jussi-kalliokoski added a commit to jussi-kalliokoski/react that referenced this issue Feb 4, 2015
Allows feeding a write callback for renderToStaticMarkup().
@jussi-kalliokoski
Author

I made something based on what @syranide did in #1542 to try it out. As a quick benchmark, I fed in res.write as the callback in my application, and on my MBP the first ~13kb of the landing view was pushed downstream within <100ms, whereas when rendering the whole document at once, it would be flushed downstream at ~280ms.

@aaronshaf

Seems like this would be great for renderToString too, not just renderToStaticMarkup.

@busticated

a synchronous .renderToString() that takes > 50ms for trivial view hierarchies seems super-problematic.

are people just not hitting this or is there an obvious work-around (besides caching) that i'm missing?

@syranide
Contributor

a synchronous .renderToString() that takes > 50ms for trivial view hierarchies seems super-problematic.

@busticated I don't have any hard facts at the moment, but that makes no sense to me. Performance definitely is not that slow; they're probably running the DEV version of React (which is far slower) and there's probably something else that is funky as well. I'm guessing he's not counting the DOM components either and generally has really large components.

Also, caching is and always has been the answer to this, it's not any different with React.

@busticated

@syranide thanks. i should have noted that i'm actually seeing similar numbers. could very well be a result of something silly - config, etc - but afaik all that is required to disable 'DEV' mode server-side is NODE_ENV=production so at least that should be ruled out (side-note: docs here would be nice).

as for component size & count, i need to collect hard data but my views are certainly no more complex than something like --> https://instagram.com/rei/

@jussi-kalliokoski
Author

Performance definitely is not that slow; they're probably running the DEV version of React (which is far slower)

Actually was not, but...

and there's probably something else that is funky as well.

Yes, my benchmark was way off, for multiple reasons. On our production server the render times even for the biggest pages are around 12ms (small sample size with a manual test setup on one node just to see; I'll post more results once I have proper data from monitoring in production). I suspect the main reason my tests were that skewed is the lack of JIT warmup.

I'm guessing he's not counting the DOM components either and generally has really large components.

Yes, there are some pretty large components and when I posted the initial numbers, there were also some string replaces that ran against massive (100kb-500kb) strings, contributing a big chunk of the time spent.

As a side note, I did some more experimentation on the streaming and it caused more trouble than it was worth and I ended up reverting the manual document splitting I was doing (turned out that connect-gzip doesn't like the machines in our environment).

@syranide
Contributor

@jussi-kalliokoski Ah, thanks for the update!

@pablolmiranda

Why not use @syranide proposal at #1542 with a renderToStream method:

function renderToStream(component) {
  ("production" !== process.env.NODE_ENV ? invariant(
    ReactDescriptor.isValidDescriptor(component),
    'renderToStream(): You must pass a valid ReactComponent.'
  ) : invariant(ReactDescriptor.isValidDescriptor(component)));

  var Readable = require('stream').Readable,
    stream = new Readable(),
    processedFragments = 0,
    batchSize = 5,
    transaction;

  var id = ReactInstanceHandles.createReactRootID();
  transaction = ReactServerRenderingTransaction.getPooled(true);

  transaction.perform(function() {
    var componentInstance = instantiateReactComponent(component);
    componentInstance.mountComponent(id, transaction, 0);
  }, null);

  stream._read = function() {
    // The upper bound of the slice is relative to how many fragments
    // have already been pushed, clamped to the total fragment count.
    var end = Math.min(processedFragments + batchSize,
      transaction.markupFragments.length);
    var fragments = transaction.markupFragments.slice(processedFragments, end);

    processedFragments = end;
    stream.push(fragments.join(''));

    if (end === transaction.markupFragments.length) {
      stream.push(null);
      // Release only after the last batch: _read runs after
      // renderToStream has returned, so releasing in a finally block
      // would hand the transaction back to the pool too early.
      ReactServerRenderingTransaction.release(transaction);
    }
  };

  return stream;
}

The idea is to concatenate the string only in chunks, as soon as the stream needs them. This avoids blocking the event loop even for a massive string (100kb-500kb). A more elaborate strategy would take into consideration the buffer size requested by the stream.
@jussi-kalliokoski, does this strategy match your experiments?

@jakearchibald

I show the benefits of streaming responses at https://m.youtube.com/watch?v=d5_6yHixpsQ&feature=youtu.be @ 4:05.

Third party APIs or heavy database requests can cause this.

I used dust.js to work around this, but it'd be great if react could do it.

@jakearchibald

Also, streaming is coming to the browser (https://streams.spec.whatwg.org/), so this could become a benefit on the client too.

@sophiebits
Collaborator

@jakearchibald Can you help me understand how streams could help on the client for us? Would there be some way to set innerHTML to a stream or were you thinking of something else?

@jakearchibald

@spicyj once streams land in the browser, stream-supporting APIs will follow. I'm pretty certain, given the browser can already do it but it's not currently exposed, we'll be able to get a streaming document fragment. Maybe @domenic has discussion links on that?

In the video above I write to innerHTML twice, once with a partial result, and again with the full result. Horribly hacky, but it did improve render time.

@domenic

domenic commented Jun 16, 2015

It's pretty tentative, but the current thoughts are along these lines.

@sophiebits
Collaborator

Thanks for the references.

@AbrahamAlcaina

I did some stress tests on my isomorphic application and I found another issue related to this topic.
By default (on my laptop) express handles 2000-2200 requests/second, but my app only handles 70-100 requests/second.

After profiling it I found that .renderToString()/renderToStaticMarkup() blocks the event loop for 26-34ms, and for this reason express dispatches 20x fewer requests/second.
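This kind of blocking is easy to observe directly: schedule a zero-delay timer, run the synchronous work, and see how late the timer fires. A sketch (the `measureBlocking` helper is hypothetical, not from the report above):

```javascript
// Resolves with how long the event loop was prevented from running the
// zero-delay timer, i.e. roughly the synchronous cost of syncWork().
function measureBlocking(syncWork) {
  return new Promise(function (resolve) {
    var start = Date.now();
    setTimeout(function () { resolve(Date.now() - start); }, 0);
    syncWork(); // a synchronous render here delays the timer by its cost
  });
}
```

Plugging a renderToString call in as `syncWork` would surface delays like the 26-34ms figures reported above.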

@syranide
Contributor

@AbrahamAlcaina Dynamically generating anything complex on-demand is obviously going to be costly; you'll need to implement caching or pre-build static responses yourself.

@AbrahamAlcaina

@syranide caching is always a good idea, but in this case I'm rendering a timeline for a user.
Each user has their own timeline.

It would be a good idea to improve the performance of this method to less than 5ms.

Keep in mind that rendering to a string doesn't need to keep track of changes to state, props, etc. Removing these checks would improve the performance. (idea)

@sophiebits
Collaborator

@AbrahamAlcaina Yes, we hope to implement some optimizations like that.

@brigand
Contributor

brigand commented Aug 20, 2015

@mufumbo it changes the problem from blocking the web server event loop to how well the cpu parallelizes work from multiple processes. If you use a finite pool of processes (not sure if this is the way to go), the worst case is that you develop a backlog. If you do, it means that you need more servers. Because the event loop isn't blocked, you can send the necessary messages to cause autoscaling to kick in.

Maybe I'm wrong about this; I'm mainly a frontend developer.

Edit: to clarify, I'm not saying that the performance is fine, I'm suggesting a way for the performance to not cripple your web servers.

@pablolmiranda

@brigand that would be the same as using node clusters. But with node cluster you don't pay the price of shuttling data between the web server and the queue. And you should be fine to scale, since the web server is stateless.
One solution is to break your page into personalized and non-personalized parts. You can cache the rendered HTML for the non-personalized parts and send it to the client right away, plus the data for the personalized parts. On the client you show the non-personalized parts and render the personalized parts as soon as you finish receiving the data you need for them. That should give your servers a little bit of breathing room.
Of course, the requirements of each product should be taken into consideration. It is not a silver bullet.
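A minimal sketch of that split (all names hypothetical; `renderShell` stands in for the React render of the shared markup, and real code would need to escape `</script>` sequences in the JSON payload):

```javascript
// Cache the non-personalized shell once per route; per-request work is
// reduced to injecting the personalized data as JSON for the client.
var shellCache = new Map();

function pageFor(route, renderShell, userData) {
  var shell = shellCache.get(route);
  if (!shell) {
    shell = renderShell(route); // paid once per route, not per request
    shellCache.set(route, shell);
  }
  var payload =
    '<script>window.__USER__=' + JSON.stringify(userData) + '</script>';
  return shell.replace('<!--personalized-->', payload);
}
```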

@zpao
Member

zpao commented Aug 20, 2015

While I think this is a great discussion, we've wandered way off the topic of "streaming markup". I encourage you to use an alternate venue for this discussion so that we can make sure people watching the repo can maintain a better signal to noise ratio. We set up https://discuss.reactjs.org precisely for discussions like this :)

@mufumbo

mufumbo commented Aug 21, 2015

@jimfb can you help us figure out how to disable the checksum? We can't find it anywhere in the documentation. Also the sanity checks?

@sophiebits
Collaborator

@aickin
Contributor

aickin commented Oct 20, 2015

On the topic of streaming renderToString and renderToStaticMarkup, I wrote a library over the last couple weeks to do that; I'm calling it react-dom-stream:

https://github.com/aickin/react-dom-stream

It's very new, and probably has some issues, but it passes all of the rendering tests that react-dom currently passes. Some preliminary performance microbenchmarks have shown it to massively reduce TTFB for large responses, and in real-world conditions TTLB goes down as well: https://github.com/aickin/react-dom-stream-example

I intend to work more on the library over the coming weeks, hopefully including adding the ability for the server-side render to yield control back to the Node event loop. This should help with an operational difficulty of SSR where long pages can starve out small pages for CPU.

Any feedback would be welcome. Thanks!

ETA: the renderer is actually a fork of react-dom, which I'd be happy to give back to the react project, but I have no idea if they'd want it.

@geekyme

geekyme commented Oct 21, 2015

@aickin excellent! Is there a drawback to using your solution at the moment? (Other than that it is alpha.) I'm happy to help test it. Also, it might help to document what kind of changes you made to the standard ReactDOM to support streamable components.

Regarding yielding control to the event loop, does that mean when streaming a component down, node is able to handle other requests?

@megamaddu

@aickin This is super awesome! Also curious about @geekyme's questions above. I've got a server-rendered app with average response times of 100-200ms that I'd love to try this on when it's stable enough!

@aickin
Contributor

aickin commented Oct 21, 2015

@geekyme I think the core drawback is that it hasn't been battle tested. The core of it is the ReactDOM code, and it passes all the ReactDOM tests, so it's in OK shape, but I'm sure that there are wrinkles to find. Also, if your site only serves up tiny pages (say, 10K), it probably won't do much good, as renderToString probably isn't a big time sink.

Most of the changes are in this commit, and it mostly involves changing methods in the render path to take in a stream rather than return a string. What kind of documentation would you like?

If I implement yielding (which, to be clear, I haven't yet!), yes, it would allow node to switch in between requests. This should be especially good for sites that have a mix of large, slow pages and small, quick pages. On those sites, small pages can get starved for CPU by the big pages.
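The yielding idea can be sketched independently of the library: push one batch of markup per event-loop turn via setImmediate, so other requests can be serviced between batches (all names here are hypothetical):

```javascript
// Write `fragments` out in batches, yielding to the event loop between
// batches so small requests aren't starved by one big render.
function renderInBatches(fragments, batchSize, write, done) {
  var offset = 0;
  (function step() {
    var end = Math.min(offset + batchSize, fragments.length);
    write(fragments.slice(offset, end).join(''));
    offset = end;
    if (offset < fragments.length) {
      setImmediate(step); // let other work run before the next batch
    } else {
      done();
    }
  })();
}
```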

@aickin
Contributor

aickin commented Oct 21, 2015

@spicydonuts Thanks! And is there any way I could convince you to give it a try and send me some feedback? ;)

@megamaddu

@aickin I may just do that : ] Just need to make sure I'm watching for the right issues while it runs. Any recommendations other than watching latency and general exceptions?

@aickin
Contributor

aickin commented Oct 21, 2015

@spicydonuts Great questions! Off the top of my head, I'd say:

  • latency, both TTFB of the React component and TTLB
  • exceptions
  • making sure that you don't get console errors when connecting React on the client side
  • time to first paint in the browser

@zpao
Member

zpao commented Oct 21, 2015

Might be worth moving discussion about your specific project out of here and into that project's issues or a discussion forum like the one @spicyj linked to above to avoid notifying a bunch of people here. The project is relevant to this issue but the ensuing discussion isn't quite. Definitely keep us in the loop when you feel it's stable though.

@aickin
Contributor

aickin commented Oct 21, 2015

@zpao Apologies (and thanks for the gentle guidance!).

There is a thread at https://discuss.reactjs.org/t/react-dom-stream-a-streaming-server-side-rendering-library-for-react-as-much-as-47-faster-than-rendertostring/2286 for anyone who might want to continue to chat about react-dom-stream.

@avesus

avesus commented Dec 23, 2015

I wonder how one of the main trending technologies has not implemented this yet. The idea of MVP is all around us in the well-known libraries and frameworks we use.
Streaming rendering is the natural solution for Node. How could one want it any other way?

Are there any plans to have a working renderToStream() included in React? In 6 months? In a year?

@sars

sars commented Jan 25, 2016

Hi there!
Are there any updates on this?

@zpao
Member

zpao commented Jan 26, 2016

We will update issues when we have updates. If you don't see anything from us, it's safe to assume there are no updates.

@megamaddu

That's a bit snarky. Many people are interested in this and don't want to risk it getting deferred due to a presumed lack of interest. And three months is probably a reasonable amount of time to wait before checking to see whether it's still being worked on. At least it wasn't a +1.

@iamdustan
Contributor

At least it wasn't a +1.

haha true that. 😄

If you were on a team that had the popularity and the weight of this public backlog of work to do, how would you respond?

(not meant to be criticizing any parties at all; I suspect I would have a similar response to @zpao's if I were in his shoes.)

@gaearon
Collaborator

gaearon commented Jan 26, 2016

And three months is probably a reasonable amount of time to wait before checking to see whether it's still being worked on.

Nowhere in this thread did anybody from the React team say they were working on this. react-dom-stream is a third-party project by @aickin, so feel free to try it and assess its stability and performance.

It is great that people explore this space but I want to clarify that streaming renderToStaticMarkup() is not currently being worked on by anyone on the React team to the best of my knowledge. The core React team has limited resources, and other issues are currently a bigger priority to Facebook.

Subscribe to this issue if you are interested in the updates but please consider other people subscribed to this issue who don’t want to receive this kind of back-and-forth talk. We will definitely post updates when and if there are any. In the meantime please feel free to contribute to the relevant discussion thread. Thank you!

@avesus

avesus commented Jan 26, 2016

@gaearon thank you, Dan! Appreciate the truth. Hope they find time for this or someone writes a good PR. When things are different, I hope they'll announce the changes here in this issue.

@gaearon
Collaborator

gaearon commented Aug 4, 2017

It’s implemented.

import { renderToStaticStream } from 'react-dom/server';

works in 16 beta 3 (and later).

Try 16 beta: #10294 (comment)

@gaearon gaearon closed this as completed Aug 4, 2017
@avesus

avesus commented Jan 1, 2018

Yay, lovely! Thank you all for the great work!

@gaearon
Collaborator

gaearon commented Mar 24, 2021

There’s also new work in this area related to waiting for IO. #20970
