RFC: V8 Performance Insights #1158
Comments
Fixing total script size often requires massive re-architecting of a site.
I think it's going to be a harder sell to get folks to drop their frameworks, but guiding them towards loading less for a route/page and deferring the rest of that work until later might be more feasible. Perhaps framing the problem in more actionable terms might help? Insight: script X is large
For starters, raising awareness that "you're loading too much script" is a good signal. I'd love to see the Performance Metrics get beefed up and become its own section. There are some DBW performance audits we can move into there as well. Also filed #1145 to add a bunch of new metrics to the report.
I think it would be very useful to highlight parse/compile cost. I'd even say that parse cost is more important here, as compilation (usually) only happens when you also execute the code, which often doesn't happen during TTI. Maybe it'd even make sense to separate pre-parse and parse cost, although for a user it might be difficult to draw any useful conclusions from that. I especially like the idea of attributing the cost to a specific script and suggesting to lazy load or split the bundle.
@brendankenny and I talked about this a little yesterday. We're going to initially investigate exposing top-level time spent in JS (parse/compile/maybe GC/etc.), included lower down in the report around the critical requests section. Once it's there we can build on it in the future (maybe introducing a "hey, you're spending too much time here", budgets, etc.).
Lighthouse currently exposes a Performance Metrics section that includes items like Time-to-interactive. This is great, but what if we helped you understand why you weren't doing well on some of these metrics, with attribution that ties back to your source?
A poor TTI may be due to scripts keeping the main thread busy. Some factors that contribute to this are script-specific:
JS bundle sizes: Large, expensive JS bundles can take proportionately longer to compile and run, and they'll almost certainly peg the main thread for longer depending on the work being done. Profiling a few thousand sites, I've noticed the average React or Angular app ships 700KB+ of script, while those generally hitting a pleasing TTI ship < 300KB of script. Similar to how we have a notion of critical request chains, we could highlight total JS size as a potential bottleneck 🔆.
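A minimal sketch of what such a check could look like, assuming Lighthouse-style network records with a `resourceType` and `transferSize` on each request; the 300KB budget is just the figure quoted above, not an agreed-upon threshold:

```js
// Hypothetical "total script size" check. The record shape (url,
// resourceType, transferSize) is an assumption; the budget is
// illustrative, taken from the numbers above.
const SCRIPT_BUDGET_BYTES = 300 * 1024;

function checkTotalScriptSize(networkRecords) {
  const scripts = networkRecords.filter(r => r.resourceType === 'Script');
  const totalBytes = scripts.reduce((sum, r) => sum + (r.transferSize || 0), 0);

  return {
    withinBudget: totalBytes <= SCRIPT_BUDGET_BYTES,
    totalKB: Math.round(totalBytes / 1024),
    // Surface the heaviest scripts so the report can point at specific URLs.
    heaviest: scripts
      .sort((a, b) => (b.transferSize || 0) - (a.transferSize || 0))
      .slice(0, 5)
      .map(r => ({url: r.url, kb: Math.round((r.transferSize || 0) / 1024)})),
  };
}
```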
Depending on the UX for this, we might also be able to hint at using techniques like code-splitting to break up those larger scripts and only ship down what is useful for a route.
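For example, a route written around a dynamic `import()` lets webpack split that route into its own chunk that is only fetched on demand (the module path and function names here are made up for illustration):

```js
// Route-level code-splitting: webpack emits './routes/settings' as a
// separate chunk and only fetches it when the user navigates there.
function loadSettingsRoute() {
  return import(/* webpackChunkName: "settings" */ './routes/settings')
    .then(settings => settings.render());
}

document.querySelector('#settings-link').addEventListener('click', event => {
  event.preventDefault();
  loadSettingsRoute();
});
```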
Another iteration on this could break down large scripts by origin, so it's easy to see when large bundle sizes are an origin-level concern vs. something a third-party is pulling into your page. I've seen a few otherwise "okay" sites having a party on the main thread 🎉 because an ad was loading in all of Angular.js. It'd be good to highlight whether the issue is your app/framework code or something slightly out of your control.
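A sketch of how that breakdown could work, reusing the same hypothetical network-record shape and grouping script bytes by origin:

```js
// Group script transfer sizes by origin so first-party and third-party
// costs show up separately. Record shape is the same assumption as above.
function scriptBytesByOrigin(networkRecords, pageUrl) {
  const pageOrigin = new URL(pageUrl).origin;
  const totals = new Map();

  for (const record of networkRecords) {
    if (record.resourceType !== 'Script') continue;
    const origin = new URL(record.url).origin;
    totals.set(origin, (totals.get(origin) || 0) + (record.transferSize || 0));
  }

  return [...totals.entries()]
    .map(([origin, bytes]) => ({
      origin,
      kb: Math.round(bytes / 1024),
      firstParty: origin === pageOrigin,
    }))
    .sort((a, b) => b.kb - a.kb);
}
```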
Size alone is just one perspective on this problem, however.
Parse/compile costs: There's a need for more devs to understand that parse/compile can be a real bottleneck. Many of the traces I look at have folks spending 4-5s in just this phase, often pushing out how soon a page is sufficiently booted up to be useful. Sites that are interactive within our current public budgets (< 5s) typically spend < 3500ms in parse/compile (before trying to progressively load in anything else). Most of the Chrome loading team's mobile URL sets spend < 2.5s in parse/compile, so I feel like starting with a 2.5-3.5s budget for this and tweaking it over time might make sense.
Practically, we could introduce a notion of long scripts, highlighting the top 5-10 scripts on your page that are parse/compile-heavy so you know where time is being spent ⌚️
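As a rough illustration of how a "long scripts" list could be derived: the traces Lighthouse already records contain per-script parse/compile and evaluate events. The event names and `args.data.url` shape below reflect current Chrome traces but aren't a stable API, so treat this as a sketch:

```js
// Attribute parse/compile/evaluate time to script URLs from raw trace
// events. Event names ('v8.compile', 'v8.parseOnBackground',
// 'EvaluateScript') and args.data.url are assumptions based on current
// Chrome traces and may change.
function longScripts(traceEvents, limit = 10) {
  const INTERESTING = new Set(['v8.compile', 'v8.parseOnBackground', 'EvaluateScript']);
  const msByUrl = new Map();

  for (const event of traceEvents) {
    if (!INTERESTING.has(event.name) || !event.dur) continue;
    const url = event.args && event.args.data && event.args.data.url;
    if (!url) continue;
    // Trace event durations are reported in microseconds.
    msByUrl.set(url, (msByUrl.get(url) || 0) + event.dur / 1000);
  }

  return [...msByUrl.entries()]
    .map(([url, ms]) => ({url, ms: Math.round(ms)}))
    .sort((a, b) => b.ms - a.ms)
    .slice(0, limit);
}
```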
We know that V8 spends a lot of time in the parse phase, and there may be something interesting we can do here with Runtime Call Stats to expose causes of slowness.
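One way to get at that data today is to record a trace with the V8 Runtime Call Stats category enabled; the Puppeteer snippet below is just a convenient way to drive Chrome for illustration, not how Lighthouse gathers its traces:

```js
// Record a trace that includes V8 Runtime Call Stats buckets by enabling
// the disabled-by-default tracing category. Puppeteer is used here only
// as an easy way to script Chrome.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  await page.tracing.start({
    path: 'trace-with-rcs.json',
    categories: [
      'devtools.timeline',
      'v8',
      'disabled-by-default-v8.runtime_stats', // runtime call stats
    ],
  });
  await page.goto('https://example.com', {waitUntil: 'networkidle0'});
  await page.tracing.stop();
  await browser.close();
})();
```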
Whether we would go more granular than parse/compile is up for discussion. We could potentially also highlight GCs, idle-time etc, but I'd start with the biggest contributors to slow TTI and work our way from there.
I thought @samccone's CDS talk did a 💯 job of highlighting the cost of parse/compile on mobile and how easy it can be to trip up.
What else would this enable?
Over in Webpack, we just landed support for JS performance budgets, highlighting large page-level and chunk-level scripts. I would love for us to be able to use Lighthouse for getting insight into parse/compile costs at some point, as this is something we can't really do statically without instrumenting Chrome/V8. I similarly think other tooling vendors would be interested in tapping into this feature.
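For reference, that webpack support is configured via the `performance` options; the specific byte values below are illustrative rather than recommendations from this issue:

```js
// webpack.config.js: performance budgets that warn when an entry point's
// scripts or an individual asset grow past a threshold.
module.exports = {
  // ...entry, output, loaders elided...
  performance: {
    hints: 'warning',          // use 'error' to fail the build instead
    maxEntrypointSize: 300000, // total bytes needed to boot an entry point
    maxAssetSize: 250000,      // bytes for any single emitted asset
  },
};
```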
Where does this live?
It's unclear to me if this is a DoBetterWeb item or something we could evolve on top of the existing Performance Metrics portion of the report. I could see Performance Metrics becoming Performance with a section on attribution. I could also see Metrics staying the way they are now, with attribution insights available via a "more info" expansion or a link to a DBW section lower down the page. Opinions welcome 👓