
new_audit: add Max Potential FID to JSON #5842

Merged (9 commits) on Jan 30, 2019

Conversation

@patrickhulce (Collaborator) commented on Aug 16, 2018

Summary

Putting this up for discussion. Over email we settled on "Maximum Potential FID (mFID)" as a good lab FID proxy that's easy to understand. In our case, it's the duration of the longest task post-FCP, and this is its PR... :)
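
For anyone skimming, here's a rough illustration of the idea in code. This is a sketch only, not the PR's actual implementation; the input shape (an array of {endTime, duration} tasks in ms) is an assumption for illustration.

// Hypothetical sketch: the metric is the duration of the longest main-thread task that ends after FCP.
function maxPotentialFID(tasks, fcpTimeInMs) {
  const durations = tasks
    .filter(task => task.endTime >= fcpTimeInMs)
    .map(task => task.duration);
  // 16ms floor, mirroring the Math.max(..., 16) in this PR's audit code.
  return Math.max(...durations, 16);
}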

Open questions to answer along with my thoughts

  1. Should LH surface this metric?
    Yes, mFID is easily explainable and has a clear tie to our RUM push with FID.
  2. What should we call this metric?
    "Max Potential FID" is what was more or less settled on in the thread, but "Estimated Input Latency" is a prettier, more legit-sounding name to compete with, so I'm all ears for suggestions. I'd be so bold as to suggest the title be just "Expected First Input Delay", with the "maximum potential" explanation in the description tooltip, but I'm certain others will disagree :)
  3. What do we do with Estimated Input Latency?
    This PR moves it down to LHR-only, but we could do other things. Keeping it in right now makes the metrics section a bit weird with 3-4 metrics. EIL's implementation, with its thresholds and boundaries, has always been somewhat tenuous. Until we have a clear motivating example for devs, I'd prefer a FID stand-in. For now, we're just going to land this in the JSON and see how it compares in HTTPArchive.

Key Change Introduced By This PR

This PR also adjusts the EIL estimate multipliers to fully represent the Lantern prediction; previously, the raw EIL estimate was reduced by 20% before being surfaced in the UI. (This part will be split out into a separate PR.)
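
In concrete terms (illustrative numbers only, not code from this PR):

// Illustrative only: effect of the previous 0.8 multiplier on the surfaced value.
const rawLanternEIL = 250;                      // hypothetical raw Lantern estimate, in ms
const previouslySurfaced = rawLanternEIL * 0.8; // 200ms: the 20%-reduced value shown in the UI before
const fullPrediction = rawLanternEIL;           // 250ms: the full Lantern prediction, once that change lands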

cc @tdresser @philipwalton


(Screenshot attached: screen shot 2018-08-15 at 5 10 34 pm)

@patrickhulce patrickhulce changed the title new_audit(mfid): add Max Potential FID metric new_audit(mFID): add Max Potential FID metric Aug 16, 2018
@patrickhulce patrickhulce changed the title new_audit(mFID): add Max Potential FID metric new_audit: add Max Potential FID metric Aug 16, 2018
@tdresser commented:

@stubbornella

I think this is reasonable. It does make the story around page load interactivity a bit more confusing though.

Do we have any way of evaluating the success of a change like this?

@addyosmani - should we have some documentation on how mFID / FID / TTI all relate before shipping this in Lighthouse?

@philipwalton (Member) commented:

I'm OK with calling it "Estimated First Input Delay" (note, I wouldn't use the acronym since we don't use it for other metrics).

I'm also OK with removing estimated input latency entirely since it's not super clear how it's calculated (and thus not super clear how to fix it if you're getting a value higher than you expect).

Overall, I'm happy to see this as part of Lighthouse. If nothing else, it'll help make developers aware of long tasks in their script initialization that they should break up.

@brendankenny (Member) left a comment:

Well, technically it is the current EIL queried at the 100th percentile :D

The name is admittedly kind of awkward, but I like it because it really is the max potential FID, and "estimated"/"expected" each bring their own nomenclature baggage that I think will end up confusing things.

Current EIL I agree isn't super helpful, and I'd personally like to see it come back in moving-window graph form (when we resurrect the graph) as a diagnostic for thread busyness. That makes the issues with arbitrary window sizes and percentiles go away and doesn't discard the rich data available to LH in the trace in favor of a single summary statistic.
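
To make the moving-window idea concrete, here is a hypothetical sketch; the window size, input shape, and busyness definition are assumptions, not anything Lighthouse currently implements.

// Hypothetical: main-thread busy time per fixed window, suitable for plotting as a busyness graph.
function busyTimePerWindow(tasks, traceEndMs, windowMs = 1000) {
  const windows = new Array(Math.ceil(traceEndMs / windowMs)).fill(0);
  for (const {startTime, duration} of tasks) {
    let cursor = startTime;
    const end = startTime + duration;
    while (cursor < end) {
      const i = Math.floor(cursor / windowMs);
      if (i >= windows.length) break;
      const windowEnd = (i + 1) * windowMs;
      const slice = Math.min(end, windowEnd) - cursor;
      windows[i] += slice; // attribute the overlapping portion of the task to this window
      cursor += slice;
    }
  }
  return windows; // busy milliseconds per window
}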

).filter(evt => evt.duration >= 1);

return Promise.resolve({
  timing: Math.max(...events.map(evt => evt.duration), 16),

Member:

timing and not timeInMs?

@patrickhulce (Author):

yeppers

export interface Metric {
  timing: number;
  timestamp?: number;
}


const str_ = i18n.createMessageInstanceIdFn(__filename, UIStrings);

class MaxPotentialFID extends Audit {

Member:

can we get some overview comment to explain the basics of how this metric is calculated?

@patrickhulce (Author):

done

return {
  intercept: 0,
  optimistic: 0.4,
  pessimistic: 0.4,

Member:

What's this mean? The two graphs are the same. Both are adjusted?

Similarly, lantern-speed-index has two identical graphs but then has different coefficients.

@patrickhulce (Author):

The graphs aren't the differentiating part of the optimistic/pessimistic equation here; it's actually whether we use optimistic or pessimistic FCP as the start point :)

The 0.4 was to mirror EIL, but honestly the justification for it (that we'll overestimate EIL a tad) hasn't proved true in the data, so I'll stick to 0.5 (a simple average of the optimistic and pessimistic estimates).

For the speed index case, the "optimistic"/"pessimistic" labels are a real stretch. The "optimistic" one is just normal speed index on the fast trace; the graph is essentially ignored completely there. The "pessimistic" one is our Lantern pseudo-speed-index based on the TTI graph.
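
For context, a sketch of how coefficients like these typically combine the two simulated estimates (assumed shape, not the exact code in this PR):

// With intercept 0 and both coefficients 0.5, this is a simple average of the two simulations.
function combineEstimates(coefficients, optimisticEstimateMs, pessimisticEstimateMs) {
  return coefficients.intercept +
    coefficients.optimistic * optimisticEstimateMs +
    coefficients.pessimistic * pessimisticEstimateMs;
}
// e.g. combineEstimates({intercept: 0, optimistic: 0.5, pessimistic: 0.5}, 180, 260) === 220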

@paulirish (Member) commented:

Should LH surface this metric?

yup

What should we call this metric?

Def think it should tie into "First Input Delay" closely.
Right now I'm most fond of "First Input Delay Impact".

What do we do with Estimated Input Latency?

Comment it out until its glorious return as a rolling EQT (expected queueing time) diagnostic.

@addyosmani (Member) commented:

should we have some documentation on how mFID / FID / TTI all relate before shipping this in lighthouse?

Our interactivity narrative over the last quarter has focused on TTI in the lab and FID in the field; I'd love to make sure First Input Delay Impact or Estimated First Input Delay (whichever name we decide on) is well covered in our metrics guidance so it's clear:

  1. TTI still matters (eFID complements TTI without diminishing it).
  2. How to think about our two lab metrics for interactivity (eFID is a clear lab corollary for FID, and how it differs).

@philipwalton Will you have bandwidth to update our metrics docs to reflect the above?
@kaycebasques would be great to similarly see if we can get a Lighthouse doc up about eFID.

@patrickhulce patrickhulce changed the title new_audit: add Max Potential FID metric new_audit: add First Input Delay metric Aug 22, 2018
@patrickhulce patrickhulce changed the title new_audit: add First Input Delay metric new_audit: add Max Potential FID to JSON Dec 10, 2018

@patrickhulce (Collaborator, Author) commented:

The new plan here is to land this without exposing it for now, so we can take a look at the HTTPArchive results and see how it compares to EIL/TTI/etc.

@patrickhulce (Collaborator, Author) commented:

bump @brendankenny @paulirish :)

@paulirish (Member) left a comment:

LGTM.

if (node.type !== BaseNode.TYPES.CPU) continue;
if (timing.endTime < fcpTimeInMs) continue;

events.push({

Member:

you could just filter nodeTimings instead of constructing these new objects...
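
Roughly what that suggestion looks like, assuming nodeTimings is a Map and using the identifiers from the diff above (a sketch; the final code in the PR may differ):

// Sketch only: filter the existing nodeTimings entries instead of constructing new event objects.
const events = Array.from(nodeTimings.entries())
  .filter(([node, timing]) =>
    node.type === BaseNode.TYPES.CPU && timing.endTime >= fcpTimeInMs);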

@patrickhulce (Author):

good point! done
