report: sort performance audits based on impact #15445
Conversation
@adamraine feel free to chime in!
My understanding is that it's difficult to do this, and maybe not immediately worth it because of the audits requiring a reference to others & the metric scores and such that you mentioned. We could potentially try for this in a future iteration?
the linear impact here is mostly for the bottom section of the audits (the less prioritized audits). You're right that the
Yeah, this creates a situation where the result of one audit is dependent on the result of another audit. What happens when someone does
Pretty much this. It's a way to compare impact of audits whose log normal impact gets rounded down to 0 because the actual metric value is so big.
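The comment above can be sketched numerically: once a metric value is large enough, the score-based impact of any savings collapses to ~0, while a linear impact still ranks audits. A minimal sketch using an illustrative scoring curve (not Lighthouse's actual log-normal curve or constants):

```javascript
// Illustrative stand-in for a log-normal-style metric score: near 1 for
// fast values, asymptotically 0 for slow ones. Not Lighthouse's real curve.
function score(valueMs) {
  return Math.exp(-valueMs / 2000);
}

// Score-based impact: how much the metric score would improve if the
// audit's estimated savings were applied.
function overallImpact(metricValueMs, savingsMs) {
  return score(metricValueMs - savingsMs) - score(metricValueMs);
}

// With a huge metric value, both scores are ~0, so the impact vanishes...
const impact = overallImpact(60000, 3000);
// ...but a linear impact (savings relative to the metric value) still
// differentiates audits:
const linearImpact = 3000 / 60000;
```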
+1 to comments
Isn't the available information the same if doing it in the runner vs. in the report renderer? The values would just have to be optional, like how they're already treated in

Spending 30 seconds thinking about this, so there's probably a better way to do it and definitely better naming possible, but:

```ts
interface MetricSavings {
  LCP?: {
    value?: number;
    score?: number;
  };
  FCP?: {
    value?: number;
    score?: number;
  };
  CLS?: {
    value?: number;
    score?: number;
  };
  TBT?: {
    value?: number;
    score?: number;
  };
  INP?: {
    value?: number;
    score?: number;
  };
}
```
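A consumer-side sketch of the optional shape proposed above, assuming the savings live on an audit's `metricSavings` field. The field name and the idea that `value` is milliseconds are assumptions from this thread, not the final Lighthouse JSON shape:

```javascript
// Sum the millisecond-based savings from an optional MetricSavings-shaped
// object. CLS is skipped because its savings are unitless, not ms.
// All field names follow the interface sketched above and are assumptions.
function totalSavingsMs(metricSavings) {
  let total = 0;
  for (const metric of ['LCP', 'FCP', 'TBT', 'INP']) {
    const entry = metricSavings && metricSavings[metric];
    if (entry && typeof entry.value === 'number') total += entry.value;
  }
  return total;
}
```

Because every level is optional, a consumer has to guard each access; the helper above degrades to 0 rather than throwing when fields are absent.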
The difference is that In general, I'm unsure if we should expose
FWIW:
I wasn't suggesting that the sorting is internal, but I don't think we are obligated to expose the exact audit order in the JSON.
You might find this challenging to do because the report before this change has two different groups and sorts these categories using heuristics that are internal to the report renderer.
It will still be possible to compute the impact for scoring impact in HTTPA using the metric scoring options and
I think our options for this are:
I don't believe that's the case, because almost all the new perf audit infrastructure is optional.

In any case, it does feel like the JSON consumer story is being treated as an intermediate HTML-report-generation step here rather than an endpoint of its own. BUT y'all have done a lot of good work and thinking on this and I don't want to block the effort. Carry on! ✌️
```js
const {
  overallImpact: aOverallImpact,
  overallLinearImpact: aOverallLinearImpact,
} = this.overallImpact(a, metricAudits);
```
right now we recompute each audit's overallImpact each time this comparator is run.
it ends up not being super costly, but... it just seems like good practice to avoid recomputing the same thing. can we move that higher?
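One way to follow the suggestion above is a decorate-sort-undecorate pass: compute each audit's impact once, then sort on the cached values. Here `computeImpact` stands in for the renderer's `overallImpact` helper; all names are illustrative, not the PR's actual code:

```javascript
// Compute impact once per audit instead of on every comparator call, then
// sort by score impact with linear impact as the tie-breaker.
function sortByImpact(audits, computeImpact) {
  const decorated = audits.map(audit => ({audit, ...computeImpact(audit)}));
  decorated.sort((a, b) =>
    (b.overallImpact - a.overallImpact) ||
    (b.overallLinearImpact - a.overallLinearImpact)
  );
  return decorated.map(entry => entry.audit);
}
```

This turns O(n log n) impact computations into O(n), and also keeps the comparator trivially consistent since each audit's impact is frozen before the sort begins.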
```js
.filter(audit => this._classifyPerformanceAudit(audit) === 'load-opportunity')
.filter(audit => !ReportUtils.showAsPassed(audit.result))
.sort((auditA, auditB) => this._getWastedMs(auditB) - this._getWastedMs(auditA));
```
```js
const filterableMetrics = metricAudits.filter(a => !!a.relevantAudits);
```
what about the metric filter? it seems like we have all the info we need to resort when we filter to just FCP, etc.
Good point. That might be best left as a follow-up; the old metric filter is adequate for now. After this PR but before the release?
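For the follow-up discussed here, re-sorting under the metric filter could look like ranking audits by their savings for just the selected metric. This sketch assumes `metricSavings` maps metric acronyms to estimated savings numbers, per the shape discussed earlier in the thread; it is not the PR's implementation:

```javascript
// When the report is filtered to a single metric (e.g. 'FCP'), keep only
// audits with savings for that metric and sort by that value, descending.
function sortForMetricFilter(audits, metric) {
  return audits
    .filter(a => a.metricSavings && typeof a.metricSavings[metric] === 'number')
    .sort((a, b) => b.metricSavings[metric] - a.metricSavings[metric]);
}
```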
sure. clearly something about the score/icons will also have to land post this-PR, pre-release.
sg
This removes the `Opportunities` section and moves its audits into `Diagnostics`.
In a follow-up (#15447), score will be modified to use impact and will then also be used as a sorting variable.
https://lighthouse-git-metric-savings-score-googlechrome.vercel.app/sample-reports/english/