
[Tracking] Metric - Report screen load time (Performance audit by Callstack) #34494

Closed
rinej opened this issue Jan 15, 2024 · 27 comments

@rinej
Contributor

rinej commented Jan 15, 2024

Metric: Report screen load time
Phase: Measurements
Commit hash: callstack-internal@f15ed75

Report screen load time:

Marker - open_report
Description - duration from pressing a report list item in the LHN to the report page being fully interactive

Round Duration(ms)
1 22522.100616
2 23424.314386
3 17239.968001
4 25921.813463
5 4211.818077
6 4687.519308
7 13862.709540
8 2771.418000
9 4587.897923
10 28249.186386
11 4718.193616

FPS:

Workflow:

  • open up report screen
  • wait until fully loaded
  • scroll up
  • scroll down
Round FPS
1 59.1
2 59.4
3 59.5
4 59.2
5 59.4
6 59.7
7 59.3
8 59.3
9 59.3
10 59.5

Average FPS: 59.37

Example FPS chart from flashlight measure:

fps_7

Additional markers Avg CPU usage, AVG RAM usage:

Round Avg CPU usage (%) AVG RAM usage (MB)
1 161 823.9
2 142.8 817.3
3 144.6 828.9
4 145.8 931.6
5 143.1 808.2
6 136.2 810.9
7 144.4 809.1
8 144.1 834
9 148.2 813.4
10 133.4 801

Avg from all rounds -> CPU usage (%): 144.36
Avg from all rounds -> RAM usage (MB): 827.83

Example charts from flashlight measure:

Total CPU:
total_cpu_7

CPU per Thread:
cpu_per_thread_7

RAM:
ram_7

@rinej rinej changed the title [Tracking] Metric - Average FPS on Report screen load (Performance audit by Callstack) [WIP] [Tracking] Metric - Average FPS on Report screen load (Performance audit by Callstack) Jan 15, 2024
@rinej
Contributor Author

rinej commented Jan 15, 2024

Tracking issue for the Report screen load time metric as part of the Performance audit conducted by Callstack.

Part of #33070

@rinej rinej changed the title [WIP] [Tracking] Metric - Average FPS on Report screen load (Performance audit by Callstack) [Tracking] Metric - Average FPS on Report screen load (Performance audit by Callstack) Jan 15, 2024
@melvin-bot melvin-bot bot added the Monthly KSv2 label Jan 18, 2024
@rinej rinej changed the title [Tracking] Metric - Average FPS on Report screen load (Performance audit by Callstack) [Tracking] Metric - Report screen load time (Performance audit by Callstack) Jan 19, 2024
@adhorodyski
Contributor

FYI: this issue has been edited to unify the 2 metrics which we thought were overlapping too much - load time in ms and FPS while loading are so closely related that posting them separately only made things less clear.

All of our research and analysis regarding opening up a report will be posted under this one issue :)

cc @mountiny

@mountiny
Contributor

Thank you, makes sense!

@adhorodyski
Contributor

A quick update - we're aiming to provide the analysis by the EOW so we can start picking action items for this scope.

@adhorodyski
Contributor

Update! The analysis will most probably drop tomorrow, it's now going through our internal review process :)

@muttmuure muttmuure self-assigned this Feb 13, 2024
@adhorodyski
Contributor

adhorodyski commented Feb 14, 2024

Metric: Report screen load time
Phase: Analysis

This analysis was produced and authored by Jakub Romańczyk (@jbroma) and later coauthored by @adhorodyski.


Introduction

In our investigation we used the data measured during the previous phase; since the majority of the workload was concentrated on the JavaScript (JS) thread, we focused primarily on that aspect. Utilizing react-native-release-profiler, we gathered traces to identify the specific functions responsible for the significant load on the JS thread during the ReportScreen's loading phase. Notably, our findings intersect substantially with metrics such as App start time, Send a message and Search screen load time.

Trace collection was done with the help of a new set of markers placed across the code:

  • Performance.markStart(CONST.TIMING.OPEN_REPORT) ~> start of trace collection
  • Performance.markEnd(CONST.TIMING.OPEN_REPORT) ~> end of trace collection

Please see this commit for the exact implementation.
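As a rough illustration of the pattern, the pair of markers boils down to something like the stand-in below (this is our own sketch, not the app's actual Performance module):

```typescript
// Minimal stand-in for the Performance.markStart/markEnd pair described above.
// The real implementation lives in the linked commit; all names here are ours.
const marks = new Map<string, number>();

function markStart(name: string): void {
    marks.set(name, Date.now());
}

function markEnd(name: string): number {
    const start = marks.get(name);
    if (start === undefined) {
        throw new Error(`No start mark for ${name}`);
    }
    marks.delete(name);
    // The elapsed time between the two markers is what the trace covers.
    return Date.now() - start;
}
```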


Detailed Analysis

We've found the following hot areas during the direct analysis of traces:

  • getOrderedReportIDs
  • replaceAndExtractEmojis

On top of that, taking a more holistic approach, we have also investigated the possibility of optimizing Onyx connections on the ReportScreen within subcomponents, as well as the current architecture behind data persistence and its potential alternative.

Baseline Hermes trace in the JSON format

getOrderedReportIDs

baseline-all

Overview and the lack of freezing

We've identified the getOrderedReportIDs function as the biggest chokepoint during the loading process of the Report Screen. It's responsible for preparing the data for the LHN to render and is calculated inside the SidebarScreen's React tree.
It's very important to note where it's getting called from, as it's not directly tied to the ReportScreen we're transitioning to.

Each time SidebarLinksData rerenders, getOrderedReportIDs fires, causing a big slowdown and massively impacting load times as well as CPU usage, effectively blocking the JS thread. This happens both during the transition and afterwards when interacting within the screen, which is why it also impacts the other Core Metric, Sending a message.

The reason this happens in the first place is because of the lack of freezing on the SidebarScreen when navigating directly out of it. Since SidebarScreen is a direct parent to the ReportScreen on the navigation stack, current logic inside of the FreezeWrapper component prevents the freezing mechanism from kicking in.

When a part of the React tree is not in view (which is the case here on mobile devices), calculating things in the background brings no value to the end user and can degrade performance while they interact with the visible parts of the UI. This would not be a big deal with a lightweight tree, but in this case the heavy calculations really do impact performance, and preventing them from running will vastly improve the user experience.

The rest of the analysis on this function's implementation still holds and is incredibly important, as it directly impacts all 3 Core Metrics, but in our opinion the most important step in addressing the long loading times of the ReportScreen is resolving the freezing aspect, so we can prevent this function from running in the background in the first place when it's unnecessary.

Breakdown

With this context in mind, please see the breakdown below - the function can be divided into 4 main parts which show up in the flamegraph:

  1. Cache lookup
  2. Filtering
  3. forEach loop with property additions and grouping
  4. Sorting

Cache lookup

This finding is the same as described by Kacper in his Send a message TTI metric analysis.

In essence, the cache lookup requires JSON.stringify to create a key for the cache, which takes a lot of time (~300ms in the trace below). The cache is rarely hit because of the sheer number of parameters that can change, and it is limited to 5 entries due to the memory impact of its keys and values.

baseline-json-stringify

The usefulness of the cache is almost non-existent (please see the base trace), and its removal should be considered to lower the execution time of getOrderedReportIDs (in this case by 300ms).

Filtering

In short, this part of the function cannot be meaningfully optimised further, as most of it represents the business logic for choosing the reports to display. As long as it all lives on the JS thread and cannot be offloaded to the SQL layer that is available underneath (more on this later), it can only be micro-optimised, so we decided not to spend more time on this aspect. For now, the bigger the dataset, the longer the .filter() will take to execute.

forEach loop

The loop is responsible for 2 things:

  • adding extra properties (displayName & iouReportAmount) to the report object
  • grouping the reports by category

The displayName field

displayName is obtained from the getReportName function, which is very logic-heavy. Our investigation revealed that most of the overhead comes from its last part - creating a report name out of the involved users' names.

function getReportName(
  report: OnyxEntry<Report>,
  policy: OnyxEntry<Policy> = null
): string {
  // ...

  // Case: "Not a room or PolicyExpenseChat"
  const participantAccountIDs = report?.participantAccountIDs ?? [];
  // filtering through all of the report's participants
  const participantsWithoutCurrentUser = participantAccountIDs.filter(
    (accountID) => accountID !== currentUserAccountID
  );
  const isMultipleParticipantReport = participantsWithoutCurrentUser.length > 1;

  // mapping all of the participants to their display names to get the resulting string
  return participantsWithoutCurrentUser
    .map((accountID) =>
      getDisplayNameForParticipant(accountID, isMultipleParticipantReport)
    )
    .join(", ");
}

In our baseline trace, we noticed that this function can take over 1000ms to execute when there are a lot of participants. This fallback comes with a lot of unnecessary processing and does not really provide much information for a group name, due to how it's later displayed in the App: the LHN's layout truncates it, so the user can't see the full name anyway - the one we spent time calculating for every report.

Here is a list of different approaches we designed to address this bottleneck:

Slice approach

Since the name of the chat doesn't need to be unique, we can introduce a simple fix by creating the name out of only a small portion of the participants (a slice of 3-4 of them). This should result in no visible changes but save a lot of computation, netting a 50% reduction in the execution time of getReportName.
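A minimal sketch of the slice idea, assuming the display names are already resolved (the constant and function names below are ours, not from the codebase):

```typescript
// Only the first few participants ever fit into the LHN row before it
// truncates, so computing display names for the rest is wasted work.
const MAX_PARTICIPANTS_IN_NAME = 4;

function buildTruncatedReportName(participantDisplayNames: string[]): string {
    return participantDisplayNames.slice(0, MAX_PARTICIPANTS_IN_NAME).join(', ');
}
```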

Static default approach

We can go one step further and replace the name with a static default like New Report (or anything more informative and translated, but still static). Using a default can result in gains as high as an 83% reduction in this function's execution time.

Computed value approach

It might also be worth considering computing the name of the report in advance and storing it in Onyx directly, reducing the execution time even further and making this function obsolete across the whole app.

To sum up, this becomes more of a design decision which now highly impacts performance. Whichever approach we decide to take, it's necessary to address this matter, as it is considered a low-hanging fruit. Addressing this problem will also have a very positive impact on the Search screen load time metric, as proven by Tomek in this analysis.

The iouReportAmount field

The addition of iouReportAmount seems unnecessary since it's not being used anywhere in the app, and it should be considered for removal. Its impact on performance is negligible (10ms cumulative), but removing it trims some fat nonetheless.

Grouping

Lastly, the grouping part of the forEach loop is not optimizable in any way and is fine as it is.

Sorting

Sorting inside getOrderedReportIDs is a very heavy operation, mostly due to the use of localeCompare, which was already flagged as a bottleneck in the App Startup metric's analysis prepared by Hur Ali.

Almost all of the work is done by sorting the nonArchivedReports.

// before
nonArchivedReports.sort((a, b) => {
    const compareDates =
      a?.lastVisibleActionCreated && b?.lastVisibleActionCreated
        ? compareStringDates(
            b.lastVisibleActionCreated,
            a.lastVisibleActionCreated
          )
        : 0;
    const compareDisplayNames =
      a?.displayName && b?.displayName
        ? a.displayName.toLowerCase().localeCompare(b.displayName.toLowerCase())
        : 0;
    return compareDates || compareDisplayNames;
});

After examining the logic, we've come to the conclusion that compareDisplayNames is being unnecessarily computed in cases where compareDates evaluates to a non-falsy value. By introducing a simple if statement, we can greatly reduce the work done in this most common scenario and thus optimise the function for the hot path.

// after
nonArchivedReports.sort((a, b) => {
    const compareDates =
      a?.lastVisibleActionCreated && b?.lastVisibleActionCreated
        ? compareStringDates(
            b.lastVisibleActionCreated,
            a.lastVisibleActionCreated
          )
        : 0;
    if (compareDates) {
        return compareDates;
    }
    const compareDisplayNames =
      a?.displayName && b?.displayName
        ? a.displayName.toLowerCase().localeCompare(b.displayName.toLowerCase())
        : 0;
    return compareDisplayNames;
});

Since this change effectively uses localeCompare only as a fallback, it allowed for a reduction of the execution time from 15s to just 300ms.
localeCompare is also used for sorting in other branches, so addressing its performance issues is critical. Nonetheless, even after fixing it, the optimisation shown above will still prove beneficial, as its call count is very high.

baseline-sort


replaceAndExtractEmojis

During the analysis of the traces we spotted that replaceAndExtractEmojis takes a significant amount of time (577ms) when visiting a chat for the first time. Most of that time originates from the initialization of the Trie data structure used for storing available emojis in a search-friendly way. This is currently tracked under the CONST.TIMING.TRIE_INITIALIZATION performance marker (Timing).

Digging into it, we've pinpointed the issue down to the add method, which currently looks like this:

add(word: string, metaData: Partial<TMetaData> = {}, node = this.root, allowEmptyWords = false) {
    const newWord = word.toLowerCase();
    const newNode = node;
    if (newWord.length === 0 && !allowEmptyWords) {
        throw new Error('Cannot insert empty word into Trie');
    }
    if (newWord.length === 0) {
        newNode.isEndOfWord = true;
        newNode.metaData = metaData;
        return;
    }
    if (!newNode.children[newWord[0]]) {
        newNode.children[newWord[0]] = new TrieNode();
        this.add(newWord.substring(1), metaData, newNode.children[newWord[0]], true);
    }
    this.add(newWord.substring(1), metaData, newNode.children[newWord[0]], true);
}

The current implementation of add processes every entry twice because of the incorrect branching at the end of it, combined with the recursive nature of the function. The second pass is a bit faster, since the required TrieNodes have already been created. By removing the unnecessary call wrapped in the if statement, we get the following:

add(word: string, metaData: Partial<TMetaData> = {}, node = this.root, allowEmptyWords = false) {
    const newWord = word.toLowerCase();
    const newNode = node;
    if (newWord.length === 0 && !allowEmptyWords) {
        throw new Error('Cannot insert empty word into Trie');
    }
    if (newWord.length === 0) {
        newNode.isEndOfWord = true;
        newNode.metaData = metaData;
        return;
    }
    if (!newNode.children[newWord[0]]) {
        newNode.children[newWord[0]] = new TrieNode();
    }
    this.add(newWord.substring(1), metaData, newNode.children[newWord[0]], true);
}

With the above fix, we've managed to reduce the execution time of replaceAndExtractEmojis from the initial 577ms down to 352ms - a 39% reduction. The rest of the implementation showed no abnormalities; the remaining processing time comes from the sheer quantity of data. It's also worth noting that the emojiTrie is only initialized once, so subsequent uses are much faster.

ReportActionItem & ReportActionItemParentAction

ReportActionItem

Previous work on the LHNOptionsList showed that moving Onyx subscriptions up the tree to the parent component which holds the list allowed for big performance gains. We wanted to investigate whether it makes sense to do the same for the ReportActionsList.

After careful analysis, it seems that it's not worth it because of how the collection of emojiReactions is currently stored. Accessing the iouReport property in ReportActionItem in any other way is also unfeasible in the current architecture.

emojiReactions are indexed by reportActionID, so by moving the subscription to the root ReportActionsList we would need to subscribe to the reactions from all reports at once, which would cause a great number of re-renders afterwards. One way to mitigate that would be to use a selector in the withOnyx configuration and filter the subscribed reactions by sortedReportActions, which are passed as a prop to ReportActionsList. This is not possible right now, as selectors have no access to a component's props (unlike the key property, when a function is passed). It might also prove unfeasible because every update to emojiReactions would result in an O(n*m) array scan (since we have sortedReportActions of size m and a collection of emojiReactions of size n). This could be optimised to O(n), but would require passing indexed reportActions instead.
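The O(n) variant mentioned above could be sketched roughly as follows, assuming the report actions arrive already indexed by reportActionID (all names here are ours, for illustration only):

```typescript
// With actions keyed by reportActionID, filtering n reactions becomes
// n constant-time lookups instead of an O(n*m) scan over an actions array.
type ReportAction = {reportActionID: string};
type EmojiReactions = Record<string, unknown>;

function filterReactionsForActions(
    reactions: EmojiReactions,
    actionsByID: Record<string, ReportAction>,
): EmojiReactions {
    const result: EmojiReactions = {};
    for (const [reportActionID, reaction] of Object.entries(reactions)) {
        // O(1) membership check per reaction
        if (actionsByID[reportActionID]) {
            result[reportActionID] = reaction;
        }
    }
    return result;
}
```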

When it comes to iouReport, the situation is quite similar, as we would need to pass the collection of all reports to find the related ones in the ReportActionsList component.

Without changes to how this data is stored, it's not feasible to move subscriptions up the tree to the above-mentioned ReportActionsList component. Since the biggest overhead comes from establishing the connection to Onyx to enable subscriptions, it's not worth moving the rest of the props as well just to prevent unnecessary prop drilling.

ReportActionItemParentAction

The subscriptions in ReportActionItemParentAction are largely redundant, since report can be passed from the parent component and the subscription can be removed completely in this case. As for parentReportActions - since both ReportActionItemParentAction and ReportActionItem subscribe to it, we could consider moving that subscription to the ReportActionsList itself.

Overall, this would reduce the number of subscriptions by the number of items which render ReportActionItemParentAction, minus 1 (as there would be one subscription left in the list component). It has been proven in the past that limiting the number of connections is beneficial, so we should consider this as another action item.

Onyx improvements

Apart from the things listed above, we've also analysed some potential improvements in the Onyx layer. These were researched and later proposed by Eduardo Graciano in this PR regarding keyChanged, and we were able to confirm that they positively impacted this metric, e.g. by limiting the time spent in keyChanged by a huge margin - from ~150ms to about ~7ms per call.

When isolated, these updates on average provided us with a ~300ms reduction in the total load time of the ReportScreen.

Data persistence architecture

Problem

With the target goal of 500ms, there's definitely still room for improvement. It's important to note that this metric's performance shouldn't be tied in any way to the size of the account we're using - yet for now it is.

Fundamentally, performance should be consistent across all accounts, no matter the size and amount of data. The time it takes to load a screen in an offline-first application should be way below our target goal - instant for the end user, as the data is available locally and we're reading from disk rather than relying on the network connection.

In the long run, if we want to stay competitive against other apps, we should definitely leverage the offline-first nature to our advantage. Speed is well proven to be a big selling point, and we should strive to be the fastest app on the market, especially given our technological stack (including both React and SQLite), which fits perfectly into this paradigm.

Based on our numerous weeks-long investigations, we believe that the reason we struggle to hit our performance goals lies in the current architecture behind data persistence across the app. Architecture, because even though our 'queries' run relatively fast in isolation (time to get a list of all reports, a list of all messages, all reactions, etc.), the app is later forced to process and combine all of the data on the JS thread.

As long as we rely on the key-value pattern (no matter the internal implementation), with no way to efficiently query for only the data we currently need for a scenario, the bigger the account, the bigger the computational overhead on the JS thread - the thread responsible for handling user interactions, which should always stay responsive.

Moreover, performance will potentially remain a concern for features released in the future, whenever they require more data processing of the kind currently handled by multiple costly sort(), filter() or reduce() operations on big datasets.

As you can see from the previous sections, the biggest bottlenecks are directly tied to the amount of data we're handling. This problem, however, can't be solved by micro-optimisations like the ones presented above; it requires a more holistic approach. To sum up, we can get closer and closer to our target on accounts of a certain size, but certainly not on all of them, and in the long run we won't be able to reach it without addressing the root cause of the problem.

Solutions

To help address this matter, we've come up with the below recommendations:

  • Data-heavy operations should be offloaded to run on a different thread. Think 'querying for the most recent chats with their last messages and their authors'.

Here we see big potential in SQLite, as we're already using it in the app. With the relational data model we're already relying on (e.g. 'report_id' fields mimicking references), we could store entities in different tables, enabling us to query across them really efficiently.

Benefits

This brings the benefits like listed below:

  • Short data-to-UI loop - efficient, once-defined queries running outside of JS would allow us to use state-of-the-art React solutions for managing asynchronous state, together with React's concurrent features for managing Promises, to really shorten the loop between making a query and the React tree being rendered.
  • Scalability - for a frontend application, scale won't be a problem anymore with the help of the database engine. Indexes, views, clauses like WHERE, LIMIT, OFFSET, ORDER BY - all the goodies are available to us and can be used to our advantage.
  • Knowledge sharing - we could use the expertise of anyone proficient in SQL to help us optimise our queries, as this knowledge would apply directly.
  • Data consistency - with the help of the database engine we could also enforce constraints on the data, making sure it's always consistent and valid. This also includes reliable migrations.
  • Running migrations - these could run fast and in the background, allowing for a fast app start time.
  • Type-safe database access - with the help of today's TypeScript tooling around databases, we could optionally employ an ORM (like drizzle-orm) in between, to help us work with the database in a type-safe manner. This would also help us avoid potential bugs and make the code more maintainable.

Challenges

This approach, however, also comes with a few challenges (we've skipped the web here, as SQLite support for the web is only just landing). The biggest one we've identified is the migration path.

  • How do we migrate from the current APIs without breaking the app?

We believe this is a big challenge and it deserves a separate take. It's definitely worth considering this approach, and with the amount of experts on board we're confident we can design a solution that is not only performant but also the least intrusive to the current codebase. Thus, we recommend forming a working group on this matter to further investigate the potential of this solution. This shouldn't impact any of the ongoing initiatives, but rather run in parallel as a separate one, so as not to block what's currently in the backlog for the next months - especially as this will live very long in the design phase.

Examples

To give you a better overview of what this might look like in the wild, we've prepared an example app which uses SQLite directly in React Native, utilising the concurrent features of React to manage state and react-query (as an example) to help handle Promises. This is a proof of concept and should be treated as such, but it's a good starting point for a discussion on how performant this architecture can be in the real world.

A chat app with React Native + SQLite

Moreover, "a picture is worth a thousand words", so please see this X post for a visual representation of what this might look like in the real world.

In this example app, we were able to achieve ~1-4ms to query for the 'chat' itself, plus ~1-15ms to load 'the latest messages with their authors' for this chat, all while working with more than 100k database rows. Most importantly, these numbers do not vary no matter how many chats, messages or users we seed into the database (here 100k+) - which is nothing crazy considering how short the data loop is with this approach.

There are also numerous ready-to-use solutions on the market aiming to solve this problem in a similar way. We're linking them here for reference:

Evolu

Summary

Using only the fixes mentioned above (getOrderedReportIDs + replaceAndExtractEmojis), we were able to reduce the ReportScreen load time from 22.29s to 5.02s (with a default name for a report) and 5.13s (with a short name for a report), respectively.

The best-case scenario, which included the fixes proposed in the App start time metric, resulted in a time of ~3.5s for the Heavy account we used for testing.

@adhorodyski
Contributor

cc @mountiny, I know it's quite a write-up 😅

Please feel free to bring in anyone to help us with picking the action items! Not all of them have already been brought up under the different metrics, so there's still some work to be done, but this marks the final stage of the Analysis phase :)

Let me know if I can help you guys with anything in here, happy to answer any questions.

@mountiny
Contributor

@adhorodyski it's great as always. I think the main points are shared with the other analyses too. getOrderedReportIDs and replaceAndExtractEmojis - it would be great if you could focus on those and bring them in as soon as possible.

Then the other, smaller improvements can follow in smaller, self-contained PRs/issues.

Regarding the SQL topic, I think it would be great if you could also join the discussions with Margelo and the SQLite team so you can follow what's up more closely. Also feel free to use the SQLite team's help to answer any of your questions.

@adhorodyski
Contributor

@mountiny of course, thank you so much! I'd probably just need your help with picking the right option for the slice/static/computed approach around the displayName field (it's a design decision I think).

@mountiny
Contributor

@adhorodyski thanks! Looking into that, it feels like an edge case only for Group chats. Those might have custom names too in the future, so I think the best option is the slice method, so we ensure the users' names appear in the report name.

@adhorodyski
Contributor

adhorodyski commented Feb 15, 2024

Alright, to sum this up:

  • all of the improvements around getOrderedReportIDs mentioned here, which also span across the other 3 core metrics, will be included (with the slice approach for displayName's fallback)
  • the replaceAndExtractEmojis one-liner will also make it through

When it comes to the freezing mechanism, would you like us to look into fixing it afterwards? As mentioned, this can help by a ton and strip away the whole execution time of getOrderedReportIDs where it's not necessary, and it can come in later.

Also, I'll make sure to link the Data persistence architecture thread where it makes sense in the context of the design discussions we're having across GH/Slack around SQLite. I believe having this audit's context is super valuable here.

cc @mountiny

@mountiny
Contributor

@adhorodyski I think the freezing aspect should be looked at in parallel, but with less urgency than the method improvements.

@adhorodyski
Contributor

Ok, in that case I think we'll be able to start next week, as the first PRs are already being set up. Thank you, that's exactly what we needed in this phase for this metric :)

@roryabraham
Contributor

The usefulness of the cache is almost non-existent (please see the base trace), and its removal should be considered to lower the execution time of getOrderedReportIDs (in this case by 300ms).

Assuming you're referring to this, I'd be a big fan of removing that, especially since it's not providing us much value. In my experience, ad-hoc caches created for very specific use cases such as this one add a lot of complexity to the code and can lead to bugs (e.g. you add a new dependency but don't realize you need to update the cache key).

@roryabraham
Contributor

roryabraham commented Feb 15, 2024

[the filtering] part of the function cannot be further optimised by nearly any margin as most of it represents the business logic for choosing the reports to display. As long as it all lives in the JS thread ... it can only be microoptimised

Here's an idea we've discussed a bit before... we have some business logic that's currently duplicated in the front-end and back-end:

  • What reports to display to a given user
  • Whether a report should display a green dot
  • Whether a report should display as unread
  • What a report's name should be, according to a given user
  • etc...

Furthermore, problems can occur if the front-end and back-end get out of sync in their implementations of this logic.

So what if we extract some of this core business logic into an open-source C++ library that can be used by both the front-end and the back-end? That could help:

  • Reduce redundancy in our codebase overall
  • Increase the reach of our open-source contributors
  • Offload heavy business logic such as sorting the LHN to another thread, with a very fast C++/wasm implementation (this is the part that feels relevant)

Now, I recognize I'm making some assumptions here, but I feel like this would be more worth discussing than using SQLite in the front-end as a normal relational database, mostly because it would be a relatively easy step to take from where we are today, and could provide a lot of the same value, without requiring a massive rearchitecture of our front-end data layer.

@roryabraham
Contributor

Regarding displayName, I think that the "slice approach" is the winner:

  • We don't want to change the UI/UX based solely on performance, if we can avoid it
  • I think storing the computed value in Onyx sounds like it will lead to extra complexity and be a potential source of bugs

I think a simple change could improve this significantly:

diff --git a/src/libs/ReportUtils.ts b/src/libs/ReportUtils.ts
index ebde1b1bf8..01f56849a3 100644
--- a/src/libs/ReportUtils.ts
+++ b/src/libs/ReportUtils.ts
@@ -2545,10 +2545,17 @@ function getReportName(report: OnyxEntry<Report>, policy: OnyxEntry<Policy> = nu
 
     // Not a room or PolicyExpenseChat, generate title from participants
     const participantAccountIDs = report?.participantAccountIDs ?? [];
-    const participantsWithoutCurrentUser = participantAccountIDs.filter((accountID) => accountID !== currentUserAccountID);
-    const isMultipleParticipantReport = participantsWithoutCurrentUser.length > 1;
-
-    return participantsWithoutCurrentUser.map((accountID) => getDisplayNameForParticipant(accountID, isMultipleParticipantReport)).join(', ');
+    const isMultipleParticipantReport = participantAccountIDs.length > 2;
+    const participantDisplayNames = [];
+    let i = 0;
+    while (i < participantAccountIDs.length && participantDisplayNames.length < 6) {
+        const accountID = participantAccountIDs[i];
+        if (accountID !== currentUserAccountID) {
+            participantDisplayNames.push(getDisplayNameForParticipant(accountID, isMultipleParticipantReport));
+        }
+        i++;
+    }
+    return participantDisplayNames.join(', ');
 }
 
 /**
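To spell out what the diff above does, here is the same proposed logic restated as a standalone function. This is an illustrative re-statement, not the actual App code; the cap of 6 display names comes from the diff:

```typescript
// Illustrative version of the proposed getReportName participant logic:
// walk the participant list, skip the current user, and stop as soon as
// we have enough display names to render a title.
const MAX_DISPLAY_NAMES = 6;

function buildReportTitle(
    participantAccountIDs: number[],
    currentUserAccountID: number,
    getDisplayNameForParticipant: (accountID: number, shortName: boolean) => string,
): string {
    const isMultipleParticipantReport = participantAccountIDs.length > 2;
    const participantDisplayNames: string[] = [];
    let i = 0;
    // The early exit keeps the number of display-name lookups bounded,
    // instead of mapping the entire participant list.
    while (i < participantAccountIDs.length && participantDisplayNames.length < MAX_DISPLAY_NAMES) {
        const accountID = participantAccountIDs[i];
        if (accountID !== currentUserAccountID) {
            participantDisplayNames.push(getDisplayNameForParticipant(accountID, isMultipleParticipantReport));
        }
        i++;
    }
    return participantDisplayNames.join(', ');
}
```

The win: for a report with thousands of participants, at most 6 display names are resolved rather than one per participant.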

@roryabraham
Copy link
Contributor

Addition of iouReportAmount seems unnecessary since it's not being used anywhere in the app and should be considered for removal

Happy to remove any deprecated/unused fields 👍🏼

Would love it if we could catch this with static analysis too.

@roryabraham
Copy link
Contributor

roryabraham commented Feb 15, 2024

After examining the logic, we've come to the conclusion that compareDisplayNames is being unnecessarily computed in cases where compareDates evaluates as truthy. By introducing a simple if statement we can greatly reduce the work done for this most common scenario and thus optimise this function for the hot path.

Love it, easy change for a huge win.

Another easy change we could make on top of that, coming from the docs for localeCompare:

When comparing large numbers of strings, such as in sorting large arrays, it is better to create an Intl.Collator object and use the function provided by its compare() method.
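As a sketch of what that suggestion would look like in a sort comparator (the function names here are illustrative, not actual App code):

```typescript
// Create the collator once and reuse its compare() in the comparator.
// String.prototype.localeCompare can redo locale setup on every call,
// so a shared Intl.Collator is usually much faster for large sorts.
const collator = new Intl.Collator('en');

function sortDisplayNames(displayNames: string[]): string[] {
    // Copy first so the caller's array isn't mutated by sort().
    return [...displayNames].sort((a, b) => collator.compare(a, b));
}
```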

@roryabraham
Copy link
Contributor

@adhorodyski minor tip for presenting small code changes in a before/after format – you can use a diff:

  1. Make the change in your IDE/editor
  2. Copy the diff to your clipboard: git diff HEAD | pbcopy
  3. Post it like so with triple-backticks and diff
[image: screenshot of a diff rendered in a GitHub comment]

@roryabraham
Copy link
Contributor

Good catch on the duplicate add in the emoji Trie initialization, happy to review a PR to clean that up 👍🏼

@roryabraham
Copy link
Contributor

This is not possible right now as selectors have no access to component’s props

This is actually not completely true – the second argument to a selector is the withOnyx internal state (i.e. the props passed to the component). So it won't have access to other props that have been modified by another selector, but if you just need raw Onyx data from another prop, then that's an option.

This is another case where we could stand to benefit from the (coming-soon) useOnyx hook. Using hooks we could easily create "dependent selectors" where we use useOnyx with a selector, then use the result in a second useOnyx hook invocation.
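A tiny sketch of the second-argument point, using invented shapes (this simplifies the real withOnyx selector API down to a plain function purely for illustration):

```typescript
// Simplified model of a withOnyx selector: Onyx passes the subscribed
// value plus the component's current Onyx-derived props, so a selector
// can read raw Onyx data that arrived via another key subscription.
type SelectorProps = {reportID?: string};
type ReportCollection = Record<string, {reportName: string}>;

function reportNameSelector(allReports: ReportCollection, props: SelectorProps): string | undefined {
    // props is the withOnyx internal state, i.e. raw Onyx data from other
    // subscriptions; values derived by other selectors are not available.
    if (!props.reportID) {
        return undefined;
    }
    return allReports[props.reportID]?.reportName;
}
```

With the upcoming useOnyx hook, the same dependency could instead be two chained hook calls, which would also allow depending on another selector's derived output.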

@roryabraham
Copy link
Contributor

The subscriptions in ReportActionItemParentAction are redundant, since the report can be passed down from the parent component, so they can be removed entirely in this case.

Sounds good 👍🏼

As for parentReportActions - since both ReportActionItemParentAction and ReportActionItem subscribe to that, we could consider moving that to the ReportActionsList itself.

Also sounds good 👍🏼

@adhorodyski
Copy link
Contributor

This is not possible right now as selectors have no access to component’s props

This is actually not completely true – the second argument to a selector is the withOnyx internal state (i.e. the props passed to the component). So it won't have access to other props that have been modified by another selector, but if you just need raw Onyx data from another prop, then that's an option.

This is another case where we could stand to benefit from the (coming-soon) useOnyx hook. Using hooks we could easily create "dependent selectors" where we use useOnyx with a selector, then use the result in a second useOnyx hook invocation.

Thank you for clarifying here! @jbroma I think we should once more look into this, might be helpful.

@adhorodyski
Copy link
Contributor

After examining the logic, we've come to the conclusion that compareDisplayNames is being unnecessarily computed in cases where compareDates evaluates as truthy. By introducing a simple if statement we can greatly reduce the work done for this most common scenario and thus optimise this function for the hot path.

Love it, easy change for a huge win.

Another easy change we could make on top of that, coming from the docs for localeCompare:

When comparing large numbers of strings, such as in sorting large arrays, it is better to create an Intl.Collator object and use the function provided by its compare() method.

Nice catch! Please see this other analysis by @hurali97 under this post for how much this improves the performance.

@melvin-bot melvin-bot bot added Reviewing Has a PR in review Weekly KSv2 and removed Monthly KSv2 labels Feb 26, 2024
@kirillzyusko kirillzyusko mentioned this issue Apr 2, 2024
50 tasks
@melvin-bot melvin-bot bot added Weekly KSv2 Awaiting Payment Auto-added when associated PR is deployed to production and removed Weekly KSv2 labels Apr 18, 2024
@melvin-bot melvin-bot bot changed the title [Tracking] Metric - Report screen load time (Performance audit by Callstack) [HOLD for payment 2024-04-25] [Tracking] Metric - Report screen load time (Performance audit by Callstack) Apr 18, 2024
@melvin-bot melvin-bot bot removed the Reviewing Has a PR in review label Apr 18, 2024
Copy link

melvin-bot bot commented Apr 18, 2024

Reviewing label has been removed, please complete the "BugZero Checklist".

Copy link

melvin-bot bot commented Apr 18, 2024

The solution for this issue has been 🚀 deployed to production 🚀 in version 1.4.62-17 and is now subject to a 7-day regression period 📆. Here is the list of pull requests that resolve this issue:

If no regressions arise, payment will be issued on 2024-04-25. 🎊

For reference, here are some details about the assignees on this issue:

@mountiny mountiny removed the Awaiting Payment Auto-added when associated PR is deployed to production label Apr 20, 2024
@mountiny mountiny changed the title [HOLD for payment 2024-04-25] [Tracking] Metric - Report screen load time (Performance audit by Callstack) [Tracking] Metric - Report screen load time (Performance audit by Callstack) Apr 20, 2024
@mountiny
Copy link
Contributor

I think we will create new issues for the second audit.
