
[HOLD for payment 2024-04-15] [Tracking] Metric - App Startup (Performance audit by Callstack) #35234

Closed
hurali97 opened this issue Jan 26, 2024 · 25 comments
Assignees
Labels
Awaiting Payment (Auto-added when associated PR is deployed to production), Daily KSv2

Comments

@hurali97
Contributor

Description:

As part of #33070, this issue tracks and summarises the phases for the App Startup metric. Here we will share the measurements, analysis and implementation details as we progress through the audit.

@hurali97 hurali97 added the Bug (Something is broken. Auto assigns a BugZero manager.) and Daily KSv2 labels Jan 26, 2024

melvin-bot bot commented Jan 26, 2024

Triggered auto assignment to @trjExpensify (Bug), see https://stackoverflow.com/c/expensify/questions/14418 for more details.

@hurali97
Contributor Author

Metric: App Startup
Phase: Measurement
Commit hash: f268c39
Commit hash with setup for audit: 452595f

Flow:

  1. Open the app
  2. Splash will be hidden after a few seconds
  3. Login to the app
  4. Then wait for the data to show
  5. Close the app and force stop it from the settings
  6. Open the app again
  7. Splash will be hidden after a long period
  8. Data will be visible when the splash hides

For each subsequent measurement, steps 1-3 are not repeated, which means we loop over steps 5-8 for each measurement cycle.

This way we measure the app startup for the usual flow a user faces: after a fresh install they use the app for a while and close it, and every subsequent launch then follows the same steps we follow here.

Markers:

There are already a bunch of markers in place in the Performance class, but it's missing some other valuable markers that might come in handy when analysing the measurements. Also, the ending TTI marker wasn't placed precisely: it fired even before the splash was hidden. Ideally, we want to end the marker when the splash hides.

  • We are only interested in the following markers:

    • nativeLaunch
    • nativeLaunchEnd_To_appCreationStart
    • appCreation
    • appCreationEnd_To_contentAppeared
    • contentAppeared_To_screenTTI

    And

    • TTI
  • If we combine the markers other than TTI, their sum should be equivalent to TTI (a quick check for this is sketched below). This gives us a chance to analyse each phase of app startup and pinpoint the root cause for potential fixes.
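For clarity, below is a minimal sketch of the kind of sanity check this enables. The `measurements` map and the way durations are read out of the Performance class are assumptions for illustration, not the actual audit tooling:

// Hypothetical sketch: check that the phase markers add up to TTI.
// `measurements` is assumed to be a map of marker name -> duration in milliseconds.
const phaseMarkers = [
    'nativeLaunch',
    'nativeLaunchEnd_To_appCreationStart',
    'appCreation',
    'appCreationEnd_To_contentAppeared',
    'contentAppeared_To_screenTTI',
];

function verifyMarkers(measurements: Record<string, number>) {
    const sumOfPhases = phaseMarkers.reduce((total, marker) => total + (measurements[marker] ?? 0), 0);
    const tti = measurements.TTI ?? 0;

    // A large difference means some portion of startup isn't covered by the phases above.
    console.log(`Sum of phases: ${sumOfPhases} ms, TTI: ${tti} ms, difference: ${tti - sumOfPhases} ms`);
}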

Measurements:

Screenshot 2024-01-26 at 4 46 01 PM

We see that TTI takes about 50 seconds, which means that for that period the user can't interact with the app. This reflects the use case of a really heavy account and can be considered a worst-case scenario. Each improvement targeting this scenario will benefit other users as well.

@mountiny
Contributor

Thanks!

@mountiny mountiny added Monthly KSv2 and removed Daily KSv2 labels Jan 26, 2024
@trjExpensify
Contributor

@mountiny what am I doing with this bud?

@mountiny mountiny removed the Bug (Something is broken. Auto assigns a BugZero manager.) label Jan 29, 2024
@mountiny
Contributor

Nothing for now, thanks! The wrong labels were used

@hurali97
Contributor Author

hurali97 commented Feb 5, 2024

Analysis: Replace localeCompare with Intl.Collator

We start off by analysing a trace recorded with the [hermes release profiler](https://github.com/margelo/react-native-release-profiler) in a release build on a really heavy account, which has more than 15k reports.
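For reference, a trace like this can be captured roughly as follows. The `startProfiling`/`stopProfiling` calls come from the library's documented API, but where they are wired into the app here is an assumption, not the setup from the audit commit:

import {startProfiling, stopProfiling} from 'react-native-release-profiler';

// Hypothetical wiring (not the audit setup): start profiling as early as possible in the app entry point...
startProfiling();

// ...and stop once the screen is interactive, e.g. where the TTI marker ends.
async function onScreenTTI() {
    // Passing `true` is assumed to save the trace to the device's Downloads folder on Android;
    // verify against the installed version of the library.
    const tracePath = await stopProfiling(true);
    console.log(`Hermes profile saved to ${tracePath}`);
}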

The measurements from the previous phase amount to 48.5 seconds for TTI.

While analysing the hermes profile trace we realised that the function getOrderedReportIDs takes about 50% of the TTI. Diving deeper shows that there is a sorting mechanism in the getOrderedReportIDs function which calls the String.localeCompare() method, and this is the root cause of the bottleneck in getOrderedReportIDs.

We sort 4 arrays in the getOrderedReportIDs function. One of these sorts looks as follows:

draftReports.sort((a, b) => (a?.displayName && b?.displayName ? a.displayName.toLowerCase().localeCompare(b.displayName.toLowerCase()) : 0));

Exploring the slowness of the String.localeCompare() method, we looked for resources in Hermes itself. There we found this issue, which explains why it's pretty slow, especially on large data. Based on the suggestions, the best option is to use Intl.Collator: create its instance once and then re-use that instance to compare locales.

Below is how this would look in practice:

let currentLocale = 'en';
Onyx.connect({
    key: ONYXKEYS.NVP_PREFERRED_LOCALE,
    callback: (locale) => {
        if (!locale) {
            return;
        }

        currentLocale = locale;
    },
});

const collator = new Intl.Collator(currentLocale);

const localeCompare = (str1, str2) => collator.compare(str1.toLowerCase(), str2.toLowerCase());

function getOrderedReportIDs (...args) {

...

draftReports.sort((a, b) => (a?.displayName && b?.displayName ? localeCompare(a.displayName, b.displayName) : 0));

...

}

We can further improve this by removing the str.toLowerCase() calls and instead passing sensitivity options to Intl.Collator:

const collator = new Intl.Collator(currentLocale, {usage: 'sort', sensitivity: 'base'});
const localeCompare = (str1, str2) => collator.compare(str1, str2);

With these improvements, we recorded the hermes profile trace again and the TTI was reduced drastically by ~50% 🚀


This patch was also tested against other metrics like Send A Message and Report Screen Load Time, and the results are jaw-dropping 😮 For example, the Send A Message result dropped from 27 seconds to 5 seconds of total execution time 👏

@hurali97
Contributor Author

hurali97 commented Feb 5, 2024

Analysis: Replace Intl.Collator with basic comparison

Now we know that `Intl.Collator` brings remarkable performance improvements, but given our huge data set it still takes a noticeable amount of time. For example, we still see about ~1.7 seconds spent in sorting for `getOrderedReportIDs`.

We can improve this further by using a basic comparison; the only downside is that it might not play well with non-English locales. Since we are only comparing displayName values, which will probably only be in English, this should be acceptable; otherwise, we can do a regression test with the other locales supported by Expensify.

Following is how it will look in practice:

const compareLocale = (str1: string, str2: string) => {
    if (str1 === str2) {
        return 0;
    }

    if (str1 < str2) {
        return -1;
    }

    return 1;
};

const localeCompare = (str1: string, str2: string) => compareLocale(str1.toLowerCase(), str2.toLowerCase());

That's all we need to do on top of the improvements from the previous analysis. This speeds things up and brings us about 2-3 seconds of reduction in the App Startup Time.

@hurali97
Contributor Author

hurali97 commented Feb 5, 2024

Analysis: Reduce Redundant Operations

Further analysing the hermes profile trace unearths that some operations are being repeated, since we have `filter`, `forEach` and `sort` in place for `getOrderedReportIDs`. The following operations can be improved to avoid redundancy and reduce the execution time of the `getOrderedReportIDs` function:
  • Extract translation calls out of the function so we don't translate the same value on each iteration of the loop
    • Some places translate a static value inside the loop; for example, getDisplayNameOrDefault has Localize.translateLocal('common.hidden') invoked each time the function is called. This is overkill, so a simple solution is to extract the call out of the function. The same applies to getDisplayNameForParticipant, getPolicyName and getReportName.

Below is how we can do that in practice:

// Previously
function getDisplayNameOrDefault(...args) {
    ...
    const fallbackValue = shouldFallbackToHidden ? Localize.translateLocal('common.hidden') : '';
    ...
}

// After Optimisation
const hiddenText = Localize.translateLocal('common.hidden');
function getDisplayNameOrDefault(...args) {
    ...
    const fallbackValue = shouldFallbackToHidden ? hiddenText : '';
    ...
}

  • Check if the number doesn't contain the SMS domain; if it doesn't, don't pass it to the awesome-phone library
    • The parsePhoneNumber library is really costly to use: it takes about 30-35 ms per call in combination with Str.removeSMSDomain. While analysing the localePhoneNumber function, we realised that parsePhoneNumber is also called on strings that are not phone numbers at all. To improve this we can:

      • Do an early return: if the number doesn't contain the SMS domain expensify.sms, we return the value as-is. For example, the value can be concierge@expensify.com, which doesn't have expensify.sms in it, so there is no point in going further.

In practice, the following code achieves this:

const SMS_DOMAIN_PATTERN = 'expensify.sms';
function formatPhoneNumber(number) {
    ...
    // Do not parse the string if it's not an Expensify SMS login
    if (number.indexOf(SMS_DOMAIN_PATTERN) === -1) {
        return number;
    }
    const numberWithoutSMSDomain = Str.removeSMSDomain(number);
    const parsedPhoneNumber = parsePhoneNumber(numberWithoutSMSDomain);
    ...
}

  • Replace the RegExp-based replace with the string substring method

Below is how it looks in practice:

// Previously
function getDisplayNameOrDefault(...args) {
    const displayName = passedPersonalDetails?.displayName ? passedPersonalDetails.displayName.replace(CONST.REGEX.MERGED_ACCOUNT_PREFIX, '') : '';
    ...
}

// After Optimisation
function getDisplayNameOrDefault(...args) {
    let displayName = passedPersonalDetails?.displayName ?? '';
    if (displayName.startsWith('MERGED_')) {
        displayName = displayName.substring(8);
    }
    ...
}

All of this combined brings us about ~3 seconds of reduction in the App Startup Time.

@hurali97
Contributor Author

hurali97 commented Feb 5, 2024

Analysis: Improve Onyx’s isKeyMatch

Onyx's `isKeyMatch` function is used in a lot of places, so it's important that it's really performant. As of now, around 40 ms are consumed by this function alone. Below is how things currently look:
// In Onyx.js
function isKeyMatch(configKey, key) {
    return isCollectionKey(configKey) ? Str.startsWith(key, configKey) : configKey === key;
}

// In Str.js
function startsWith(haystack, needle) {
    return _.isString(haystack) && _.isString(needle) && haystack.startsWith(needle);
}

We are using underscore to check whether the arguments are strings. We can improve this by removing those string checks and calling startsWith directly, since keys are always strings. We can do this as follows:

function isKeyMatch(configKey, key) {
  return isCollectionKey(configKey) ? key.startsWith(configKey) : configKey === key;
}

This reduces the execution time for this function by ~50% 👏

@hurali97
Contributor Author

hurali97 commented Feb 5, 2024

Analysis: Avoid creating new Array in getAllKeys in Onyx

In `OnyxCache.js` we have the `getAllKeys` method, which upon invocation generates a new Array from the set of storage keys and returns it. Below is how it currently looks:
getAllKeys() {
  return Array.from(this.storageKeys);
}

In the App Startup cycle, this method is executed for every Onyx.connect call, and we have roughly more than 30 connections during App Startup. As a result, a new array is created from the set each time. We can improve this by tracking whether the keys have changed: if they have, we create a new array and store it in a variable; on subsequent getAllKeys invocations we simply return that variable as long as no keys have changed.

Let's see how it looks in action:

// In OnyxCache.js
constructor() {
    ...
    this.keysArray = [];
    this.hasChanged = false;
    ...
}

getAllKeys() {
    if (this.hasChanged) {
        this.keysArray = Array.from(this.storageKeys);
    }
    this.hasChanged = false;
    return this.keysArray;
}

addKey(key) {
    ...
    this.hasChanged = true;
}

merge(data) {
    ...
    this.hasChanged = true;
    const storageKeys = this.getAllKeys();
    ...
}

drop(key) {
    ...
    this.hasChanged = true;
}

This brings us about 1 second of reduction in the App Startup Time 🚀

@hurali97
Contributor Author

hurali97 commented Feb 5, 2024

Analysis: Reduce Report connections to Onyx

With each `onyx.connect` call we add additional overhead. Whenever we call it, under the hood `getCollectionDataAndSendAsObject` is called, which appears to be costly, especially for collections with a huge number of keys like `report_`. In our case, there were 15k keys for this collection and it was being called for `report_` more than 10 times. To test the effect of this, I manually commented out all of the `onyx.connect` calls to `report_` and only kept 2-3 of them alive. The app startup went down by ~2.5 seconds, which is huge.
  • For possible solutions, we can try the singleton approach combined with event-driven data flow. This allows us to have two classes, each with its own connection to Onyx, the only difference being waitForCollectionCallback. Then we can use the addListener method wherever we would normally call onyx.connect for the report collection.

Below is how this class would look:

import ONYXKEYS from "@src/ONYXKEYS";
import { NativeEventEmitter, NativeModule } from "react-native";
import Onyx from "react-native-onyx";

const nativeModuleTemplate: NativeModule = {
    addListener: () => {},
    removeListeners: () => {},
};

const eventTypeWait = 'observe_report_wait_';
const eventTypeWithoutWait = 'observe_report_';

class BaseClass {
    /**
     * @property eventHandler - the NativeEventEmitter instance
     */
    protected eventHandler: NativeEventEmitter;
    protected constructor() {
        this.eventHandler = new NativeEventEmitter(nativeModuleTemplate);
    }
}

class ReportCollectionObserverWithWait extends BaseClass {
    private static instance: ReportCollectionObserverWithWait;
    protected constructor() {
        super();
        ReportCollectionObserverWithWait.instance = this;
        Onyx.connect({
            key: ONYXKEYS.COLLECTION.REPORT,
            waitForCollectionCallback: true,
            callback: (value) => {
                this.eventHandler.emit(eventTypeWait, value);
            },
        });
    }

    public static getInstance(): ReportCollectionObserverWithWait {
        // Ensure singleton instance
        return ReportCollectionObserverWithWait.instance ?? new ReportCollectionObserverWithWait();
    }

    addListener(listener: (event: unknown) => void, context?: unknown) {
        this.eventHandler.addListener(eventTypeWait, listener, Object(context));
    }
}

class ReportCollectionObserverWithOutWait extends BaseClass {
    private static instance: ReportCollectionObserverWithOutWait;
    protected constructor() {
        super();
        ReportCollectionObserverWithOutWait.instance = this;
        Onyx.connect({
            key: ONYXKEYS.COLLECTION.REPORT,
            callback: (value, key) => {
                this.eventHandler.emit(eventTypeWithoutWait, value, key);
            },
        });
    }

    public static getInstance(): ReportCollectionObserverWithOutWait {
        // Ensure singleton instance
        return ReportCollectionObserverWithOutWait.instance ?? new ReportCollectionObserverWithOutWait();
    }

    addListener(listener: (event: unknown) => void, context?: unknown) {
        this.eventHandler.addListener(eventTypeWithoutWait, listener, Object(context));
    }
}

class ReportCollectionObserver {
    public static getInstance(waitForCollectionCallback?: boolean): ReportCollectionObserverWithWait | ReportCollectionObserverWithOutWait {
        if (waitForCollectionCallback) {
            return ReportCollectionObserverWithWait.getInstance();
        }
        return ReportCollectionObserverWithOutWait.getInstance();
    }
}

export default ReportCollectionObserver;

The usage of this class looks like:

// For waitForCollectionCallback: true
ReportCollectionObserver.getInstance(true).addListener((value) => {
    allReports = value as OnyxCollection<Report>;
});

// For waitForCollectionCallback: false|undefined
ReportCollectionObserver.getInstance().addListener((report, key) => {
    if (!key || !report) {
        return;
    }

    const reportID = CollectionUtils.extractCollectionItemID(key);
    allReports[reportID] = report;
});

This brings us about ~2.5 seconds of reduction in the App Startup Time. We can further reduce the app startup time by 1-2 seconds if we follow the same approach for the other collections that use onyx.connect, like transactions_, policy_ and reportActions_ (see the sketch below).
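As a sketch of what "the same approach for other collections" could look like, a single generic observer keyed by collection name avoids duplicating the two classes above. This is illustrative only; the event-name prefix and the map-based singleton bookkeeping are assumptions, not audited code:

import {NativeEventEmitter, NativeModule} from 'react-native';
import Onyx from 'react-native-onyx';

const nativeModuleTemplate: NativeModule = {
    addListener: () => {},
    removeListeners: () => {},
};

class CollectionObserver {
    // One shared observer (and therefore a single Onyx.connect) per collection key.
    private static instances = new Map<string, CollectionObserver>();

    private eventHandler = new NativeEventEmitter(nativeModuleTemplate);

    private constructor(private collectionKey: string) {
        // Single Onyx connection per collection; listeners subscribe to the emitted event instead.
        Onyx.connect({
            key: collectionKey,
            waitForCollectionCallback: true,
            callback: (value) => this.eventHandler.emit(`observe_${collectionKey}`, value),
        });
    }

    public static getInstance(collectionKey: string): CollectionObserver {
        let instance = CollectionObserver.instances.get(collectionKey);
        if (!instance) {
            instance = new CollectionObserver(collectionKey);
            CollectionObserver.instances.set(collectionKey, instance);
        }
        return instance;
    }

    public addListener(listener: (value: unknown) => void) {
        this.eventHandler.addListener(`observe_${this.collectionKey}`, listener);
    }
}

export default CollectionObserver;

// Usage, e.g. for the policy_ collection:
// CollectionObserver.getInstance(ONYXKEYS.COLLECTION.POLICY).addListener((policies) => { ... });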

@hurali97
Contributor Author

hurali97 commented Feb 5, 2024

Analysis: Avoid setting cache value in loop in Onyx

In `Onyx.js` we have the `getAllKeys` function, which gets the keys from storage and adds them to `OnyxCache` in a loop. Below is what we currently have:
// In Onyx.js
function getAllKeys() {
    ...
    const promise = Storage.getAllKeys().then((keys) => {
        _.each(keys, (key) => cache.addKey(key));
        return keys;
    });
    ...
}

This adds around ~37 ms to the App Startup. We can improve this by setting all the keys at once:

// In lib/onyx.js -> getAllKeys
const promise = Storage.getAllKeys().then((keys) => {
   cache.addAllKeys(keys);
   return keys;
});

// In lib/OnyxCache.js, add a new method
addAllKeys(keys) {
   const tempKeys = [...keys];
   this.storageKeys = new Set(tempKeys);
   this.keysArray = tempKeys;
}

Since we are already getting all the keys from storage, we can set them in the cache in one go instead of adding them individually. The Onyx getAllKeys function now takes around 4-5 ms to execute 👏

@hurali97
Contributor Author

hurali97 commented Feb 5, 2024

Analysis: Optimise getCollectionDataAndSendAsObject in Onyx

In `getCollectionDataAndSendAsObject` we fetch the missing keys from SQLite storage using `Promise.all` and then set them in the cache as well. This function is called for every `Onyx.connect` that has `waitForCollectionCallback: true` and for every `withOnyx` call.

Below is what we have now:

function getCollectionDataAndSendAsObject(matchingKeys, mapping) {
    Promise.all(_.map(matchingKeys, (key) => get(key)))
        .then((values) =>
            _.reduce(
                values,
                (finalObject, value, i) => {
                    // eslint-disable-next-line no-param-reassign
                    finalObject[matchingKeys[i]] = value;
                    return finalObject;
                },
                {},
            ),
        )
        .then((val) => sendDataToConnection(mapping, val, undefined, true));
}

The get method will fetch a key from Storage if it isn't already in the OnyxCache. We can improve this by leveraging Storage.multiGet for all the missing keys at once and then doing a cache.merge to add the new keys to the cache.

This will look as follows:

function getCollectionDataAndSendAsObject(matchingKeys, mapping) {
    const missingKeys = [];
    const pendingTasks = [];
    const pendingKeys = [];

    const data = {};

    matchingKeys.forEach((key) => {
        const cacheValue = cache.getValue(key);
        const pKey = `get:${key}`;
        if (cacheValue) {
            data[key] = cacheValue;
        } else if (cache.hasPendingTask(pKey)) {
            pendingTasks.push(cache.getTaskPromise(pKey));
            pendingKeys.push(key);
        } else {
            missingKeys.push(key);
        }
    });

    Promise.all(pendingTasks)
        .then((values) => {
            values.forEach((value, index) => {
                data[pendingKeys[index]] = value;
            });

            return Promise.resolve();
        })
        .then(() => {
            if (missingKeys.length === 0) {
                // Resolve with an empty array so the next step can safely check values.length
                return Promise.resolve([]);
            }
            return Storage.multiGet(missingKeys);
        })
        .then((values) => {
            if (values.length === 0) {
                return Promise.resolve();
            }

            const temp = {};
            values.forEach((value) => {
                data[value[0]] = value[1];
                temp[value[0]] = value[1];
            });
            cache.merge(temp);
            return Promise.resolve();
        })
        .finally(() => {
            sendDataToConnection(mapping, data, undefined, true);
        });
}

This brings us about ~2 seconds of reduction in the app startup time. The gain also depends on the number of missing keys: the more keys are missing, the higher the gain from this refactor. 🚀

@hurali97
Contributor Author

hurali97 commented Feb 5, 2024

Analysis: Memoize SidebarLinksData

From the hermes profile trace, we see that `getOrderedReportIDs` called from `SidebarLinksData` takes about ~8 seconds across 3 occurrences, which means `getOrderedReportIDs` is called roughly 3 times. We can reduce this to 2 calls by adding memoization to `SidebarLinksData`. This cuts the app startup by ~2.5 seconds, which is a huge reduction.

Details of the implementation can be seen in the commit; a rough sketch of the idea is shown below.
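The sketch assumes a functional component: recompute the ordered report IDs only when their inputs change, and skip re-renders when the relevant props are deeply equal. The prop names, import paths and the `getOrderedReportIDs` signature below are illustrative assumptions, not the actual component code:

import React, {memo, useMemo} from 'react';
import lodashIsEqual from 'lodash/isEqual';
import SidebarUtils from '@libs/SidebarUtils';
import SidebarLinks from './SidebarLinks';

// Illustrative props only; the real component receives more data via Onyx.
type SidebarLinksDataProps = {
    currentReportID: string;
    chatReports: Record<string, unknown>;
    priorityMode: string;
};

function SidebarLinksData({currentReportID, chatReports, priorityMode}: SidebarLinksDataProps) {
    // Recompute the ordered report IDs only when the inputs actually change,
    // instead of on every render of the sidebar.
    const optionListItems = useMemo(
        () => SidebarUtils.getOrderedReportIDs(currentReportID, chatReports, priorityMode),
        [currentReportID, chatReports, priorityMode],
    );

    return <SidebarLinks optionListItems={optionListItems} currentReportID={currentReportID} />;
}

// Skip the re-render (and the expensive ordering) entirely when the relevant props are deeply equal.
export default memo(SidebarLinksData, (prevProps, nextProps) =>
    prevProps.currentReportID === nextProps.currentReportID &&
    prevProps.priorityMode === nextProps.priorityMode &&
    lodashIsEqual(prevProps.chatReports, nextProps.chatReports),
);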

@hurali97
Contributor Author

hurali97 commented Feb 5, 2024

Analysis: Remove IntlPolyfill

Analysing the hermes profile trace, we see that `IntlPolyfill` takes about 450 ms of the App Startup Time. We can safely remove it for the native platforms, Android and iOS. This probably duplicates the finding [here](https://github.com//issues/33070#issuecomment-1877329192). Credits to @adhorodyski for finding this originally 👏

Below is what we have now:

// In src/libs/IntlPolyfill/index.native.ts

const intlPolyfill: IntlPolyfill = () => {
    // Native devices require extra polyfills
    require('@formatjs/intl-getcanonicallocales/polyfill');
    require('@formatjs/intl-locale/polyfill');
    require('@formatjs/intl-pluralrules/polyfill');
    polyfillNumberFormat();
    polyfillDateTimeFormat();
    polyfillListFormat();
};

export default intlPolyfill;

Since Hermes now covers almost all of the Intl cases we need, we can safely remove the polyfills from here after testing 👍 A sketch of the trimmed file follows.
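For illustration, the trimmed native entry point could end up looking something like this, assuming testing confirms Hermes covers the Intl APIs we rely on (a sketch of the end state, not the actual change):

// In src/libs/IntlPolyfill/index.native.ts (sketch of the trimmed file)

const intlPolyfill: IntlPolyfill = () => {
    // Intentionally a no-op: assuming Hermes ships the Intl APIs the app relies on
    // (getCanonicalLocales, Locale, PluralRules, NumberFormat, DateTimeFormat, ListFormat),
    // no extra polyfills are loaded on Android and iOS.
};

export default intlPolyfill;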

@hurali97
Contributor Author

hurali97 commented Feb 5, 2024

Final Measurements After Analysis:

Before we started the analysis, TTI was about 48.5 seconds.

After implementing all of the analysis items, it is around 11.372 seconds.

It's great that we have managed to reduce the App Startup Time significantly, but there's still much to do. Of these 11.372 seconds, most of the time is spent in Onyx and getOrderedReportIDs, which we can reduce further in the implementation phase; it's going to be tricky yet fun 🚀

So please let us know what you think of the current analysis and whether you'd like to add to or change any of the solutions we proposed. We will be very happy to hear your thoughts 💯

cc: @mountiny @roryabraham

@mountiny
Contributor

mountiny commented Feb 5, 2024

@hurali97 @kacper-mikolajczak @adhorodyski I love all the suggestions, it's really impressive.

I guess it would be great to start as soon as possible with the biggest hitters, like replacing localeCompare, though I think we need to make sure that whatever we choose plays well with the Spanish locale too!

Check if number doesn’t contain SMS code, don’t pass it to awesome-phone library

This one is also quite a good place to look into; we recently had quite a nasty regression where the app was unusable when the Search page was opened because of this regex/phone number checking.

@hurali97
Contributor Author

hurali97 commented Feb 8, 2024

This one was missed in the earlier posts:

Analysis: Remove cache and Optimise getOrderedReportIDs

getOrderedReportIDs is still costly. It basically has 4 heavy scenarios:

  • Stringifying the huge arguments using JSON.stringify, which is used as the cache key

  • Filter all reports, which uses ReportUtils.shouldReportBeInOptionList, getLastVisibleMessage and getLastVisibleAction

    • Internally, some costly calculations are performed: we first merge the reportActions, then filter for visibleReportActions, then sort the reportActions and finally access only the first item and return it. By the looks of this function, it can be optimised (see the single-pass sketch after this list).
  • A forEach loop to generate the different arrays that will be used later for sorting. getReportName consumes most of the time here. This function has a call at the bottom which takes up most of the execution time of the forEach loop:

return participantsWithoutCurrentUser.map((accountID) => getDisplayNameForParticipant(accountID, isMultipleParticipantReport)).join(', ');
  • Then we have the sorting of 4 different arrays, which we have already sped up by using Intl.Collator or the basic comparison.
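On the getLastVisibleAction point above, a cheaper shape would be a single pass over the report actions instead of merge → filter → sort → take-first. A minimal sketch, assuming actions carry an ISO `created` timestamp and using a stubbed visibility predicate:

type ReportAction = {created: string; pendingAction?: string};

// Placeholder visibility check; the real predicate lives in ReportActionsUtils and is more involved.
function isVisibleAction(action: ReportAction): boolean {
    return action.pendingAction !== 'delete';
}

// Hypothetical single-pass alternative: find the latest visible report action directly,
// instead of merging, filtering, sorting and then taking only the first element.
function getLastVisibleActionFast(actions: Record<string, ReportAction>): ReportAction | undefined {
    let latest: ReportAction | undefined;
    Object.values(actions).forEach((action) => {
        if (!isVisibleAction(action)) {
            return;
        }
        // ISO 8601 timestamps compare correctly as plain strings.
        if (!latest || action.created > latest.created) {
            latest = action;
        }
    });
    return latest;
}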

About Cache Mechanism:

Stringifying for the cache key is pretty much useless, as the cache is only read if hasInitialReportActions is true, which is not the case in the App Startup flow. Below is what we are referring to:

const cachedReportsKey = JSON.stringify(.....);

// Check if the result is already in the cache
const cachedIDs = reportIDsCache.get(cachedReportsKey);
if (cachedIDs && hasInitialReportActions) {
    return cachedIDs;
}

Now, in App Startup, using JSON.stringify doesn't seem helpful. If we really care about keeping it for the edge cases, we can use the following approach to generate the stringified key:

let cachedReportsKey = '';

if (hasInitialReportActions) {
     cachedReportsKey = JSON.stringify(.....);
}

const cachedIDs = reportIDsCache.get(cachedReportsKey);
if (cachedIDs) {
    return cachedIDs;
}

However, placing the cache mechanism in getOrderedReportIDs doesn't add any value, since the arguments used as the cache key are so diverse that they are unlikely to repeat. It's better to remove the cache mechanism from this function altogether.

@hurali97
Contributor Author

hurali97 commented Feb 8, 2024

> @hurali97 @kacper-mikolajczak @adhorodyski I love all the suggestions, it's really impressive.
>
> I guess it would be great to start as soon as possible with the biggest hitters, like replacing localeCompare, though I think we need to make sure that whatever we choose plays well with the Spanish locale too!
>
> Check if number doesn’t contain SMS code, don’t pass it to awesome-phone library
>
> This one is also quite a good place to look into; we recently had quite a nasty regression where the app was unusable when the Search page was opened because of this regex/phone number checking.

@mountiny Agree with you 👍 We will need to test the cases associated with each analysis item. I will sort them in the order that brings the most improvement, and then we can pick the action items and proceed.

@muttmuure muttmuure self-assigned this Feb 13, 2024

melvin-bot bot commented Mar 29, 2024

Triggered auto assignment to @mountiny, see https://stackoverflow.com/c/expensify/questions/7972 for more details.

@melvin-bot melvin-bot bot added the Weekly KSv2 and Awaiting Payment (Auto-added when associated PR is deployed to production) labels and removed the Weekly KSv2 label Apr 8, 2024
@melvin-bot melvin-bot bot changed the title [Tracking] Metric - App Startup (Performance audit by Callstack) [HOLD for payment 2024-04-15] [Tracking] Metric - App Startup (Performance audit by Callstack) Apr 8, 2024
@melvin-bot melvin-bot bot removed the Reviewing Has a PR in review label Apr 8, 2024

melvin-bot bot commented Apr 8, 2024

Reviewing label has been removed, please complete the "BugZero Checklist".


melvin-bot bot commented Apr 8, 2024

The solution for this issue has been 🚀 deployed to production 🚀 in version 1.4.60-13 and is now subject to a 7-day regression period 📆. Here is the list of pull requests that resolve this issue:

If no regressions arise, payment will be issued on 2024-04-15. 🎊

@melvin-bot melvin-bot bot added Daily KSv2 and removed Weekly KSv2 labels Apr 14, 2024

melvin-bot bot commented Apr 15, 2024

Skipping the payment summary for this issue since all the assignees are employees or vendors. If this is incorrect, please manually add the payment summary SO.

@melvin-bot melvin-bot bot added the Overdue label Apr 16, 2024
@mountiny
Contributor

@hurali97 @adhorodyski This one can be closed now as we are working on the new audit, right?
