
Network: Persisted Requests Queue - clear persisted requests after the response #6556

Merged
merged 13 commits on Jan 31, 2022
3 changes: 3 additions & 0 deletions src/CONST.js
@@ -275,6 +275,9 @@ const CONST = {
METHOD: {
POST: 'post',
},
MAX_PERSISTED_REQUEST_RETRIES: 10,
PROCESS_REQUEST_DELAY_MS: 1000,
SUCCESS_CODE: 200,
},
NVP: {
IS_FIRST_TIME_NEW_EXPENSIFY_USER: 'isFirstTimeNewExpensifyUser',
6 changes: 6 additions & 0 deletions src/libs/API.js
@@ -16,6 +16,7 @@ import setSessionLoadingAndError from './actions/Session/setSessionLoadingAndErr
let isAuthenticating;
let credentials;
let authToken;
let currentUserEmail;

function checkRequiredDataAndSetNetworkReady() {
if (_.isUndefined(authToken) || _.isUndefined(credentials)) {
@@ -37,6 +38,7 @@ Onyx.connect({
key: ONYXKEYS.SESSION,
callback: (val) => {
authToken = lodashGet(val, 'authToken', null);
currentUserEmail = lodashGet(val, 'email', null);
checkRequiredDataAndSetNetworkReady();
},
});
@@ -82,6 +84,10 @@ function addDefaultValuesToParameters(command, parameters) {
// Setting api_setCookie to false will ensure that the Expensify API doesn't set any cookies
// and prevents interfering with the cookie authToken that Expensify classic uses.
finalParameters.api_setCookie = false;

// Unless an email is already set, include the current user's email in every request so it also appears in the server logs
finalParameters.email = lodashGet(parameters, 'email', currentUserEmail);
Comment on lines +88 to +89

Contributor Author (kidroca):
This was originally part of the network "forEach" here:

Network.js

        // If we haven't passed an email in the request data, set it to the current user's email
        if (email && _.isEmpty(requestEmail)) {
            requestData.email = email;
        }

        const finalParameters = _.isFunction(enhanceParameters)
            ? enhanceParameters(queuedRequest.command, requestData)
            : requestData;

Here (addDefaultValuesToParameters) seems to be a better place for this logic.


return finalParameters;
}
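For illustration, here is a hedged sketch of how this default reaches every outgoing request, assuming addDefaultValuesToParameters is the enhancer that API.js registers with Network.registerParameterEnhancer (that registration call is an assumption and is not shown in this diff):

    // Assumed wiring in API.js (not part of this diff)
    Network.registerParameterEnhancer(addDefaultValuesToParameters);

    // Network.js then applies the enhancer to requests from both queues via processRequest:
    const finalParameters = _.isFunction(enhanceParameters)
        ? enhanceParameters(request.command, request.data) // -> addDefaultValuesToParameters adds email, api_setCookie, ...
        : request.data;

With the email fallback living in the enhancer, both the main queue and the persisted-requests queue pick it up without duplicating the logic.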

159 changes: 73 additions & 86 deletions src/libs/Network.js
@@ -9,7 +9,9 @@ import createCallback from './createCallback';
import * as NetworkRequestQueue from './actions/NetworkRequestQueue';

let isReady = false;
let isOffline = false;
let isQueuePaused = false;
let persistedRequestsQueueRunning = false;
Contributor:

It's interesting that this value is sort of the inverse of isQueuePaused. I think we should try to keep these two things consistent, so maybe one of the following two options:

let isQueuePaused = false;
let isOfflineQueuePaused = true;

or

let isMainQueueRunning = true;
let isOfflineQueueRunning = false;

Contributor Author (kidroca):

The name comes from this usage and from readability reasons.
This method starts something rather than making it not paused:

function flushPersistedRequestsQueue() {
    if (persistedRequestsQueueRunning) {
        return;
    }

    persistedRequestsQueueRunning = true;

vs

function flushPersistedRequestsQueue() {
    if (!persistedRequestsQueuePaused) {
        return;
    }

   persistedRequestsQueuePaused = false;

running is the "special case" for this queue, while paused is the special case for the regular queue

I'm reluctant to refactor isQueuePaused to isMainQueueRunning: it would change usages and might affect their intent as well, and it also kind of sounds like we're online too.


// Queue for network requests so we don't lose actions done by the user while offline
let networkRequestQueue = [];
@@ -26,80 +28,84 @@ const [onResponse, registerResponseHandler] = createCallback();
const [onError, registerErrorHandler] = createCallback();
const [onRequestSkipped, registerRequestSkippedHandler] = createCallback();

let didLoadPersistedRequests;
let isOffline;
function processRequest(request) {
const finalParameters = _.isFunction(enhanceParameters)
? enhanceParameters(request.command, request.data)
: request.data;

const PROCESS_REQUEST_DELAY_MS = 1000;
onRequest(request, finalParameters);
return HttpUtils.xhr(request.command, finalParameters, request.type, request.shouldUseSecure);
}

/**
* Process the offline NETWORK_REQUEST_QUEUE
* @param {Array<Object> | null} persistedRequests - Requests
*/
function processOfflineQueue(persistedRequests) {
// NETWORK_REQUEST_QUEUE is shared across clients, thus every client will have a similar copy of
// NETWORK_REQUEST_QUEUE. It is very important to only process the queue from leader client
// otherwise requests will be duplicated.
// We only process the persisted requests when
// a) Client is leader.
// b) User is online.
// c) requests are not already loaded,
// d) When there is at least one request
if (!ActiveClientManager.isClientTheLeader()
|| isOffline
|| didLoadPersistedRequests
|| !persistedRequests
|| !persistedRequests.length) {
function processPersistedRequestsQueue() {
const persistedRequests = NetworkRequestQueue.getPersistedRequests();

// This sanity check is also a recursion exit point
if (isOffline || _.size(persistedRequests) === 0) {
return Promise.resolve();
}

const tasks = _.map(persistedRequests, request => processRequest(request)
.then((response) => {
if (response.jsonCode !== CONST.NETWORK.SUCCESS_CODE) {
throw new Error('Persisted request failed');
}

NetworkRequestQueue.removeRetryableRequest(request);
})
.catch(() => {
const retryCount = NetworkRequestQueue.incrementRetries(request);
if (retryCount >= CONST.NETWORK.MAX_PERSISTED_REQUEST_RETRIES) {
// Request failed too many times; remove it from persisted storage
NetworkRequestQueue.removeRetryableRequest(request);
}
}));

// Do a recursive call in case the queue is not empty after processing the current batch
return Promise.all(tasks)
.then(processPersistedRequestsQueue);
}
Comment on lines +72 to +75

Contributor Author (kidroca):

We can have remaining requests when

  • a request failed for some reason and was not removed from the queue, as seen in the catch block
  • we went offline again, which would also lead to the catch block above. In that case the queue still stops and will be re-triggered once we're back online; more requests can be added to it in the meantime

Not sure if we should capture that in a comment or something.
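If it helps, a possible wording for such a comment (a suggestion only, not part of the PR), placed above the recursive call:

    // Requests can remain in the queue after this batch when:
    // 1) a request failed and was not removed (see the catch block above), or
    // 2) we went offline again mid-flight, which also ends up in the catch block; the queue
    //    then stops and is re-triggered once we're back online, and more requests may have
    //    been persisted in the meantime.
    return Promise.all(tasks)
        .then(processPersistedRequestsQueue);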


function flushPersistedRequestsQueue() {
if (persistedRequestsQueueRunning) {
return;
}

// Queue processing expects handlers but due to we are loading the requests from Storage
// we just noop them to ignore the errors.
_.each(persistedRequests, (request) => {
request.resolve = () => {};
request.reject = () => {};
});
// NETWORK_REQUEST_QUEUE is shared across clients, thus every client/tab will have a copy
// It is very important to only process the queue from leader client otherwise requests will be duplicated.
if (!ActiveClientManager.isClientTheLeader()) {
return;
}

// Merge the persisted requests with the requests in memory then clear out the queue as we only need to load
// this once when the app initializes
networkRequestQueue = [...networkRequestQueue, ...persistedRequests];
NetworkRequestQueue.clearPersistedRequests();
didLoadPersistedRequests = true;
persistedRequestsQueueRunning = true;

// Ensure persistedRequests are read from storage before proceeding with the queue
const connectionId = Onyx.connect({
key: ONYXKEYS.NETWORK_REQUEST_QUEUE,
callback: () => {
Onyx.disconnect(connectionId);
processPersistedRequestsQueue()
.finally(() => persistedRequestsQueueRunning = false);
},
});
}

// We subscribe to changes to the online/offline status of the network to determine when we should fire off API calls
// We subscribe to the online/offline status of the network to determine when we should fire off API calls
// vs queueing them for later.
Onyx.connect({
key: ONYXKEYS.NETWORK,
callback: (val) => {
if (!val) {
callback: (network) => {
if (!network) {
return;
}

// Client becomes online, process the queue.
if (isOffline && !val.isOffline) {
const connection = Onyx.connect({
key: ONYXKEYS.NETWORK_REQUEST_QUEUE,
callback: processOfflineQueue,
});
Onyx.disconnect(connection);
if (isOffline && !network.isOffline) {
flushPersistedRequestsQueue();
}
isOffline = val.isOffline;
},
});

// Subscribe to NETWORK_REQUEST_QUEUE queue as soon as Client is ready
ActiveClientManager.isReady().then(() => {
Onyx.connect({
key: ONYXKEYS.NETWORK_REQUEST_QUEUE,
callback: processOfflineQueue,
});
});

// Subscribe to the user's session so we can include their email in every request and include it in the server logs
let email;
Onyx.connect({
key: ONYXKEYS.SESSION,
callback: val => email = val ? val.email : null,
isOffline = network.isOffline;
},
});

/**
@@ -210,43 +216,25 @@ function processNetworkRequestQueue() {
return;
}

const requestData = queuedRequest.data;
const requestEmail = lodashGet(requestData, 'email', '');

// If we haven't passed an email in the request data, set it to the current user's email
if (email && _.isEmpty(requestEmail)) {
requestData.email = email;
}

const finalParameters = _.isFunction(enhanceParameters)
? enhanceParameters(queuedRequest.command, requestData)
: requestData;

onRequest(queuedRequest, finalParameters);
HttpUtils.xhr(queuedRequest.command, finalParameters, queuedRequest.type, queuedRequest.shouldUseSecure)
processRequest(queuedRequest)
.then(response => onResponse(queuedRequest, response))
.catch(error => onError(queuedRequest, error));
});

// We should clear the NETWORK_REQUEST_QUEUE when we have loaded the persisted requests & they are processed.
// As multiple client will be sharing the same Queue and NETWORK_REQUEST_QUEUE is synchronized among clients,
// we only ask Leader client to clear the queue
if (ActiveClientManager.isClientTheLeader() && didLoadPersistedRequests) {
NetworkRequestQueue.clearPersistedRequests();
}

// User could have bad connectivity and he can go offline multiple times
// thus we allow NETWORK_REQUEST_QUEUE to be processed multiple times but only after we have processed
// old requests in the NETWORK_REQUEST_QUEUE
didLoadPersistedRequests = false;

// We clear the request queue at the end by setting the queue to retryableRequests which will either have some
// requests we want to retry or an empty array
networkRequestQueue = requestsToProcessOnNextRun;
}

// Process our write queue very often
setInterval(processNetworkRequestQueue, PROCESS_REQUEST_DELAY_MS);
function startDefaultQueue() {
setInterval(processNetworkRequestQueue, CONST.NETWORK.PROCESS_REQUEST_DELAY_MS);
}

// Post any pending request after we launch the app
ActiveClientManager.isReady().then(() => {
flushPersistedRequestsQueue();
startDefaultQueue();
});

/**
* @param {Object} request
@@ -328,7 +316,6 @@ function clearRequestQueue() {
export {
post,
pauseRequestQueue,
PROCESS_REQUEST_DELAY_MS,
Contributor Author (kidroca):

This is now exported as CONST.NETWORK.PROCESS_REQUEST_DELAY_MS and all usages have been updated.

unpauseRequestQueue,
registerParameterEnhancer,
clearRequestQueue,
35 changes: 34 additions & 1 deletion src/libs/actions/NetworkRequestQueue.js
@@ -1,15 +1,48 @@
import Onyx from 'react-native-onyx';
import _ from 'underscore';
import lodashUnionWith from 'lodash/unionWith';
import ONYXKEYS from '../../ONYXKEYS';

const retryMap = new Map();
let persistedRequests = [];

Onyx.connect({
key: ONYXKEYS.NETWORK_REQUEST_QUEUE,
callback: val => persistedRequests = val || [],
});

function clearPersistedRequests() {
Onyx.set(ONYXKEYS.NETWORK_REQUEST_QUEUE, []);
retryMap.clear();
}

function saveRetryableRequests(retryableRequests) {
Onyx.merge(ONYXKEYS.NETWORK_REQUEST_QUEUE, retryableRequests);
persistedRequests = lodashUnionWith(persistedRequests, retryableRequests, _.isEqual);
Contributor (@sobitneupane, Feb 7, 2023):

This change introduced a bug (issue title: "pinned chat become unpinned when user pinned chat in offline mode").

If a user takes an action that is identical (per _.isEqual) to an older action, then it will be removed.

To fix the issue we have replaced the above code with:

persistedRequests = persistedRequests.concat(requestsToPersist);

The PR that fixes the issue: #14608

Contributor Author (kidroca):

The idea is to prevent the user from accumulating duplicate requests while offline.
If you perform the same action 10x while offline, we don't post 10 network requests but just one.
This is an optimization that helps users in low-network conditions avoid wasting bandwidth.

From the information shared here it seems that pinning and unpinning the chat use the same command/request; I would suggest making them distinct actions so that _.isEqual can differentiate them correctly.
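To make the trade-off concrete, here is a standalone sketch of the behavioural difference (the command name and payload are made up for illustration):

    const _ = require('underscore');
    const lodashUnionWith = require('lodash/unionWith');

    const persisted = [{command: 'ToggleChatPin', data: {reportID: 1}}];
    const incoming = [{command: 'ToggleChatPin', data: {reportID: 1}}];

    // unionWith + _.isEqual collapses identical requests, so only one toggle survives
    // (saves bandwidth, but drops the second pin/unpin action)
    console.log(lodashUnionWith(persisted, incoming, _.isEqual).length); // 1

    // concat (the approach in #14608) keeps both, so every action replays in order
    console.log(persisted.concat(incoming).length); // 2

Both behaviours are defensible: the dedup favours users on poor connections, while concat favours replaying every queued action exactly as performed.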

Onyx.set(ONYXKEYS.NETWORK_REQUEST_QUEUE, persistedRequests);
}

function removeRetryableRequest(request) {
retryMap.delete(request);
persistedRequests = _.reject(persistedRequests, r => _.isEqual(r, request));
Onyx.set(ONYXKEYS.NETWORK_REQUEST_QUEUE, persistedRequests);
}

function incrementRetries(request) {
const current = retryMap.get(request) || 0;
const next = current + 1;
retryMap.set(request, next);

return next;
}
Comment on lines +30 to +36

Contributor Author (@kidroca, Dec 1, 2021):

Retry counts are not persisted to disk; there's no need to. If a request is retried 5 times and then the app quits, when the app is launched again the request will get a fresh count of up to 10 attempts, not just the remaining 5.

It might be an edge case, but I think it's best to have a retry limit in case some corrupt data was persisted and it cannot be sent no matter how many times we try. That's why we keep a count per request here and forfeit a request after the count grows past a certain limit (CONST.NETWORK.MAX_PERSISTED_REQUEST_RETRIES).
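For reference, a compact sketch of how this count is consumed by the persisted-queue processing in Network.js above (this mirrors the diff rather than adding new behaviour):

    processRequest(request)
        .then((response) => {
            if (response.jsonCode !== CONST.NETWORK.SUCCESS_CODE) {
                throw new Error('Persisted request failed');
            }
            NetworkRequestQueue.removeRetryableRequest(request);
        })
        .catch(() => {
            // The count lives only in memory (retryMap), so an app restart resets it
            if (NetworkRequestQueue.incrementRetries(request) >= CONST.NETWORK.MAX_PERSISTED_REQUEST_RETRIES) {
                // Give up on requests that keep failing, e.g. corrupt persisted data
                NetworkRequestQueue.removeRetryableRequest(request);
            }
        });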


function getPersistedRequests() {
return persistedRequests;
}

export {
clearPersistedRequests,
saveRetryableRequests,
getPersistedRequests,
removeRetryableRequest,
incrementRetries,
};