Messages sent while offline disappear after connection is regained if the user closes/reopens the app - Reported by: @Santhosh-Sellavel #5987
Triggered auto assignment to @johnmlee101 |
@johnmlee101 Whoops! This issue is 2 days overdue. Let's get this updated quick! |
@johnmlee101 Huh... This is 4 days overdue. Who can take care of this? |
@johnmlee101 6 days overdue. This is scarier than being forced to listen to Vogon poetry! |
@johnmlee101 10 days overdue. I'm getting more depressed than Marvin. |
@johnmlee101 12 days overdue. Walking. Toward. The. Light... |
"Add Comment" requests are persisted to storage when we're offline. There are two places where they get cleared prematurely:
1. Persisted requests getting cleared when they're added to the in-memory queue (Lines 55 to 59 in b12eb5f)
2. Persisted requests getting cleared by the queue itself (Lines 215 to 220 in b12eb5f)
The requests are only moved in memory; they are initiated but not completed. A browser refresh (or closing the app) after the above code has emptied persistent storage would lose the data if the request didn't complete |
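The unsafe window described above can be illustrated with a minimal, self-contained simulation. The `storage` and `memoryQueue` names below are stand-ins for Onyx persistent storage and the in-memory network queue, not Expensify's actual API:

```javascript
// Minimal simulation of the unsafe window: persisted requests are moved to
// the in-memory queue and the persisted copy is cleared BEFORE the requests
// actually complete. All names here are illustrative stand-ins.

let storage = [{ command: 'AddComment', text: 'hello' }]; // survives a refresh
let memoryQueue = [];                                     // does not survive

// Mirrors what the current code does: move requests into memory and clear
// persistent storage immediately, before the requests complete.
function movePersistedRequestsToMemory() {
    memoryQueue = memoryQueue.concat(storage);
    storage = [];
}

// Closing/refreshing the app wipes everything that lives only in memory.
function simulateAppRefresh() {
    memoryQueue = [];
}

movePersistedRequestsToMemory();
// The request has been started but not completed when the user refreshes:
simulateAppRefresh();
console.log(storage.length, memoryQueue.length); // 0 0 - the comment is lost
```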
In theory it should be possible to recreate this with shorter steps like:
Considering that it's a very specific window of time that causes the issue, it's less likely to happen often. Real-life cases where this bug can happen are:
All of these are the case of: a persisted request is moved to memory and executed, but something unexpected happens. IMO we're only trying to "bend" the network queue to fit this "process persisted requests" functionality
Looking now! |
Hmm, I would also say that a separate queue should have a maximum timeout, so we should keep that data with the requests. However, I like the idea. Do you know which files would need to be modified, or some pseudocode to represent this change? |
I think the only change needed to accommodate the suggestion would be in https://github.com/Expensify/App/blob/main/src/libs/Network.js
Onyx.connect({
    key: ONYXKEYS.NETWORK,
    callback: (network) => {
        // When we transition from offline to online, trigger the persisted request queue
        if (isOffline && !network.isOffline) {
            startPersistedRequestsQueue();
        }
        isOffline = network.isOffline;
    },
});
function startPersistedRequestsQueue() {
    // If the queue is already running we don't have to do anything -
    // it will sort itself out in case more requests are persisted after it started
    if (persistedRequestsQueueTask) {
        return;
    }
    processPersistedRequestsQueue();
}
function processPersistedRequestsQueue() {
    // Onyx.get is necessary in the event that persisted requests are not yet read when we call this function
    // I'll mention alternatives outside the example
    persistedRequestsQueueTask = Onyx.get(ONYXKEYS.NETWORK_REQUEST_QUEUE)
        .then(persistedRequests => {
            // This sanity check is also a recursion exit point
            if (_.size(persistedRequests) === 0 || isOffline) {
                persistedRequestsQueueTask = null;
                return;
            }
            const tasks = _.map(persistedRequests, request => {
                // processRequest - reusable logic common to both request queues (should return a promise)
                return processRequest(request)
                    .then(() => removeFromPersistedStorage(request)) // when a request completes, remove it from storage
                    .catch((error) => {
                        // Decide whether to discard the request despite the error, or keep it for a retry
                        if (!shouldRetryRequest(error)) {
                            return removeFromPersistedStorage(request);
                        }
                    });
            });

            // More requests could have been persisted by now
            // If that's the case we'll handle them with a recursive call
            // Otherwise the recursion will exit at the recursion exit point above
            return Promise.allSettled(tasks)
                .finally(processPersistedRequestsQueue);
        });
}
// Have something unique in every request (e.g. an id) and use it to tell which one to remove
function removeFromPersistedStorage(request) {
    return Onyx.get(ONYXKEYS.NETWORK_REQUEST_QUEUE)
        .then(persistedRequests => {
            const nextPersistedRequests = _.filter(persistedRequests, r => r.id !== request.id);
            return Onyx.set(ONYXKEYS.NETWORK_REQUEST_QUEUE, nextPersistedRequests);
        });
}

Additional requirements like a request timeout can be implemented in

This queue would start as many requests as possible in parallel; the underlying environment would prioritize how they're actually executed, but we can assume a pool of 5-10 requests running at the same time. We might be fine with a simpler queue that processes one request at a time, then removes it from storage, reads the next one, and so on until we run out of persisted requests.

Alternatives to `Onyx.get`:
// Client becomes online, process the queue.
if (isOffline && !val.isOffline) {
    const connection = Onyx.connect({
        key: ONYXKEYS.NETWORK_REQUEST_QUEUE,
        callback: processOfflineQueue,
    });
    Onyx.disconnect(connection);
}
- we lose the ability to promise chain and we'll have to deal with flags for the recursive part of the queue
Top level Onyx.connect call for everything that we need
Not as reliable as the Onyx.get version - we're basically hoping that everything we need is already updated by the top level connect when we use it
Here we're discussing an alternative to top level connections: #6151
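The simpler one-request-at-a-time queue mentioned earlier (send, remove from storage, read the next) could look roughly like this. `storage` and `sendRequest` are illustrative stand-ins, not the real Network.js API:

```javascript
// Sketch of the sequential alternative: process one persisted request at a
// time and only remove it from storage once it has actually completed.
// `storage` and `sendRequest` are stand-ins, not Expensify's actual API.

const storage = [
    { id: 1, command: 'AddComment', text: 'first' },
    { id: 2, command: 'AddComment', text: 'second' },
];
const sent = [];

// Stand-in for the real network call; resolves once the server accepted it.
function sendRequest(request) {
    return Promise.resolve().then(() => sent.push(request.id));
}

function processNextPersistedRequest() {
    // Recursion exit point: nothing left to process
    if (storage.length === 0) {
        return Promise.resolve();
    }
    const request = storage[0];
    return sendRequest(request)
        .then(() => {
            // Only now is it safe to drop the persisted copy -
            // a refresh before this point keeps the request in storage
            storage.shift();
        })
        .then(processNextPersistedRequest);
}

const queueTask = processNextPersistedRequest();
queueTask.then(() => console.log(sent)); // logs [ 1, 2 ]
```

Sequential processing trades throughput for a much smaller window in which a request can complete without its persisted copy being removed.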
Hmm seems like I misunderstood the timeout part, you mean like persisted requests getting stale after a day or so? We can include some stale request filtering logic in there |
Ah yeah, sorry, if things become too stale we should ignore them. I'd imagine there's a situation where you're offline with a backlog of messages, you only check back days, weeks, or months later, and having them auto-post wouldn't be useful. So yep! Stale request filtering. |
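The stale-request filtering agreed on above could be sketched like this. The `queuedAt` field and the one-day cutoff are assumptions for illustration, not Expensify's actual values:

```javascript
// Sketch of stale-request filtering: drop persisted requests older than some
// cutoff before the queue processes them. The `queuedAt` field and the
// one-day cutoff are illustrative assumptions.

const STALE_REQUEST_CUTOFF_MS = 24 * 60 * 60 * 1000; // one day

function isStaleRequest(request, now = Date.now()) {
    return (now - request.queuedAt) > STALE_REQUEST_CUTOFF_MS;
}

// Run this over the persisted queue before processing it
function filterStaleRequests(persistedRequests, now = Date.now()) {
    return persistedRequests.filter(request => !isStaleRequest(request, now));
}

const now = Date.now();
const requests = [
    { id: 1, queuedAt: now - 5 * 60 * 1000 },           // 5 minutes old: keep
    { id: 2, queuedAt: now - 3 * 24 * 60 * 60 * 1000 }, // 3 days old: drop
];
console.log(filterStaleRequests(requests, now).map(r => r.id)); // logs [ 1 ]
```

This assumes the timestamp is recorded when the request is first persisted, so the filter measures how long the user was offline rather than how long the queue has been running.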
I like this though! I'm going to mark as external, get a manager to create the job then I'll have them assign you. |
Triggered auto assignment to @stephanieelliott |
Sorry for the shuffle there. @stephanieelliott once you create the job can you assign kidroca? |
Triggered auto assignment to @SofiedeVreese |
Triggered auto assignment to Contributor-plus team member for initial proposal review - @rushatgabhane |
Triggered auto assignment to @roryabraham |
I'm about to go OOO, so reapplied the label to assign this to another CM team member (thanks @SofiedeVreese!) |
Can we agree the issue is complete: #5987 (comment) |
@roryabraham just want to get your thumbs up for releasing payment for this GH (5987) which seems to have been actioned via the PR here? |
Yeah, we should release payment here 👍 |
Sorry for the long wait there @kidroca I've just released payment. @Santhosh-Sellavel I'm going to add you to the Upwork job for your reporting bonus now too. |
Current assignee @rushatgabhane is eligible for the Exported assigner, not assigning anyone new. |
Current assignee @roryabraham is eligible for the Exported assigner, not assigning anyone new. |
Thanks! @SofiedeVreese |
Thanks 🎊 |
ok reporting bonus paid out too! All is paid out, removed Upwork posting and closing GH. |
If you haven’t already, check out our contributing guidelines for onboarding and email contributors@expensify.com to request to join our Slack channel!
Action Performed:
Expected Result:
Messages that were sent while offline should be sent to the other user
Actual Result:
Messages are displayed in the chat but disappear once the user regains the connection.
Workaround:
None; messages sent while offline are lost.
Platform:
Where is this issue occurring?
Version Number: 1.1.8-8
Reproducible in staging?: Yes
Reproducible in production?: Yes
Logs: https://stackoverflow.com/c/expensify/questions/4856
Notes/Photos/Videos: Any additional supporting documentation
WhatsApp.Video.2021-10-21.at.6.49.54.PM.mp4
Expensify/Expensify Issue URL:
Issue reported by: @Santhosh-Sellavel
Slack conversation: https://expensify.slack.com/archives/C01GTK53T8Q/p1634765287021500
View all open jobs on GitHub