This repository has been archived by the owner on Jan 11, 2023. It is now read-only.

Waiting for all requests to finish before closing q. #690

Closed · wants to merge 2 commits

Conversation

btakita (Contributor) commented May 13, 2019

Fixes #689

`await Promise.all(promises)`
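As a hedged sketch of what this change does (`exportAll` and `handle` here are illustrative stand-ins, not Sapper's actual internals), the exporter collects every in-flight request's promise and awaits them all before closing:

```javascript
// Illustrative sketch only: `exportAll` and `handle` are stand-ins for
// Sapper's exporter internals, not its real API.
async function exportAll(urls, handle) {
  // Kick off a handler for each URL and keep every promise.
  const promises = urls.map((url) => handle(url));

  // Without this await, the exporter could shut down while requests
  // are still in flight, yielding 404s in the exported output (#689).
  await Promise.all(promises);
}
```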

mrkishi (Member) commented May 16, 2019

How does this relate to #604 and #641?

It looks good at first glance, but does it reintroduce #604's problem?

btakita (Author) commented May 17, 2019

You're right, it does reintroduce those bugs. The problem is that the fix for those bugs creates 404s in exports due to a race condition.

Should we introduce rate limiting to mitigate the previous issues?

btakita (Author) commented May 17, 2019

d3fc0c1 adds rate limiting based on the `concurrent` option. The default is 8.

btakita (Author) commented May 17, 2019

I took another look & realized I was missing a big part of the picture.

Here's how I see the situation. This PR fixes #689, which is caused by `handle` not waiting for all requests to finish.

The `await Promise.all` simply waits for all of the requests to finish. There is some rate limiting by chunking calls to `handle`, but this does not guarantee the limit of `concurrent`, since `handle` is recursive and chunking is not a proper queue.

`q.add` should be the solution; however, removing `q.add` was the fix for #604 & #641.

Does this mean there's a bug with the queue? If so, the bug should be fixed, since the queue is needed to guarantee that at most `concurrent` requests run at a time.
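For illustration, a minimal concurrency-limited queue along these lines (a sketch under my own naming, not Sapper's actual queue implementation) guarantees the limit even when tasks enqueue more tasks recursively:

```javascript
// Sketch of a concurrency-limited task queue, in the spirit of the
// queue discussed here (not Sapper's actual implementation).
function createQueue(concurrent) {
  let running = 0;
  const pending = [];

  function next() {
    // Start pending tasks until the concurrency limit is reached.
    while (running < concurrent && pending.length > 0) {
      const { task, resolve, reject } = pending.shift();
      running += 1;
      Promise.resolve()
        .then(task)
        .then(resolve, reject)
        .finally(() => {
          // Free the slot and immediately pull in the next task.
          running -= 1;
          next();
        });
    }
  }

  return {
    add(task) {
      return new Promise((resolve, reject) => {
        pending.push({ task, resolve, reject });
        next();
      });
    },
  };
}
```

Unlike chunking, this starts a new task the moment one finishes, so at most `concurrent` requests are ever in flight.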

btakita (Author) commented May 17, 2019

@mrkishi To answer your question, this PR has no effect on #604 or #641. It only causes the export to wait for the requests to finish.

The only way this would reintroduce #604 is if the `UnhandledPromiseRejectionWarning: Error: Cannot add to a closed queue` exception "fixes" the issue by limiting the number of requests.

Rate limiting is a separate concern from this issue, which is specifically about the exception. If we do need a way to rate limit, we could introduce a wait before each request or every n requests. However, I want to be careful not to conflate multiple issues, since this PR does not change the properties of the #604 fix.
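The "wait every n requests" idea could be sketched like this (`throttledHandleAll`, `everyN`, and `waitMs` are hypothetical names for illustration, not anything in Sapper):

```javascript
// Hypothetical sketch of the throttling idea: pause every n requests
// to cap the request rate. Names are illustrative, not Sapper's API.
const delay = (ms) => new Promise((res) => setTimeout(res, ms));

async function throttledHandleAll(urls, handle, { everyN = 1, waitMs = 100 } = {}) {
  for (let i = 0; i < urls.length; i += 1) {
    // Insert a pause before every n-th request (skipping the first).
    if (i > 0 && i % everyN === 0) await delay(waitMs);
    await handle(urls[i]);
  }
}
```

Note this processes requests sequentially, which is the simplest form of the idea; a real fix would combine it with the queue.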

mrkishi mentioned this pull request May 17, 2019

mrkishi (Member) commented May 17, 2019

Hey, @btakita. It's not that I disagree that there's a problem; it's just that the root of the issue theoretically shouldn't be awaiting those promises, since `handle` is supposed to use the queue and is thus guaranteed to finish sequentially.

I tried another fix at #698 that shouldn't ignore promises in case of timeouts. Could you test that version to verify whether it solves your problem? I don't have a codebase where #689 happens and I'm not sure what's required to create a test (if you do, we should add a test!).

mrkishi (Member) commented May 17, 2019

Err, I seem to have misunderstood everything. Indeed we should be waiting for all `handle` calls, since the `q` is only applied to `fetch`. In any case, the `q` should enclose the whole request, or else timeouts would break.
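To illustrate the point about enclosing the whole request, here's a sketch (with a hypothetical `queue.add` and `fetchWithTimeout`, not Sapper's real code) where the timeout lives inside the queued task, so a timed-out request still releases its slot:

```javascript
// Sketch only: `queue` is assumed to expose `add(task)` returning a
// promise; `fetchWithTimeout` is a hypothetical helper, not Sapper API.
function fetchWithTimeout(doFetch, ms) {
  // Race the request against a timer; whichever settles first wins.
  return Promise.race([
    doFetch(),
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('timed out')), ms)
    ),
  ]);
}

function queuedFetch(queue, doFetch, ms) {
  // The queued task covers the request *and* its timeout, so the
  // queue slot is released exactly once even when the request times out.
  return queue.add(() => fetchWithTimeout(doFetch, ms));
}
```

If the timeout raced *outside* the queue instead, a timed-out caller would move on while the queue slot stayed occupied by the still-pending request.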

I force-pushed on that PR, could you check if it's a solution?

btakita (Author) commented May 17, 2019

I tested your branch along with the dist & runtime files (`"sapper": "btakita/sapper#gh-689"`) and the issue is fixed.

btakita (Author) commented May 17, 2019

I agree that the queue should be used to manage all of the tasks, since it would consistently process `concurrent` tasks at a time instead of batch processing. Well done!

I'll update the `btakita/sapper#0.27.0-hotfix` branch.

Rich-Harris (Member) commented:
Closing in favour of #698 — will cut a release soon
