
fix: change network request sweep interval from 1s -> 10s #25209

Merged
merged 9 commits into develop from fix/correct-sweep-interval on Dec 21, 2022

Conversation

lmiller1990
Contributor

@lmiller1990 lmiller1990 commented Dec 19, 2022

User facing changelog

Addresses an issue where projects that load a large number of JS modules load very slowly. It might also impact other types of files; I am not sure.

Additional details

A lot of context written here: #22868

In #22462 a refactor was introduced in the part of the code base where we handle pre-requests. It also introduced some code that would periodically clear out requests that were never matched up (matching up refers to matching a request that is intercepted by our browser extension and the same request that comes into our HTTP proxy). We shipped this in Cypress 10.3.
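
Roughly, the matching works like this (a simplified sketch for illustration, not the actual implementation - the real code uses a StackMap and per-request timeouts, and the class and method names below are made up):

```ts
// Simplified sketch of "matching up" (illustrative only; not the real code).
// A pre-request is the browser extension's report of an outgoing request; the
// HTTP proxy later receives the same request. Whichever side arrives first
// waits in a pending map for the other.
interface BrowserPreRequest {
  requestId: string
  method: string
  url: string
}

type OnMatch = (preRequest?: BrowserPreRequest) => void

class PreRequestsSketch {
  private pendingPreRequests = new Map<string, BrowserPreRequest[]>()
  private pendingRequests = new Map<string, OnMatch[]>()

  // Called when the browser extension reports a request.
  addPending (preRequest: BrowserPreRequest) {
    const key = `${preRequest.method}-${preRequest.url}`
    const waitingProxyRequest = this.pendingRequests.get(key)?.shift()

    if (waitingProxyRequest) {
      // The proxy already saw this request - match them up straight away.
      return waitingProxyRequest(preRequest)
    }

    // Otherwise queue the pre-request until the proxy request arrives.
    this.pendingPreRequests.set(key, [...(this.pendingPreRequests.get(key) ?? []), preRequest])
  }

  // Called when the HTTP proxy receives a request.
  get (method: string, url: string, onMatch: OnMatch) {
    const key = `${method}-${url}`
    const preRequest = this.pendingPreRequests.get(key)?.shift()

    if (preRequest) {
      return onMatch(preRequest)
    }

    // No pre-request yet - wait for one (the real code times out eventually).
    this.pendingRequests.set(key, [...(this.pendingRequests.get(key) ?? []), onMatch])
  }
}
```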

The initial PR had a bug that caused some requests to be delayed by 500ms. That was fixed in #23227, which shipped in Cypress 10.5.

I believe there was another bug which flew under the radar. The sweep logic was supposed to run every 10 seconds, according to #22462, but if you look at the code, it's a setInterval triggered every 1 second (500ms * 2).

I changed this to be every 10 seconds, and it fixed the regression. I included a simple app you can use as a minimal example of the regression: https://github.com/lmiller1990/lots-of-modules (recordings below).

My understanding is that the 1s sweep was too fast: the JS module requests took longer than 1s to come in, so their pending entries could be swept before they were ever matched. Having said that, I don't fully understand why that causes a bottleneck, but you can see the weird behavior in the first recording below, where the JS modules are processed in "chunks".
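
To make the timing concrete, here is a simplified sketch of the sweep and the interval change. The 500ms requestTimeout and the 1s vs. 10s intervals come from the discussion above; everything else (names, cutoff logic) is illustrative rather than a copy of the actual implementation:

```ts
// Simplified sketch of the sweep (illustrative only; not the real Cypress code).
const requestTimeout = 500 // ms - the pre-request matching timeout

interface PendingEntry { timestamp: number }

const pendingPreRequests: PendingEntry[] = []
const pendingRequests: PendingEntry[] = []

// Periodically drop entries that have been waiting too long to be matched
// (the exact cutoff logic in the real code is more involved).
function sweep (pending: PendingEntry[]) {
  const cutoff = Date.now() - requestTimeout
  for (let i = pending.length - 1; i >= 0; i--) {
    if (pending[i].timestamp < cutoff) pending.splice(i, 1)
  }
}

function startSweep (intervalMs: number): ReturnType<typeof setInterval> {
  return setInterval(() => {
    sweep(pendingPreRequests)
    sweep(pendingRequests)
  }, intervalMs)
}

// Before this PR: the sweep ran every requestTimeout * 2 = 1000ms, so pending
// entries for slow requests (JS modules taking > 1s to arrive) could be swept
// before they were ever matched - the suspected cause of the regression.
// const sweepIntervalTimer = startSweep(requestTimeout * 2)

// After this PR: a fixed 10 second sweep.
const sweepIntervalTimer = startSweep(10_000)

// Stop the timer (e.g. when the proxy shuts down).
clearInterval(sweepIntervalTimer)
```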

Recording 1: Sweep Interval 1s. JS modules loading slowly, tests time out.

Notice there is a weird "loading in chunks" behavior when you look at the terminal.

simplescreenrecorder-2022-12-19_17.21.55.mp4

Recording 2: Sweep Interval 10s, test does not time out.

Notice there is no "loading in chunks" behavior.

simplescreenrecorder-2022-12-19_17.22.51.mp4

Recording 3: Cypress 9

The test runs in 9s, as opposed to the 11s it takes in Cypress 11. This is pretty consistent. I am not sure whether the slowdown is in the network layer, the server, elsewhere, or a bit of everything. It's slower, though, which isn't ideal.

simplescreenrecorder-2022-12-19_17.23.59.mp4

Steps to test

Not sure of the best way to test this. I can look into it once we decide whether we are happy with this fix, though. If you want to experience the difference, you can grab this repo https://github.com/lmiller1990/lots-of-modules and follow the instructions in the README to start it up. Then you can try playing with the sweep interval (just grab this branch and change the variable).

How has the user experience changed?

Apps that import a lot of JS modules now load more quickly.

PR Tasks

  • Have tests been added/updated?
  • Has the original issue (or this PR, if no issue exists) been tagged with a release in ZenHub? (user-facing changes only)
  • Has a PR for user-facing changes been opened in cypress-documentation?
  • Have API changes been updated in the type definitions?

@cypress-bot
Contributor

cypress-bot bot commented Dec 19, 2022

Thanks for taking the time to open a PR!

@cypress

cypress bot commented Dec 19, 2022



Test summary

26392 passed · 0 failed · 1179 pending · 0 skipped · Flakiness: 25


Run details

Project: cypress
Status: Passed
Commit: 53cfa64
Started: Dec 21, 2022 2:16 AM
Ended: Dec 21, 2022 2:34 AM
Duration: 18:04
OS: Linux Debian
Browser: Multiple

View run in Cypress Dashboard ➡️


Flakiness

create-from-component.cy.ts Flakiness
1 ... > runs generated spec
commands/net_stubbing.cy.ts Flakiness
1 network stubbing > intercepting request > can delay and throttle a StaticResponse
2 network stubbing > intercepting request > can delay and throttle a StaticResponse
3 ... > with `times` > only uses each handler N times
4 ... > stops waiting when an xhr request is canceled
This comment includes only the first 5 flaky tests. See all 25 flaky tests in the Cypress Dashboard.

This comment has been generated by cypress-bot as a result of this project's GitHub integration settings. You can manage this integration in this project's settings in the Cypress Dashboard

@mike-plummer mike-plummer added type: feature New feature that does not currently exist E2E Issue related to end-to-end testing and removed type: feature New feature that does not currently exist E2E Issue related to end-to-end testing labels Dec 19, 2022
@BlueWinds
Contributor

I'm very surprised that the sweep is expensive - I suppose that makes sense if there are hundreds or thousands of requests pending. I was expecting the list iterated over to have 1-5 elements.

I originally had a sweep interval of 10s, arbitrarily - I believe it was changed to 2x the timeout after there were some review concerns about having a fixed number? It was a while ago and I do not recall exactly.

```diff
@@ -80,7 +80,8 @@ export class PreRequests {
   requestTimeout: number
   pendingPreRequests = new StackMap<PendingPreRequest>()
   pendingRequests = new StackMap<PendingRequest>()
-  sweepInterval: ReturnType<typeof setInterval>
+  sweepIntervalId: NodeJS.Timeout
```
Contributor

@flotwig flotwig Dec 19, 2022


NodeJS.Timeout is an object, not an ID

Suggested change

```diff
-  sweepIntervalId: NodeJS.Timeout
+  sweepIntervalTimer: NodeJS.Timeout
```
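
For context: in Node.js, setInterval returns a Timeout object (which exposes methods like unref()), not the numeric ID that browsers return, hence the suggested rename. A minimal sketch:

```ts
// In Node.js, setInterval returns a NodeJS.Timeout object, not a number.
const timer: NodeJS.Timeout = setInterval(() => {
  console.log('sweeping...')
}, 10_000)

// The object exposes methods a plain numeric ID could not, e.g. unref(),
// which lets the process exit even while the interval is still scheduled.
timer.unref()

// It is still what you pass to clearInterval to stop the timer.
clearInterval(timer)
```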

Contributor Author


Woah, did not know this. Fixed.

@lmiller1990
Contributor Author

lmiller1990 commented Dec 20, 2022

> I'm very surprised that the sweep is expensive - I suppose that makes sense if there are hundreds or thousands of requests pending. I was expecting the list iterated over to have 1-5 elements.
>
> I originally had a sweep interval of 10s, arbitrarily - I believe it was changed to 2x the timeout after there were some review concerns about having a fixed number? It was a while ago and I do not recall exactly.

Me too... I think there's more at work here, but this is a good enough fix for now. We need to do a deeper dive and audit of the network layer and see if we've missed something.

@adam-thomas-privitar

> I suppose that makes sense if there are hundreds or thousands of requests pending

Just as a use case: if you try to run E2E tests locally against an application in dev mode that uses an unbundled dev tool like Vite, you have thousands of requests for static files. It takes about 20 seconds to load the app instead of < 0.5 seconds without Cypress in play.

@lmiller1990
Contributor Author

@adam-thomas-privitar can you try the pre-release with this fix and see if it helps? 4a9bc6f#comments

@adam-thomas-privitar

adam-thomas-privitar commented Dec 20, 2022

Yep! The timing is crazy, since I just started looking into this issue today. Thanks a lot for the hard work on this. Give me an hour or so and I'll give it a go.

@adam-thomas-privitar

adam-thomas-privitar commented Dec 20, 2022

So my case is extreme -- there are 3.3K requests.

It's still problematic. I think if there is an improvement, it's too small in the scheme of things to be picked up in this particular use case.

Before #23227, all of these requests took over 500ms. After it, they sort of float around 250ms-600ms with very high variance. Overall, though, it made a massive difference to the overall app load time.

That range still seems to be the case with this change. I observe that later requests are slower than earlier ones, all coming in at over 600ms.

The behavior where it seemed to load in batches isn't there, but I'm not sure if that was already gone for me after the fix above.

Never mind! It works great. I stupidly forgot to close Cypress. Everything is actually mostly around 100ms to 400ms. Nice one! I need to do some more testing to verify this is consistent. It's still quite a high level of variance, though; I guess this is fundamental to the approach.

Personally, I would love to be able to opt out of all this altogether for certain URLs. The URLs for these requests are static, and I don't care about any Cypress processing.

Even if experimental or undocumented, a flag that allows me to set the requestTimeout to 0 would be incredible for me. I appreciate it adds value for others, but for me it adds no value as I don't care about it -- so the slowdown is a source of great frustration.

@lmiller1990
Contributor Author

> Never mind! It works great.

Great to hear - this will be in the next release (early Jan 2023).

@lmiller1990 lmiller1990 merged commit 1682a3f into develop Dec 21, 2022
@lmiller1990 lmiller1990 deleted the fix/correct-sweep-interval branch December 21, 2022 04:37
@cypress-bot
Contributor

cypress-bot bot commented Jan 3, 2023

Released in 12.3.0.

This comment thread has been locked. If you are still experiencing this issue after upgrading to
Cypress v12.3.0, please open a new issue.

@cypress-bot cypress-bot bot locked as resolved and limited conversation to collaborators Jan 3, 2023