
perf: set experimental min chunk size to 3500 bytes #6987

Merged

VIKTORVAV99 merged 3 commits into master from vik/experimental_min_chunk_size on Jul 18, 2024

Conversation

VIKTORVAV99 (Member)

Description

Sets experimentalMinChunkSize to 3500 bytes, which removes 2 of the smallest chunks and bundles them into a common parent of the modules that use them instead.
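For reference, a minimal sketch of where this option is set, assuming the web app's Vite/Rollup build (the file name and surrounding config here are illustrative, not the repo's exact setup):

```ts
// vite.config.ts - a minimal sketch of where this option lives; the file name
// and surrounding config are illustrative, not this repo's exact setup.
import { defineConfig } from 'vite';

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        // Rollup merges chunks smaller than this many bytes into a suitable
        // parent chunk instead of emitting them as separate files.
        experimentalMinChunkSize: 3500,
      },
    },
  },
});
```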

Double check

  • I have run pnpx prettier@2 --write . and poetry run format in the top level directory to format my changes.

@VIKTORVAV99 VIKTORVAV99 changed the title set experimental min chunk size to 3500 bytes perf: set experimental min chunk size to 3500 bytes Jul 15, 2024
@madsnedergaard (Member)

Would you mind explaining why this is preferable to more, smaller files? :)

@VIKTORVAV99 (Member, Author) commented Jul 16, 2024

> Would you mind explaining why this is preferable to more, smaller files? :)

Basically, if the files are smaller than roughly 4 kB, the overhead of creating the HTTP connection costs more than you gain by splitting the code into separate files. And since browsers impose a limit on the maximum number of parallel connections (8 in Chromium and 4 in Safari, I believe), waiting for the server takes longer than downloading the actual content, which means these small chunks block other requests without any real benefit.

@madsnedergaard (Member)

> Would you mind explaining why this is preferable to more, smaller files? :)

> Basically, if the files are smaller than roughly 4 kB, the overhead of creating the HTTP connection costs more than you gain by splitting the code into separate files. And since browsers impose a limit on the maximum number of parallel connections (8 in Chromium and 4 in Safari, I believe), waiting for the server takes longer than downloading the actual content, which means these small chunks block other requests without any real benefit.

I was under the impression that with HTTP/2 multiple smaller files would always be beneficial, but that makes sense - thanks for explaining :)

@madsnedergaard (Member) left a comment

LGTM

@VIKTORVAV99 (Member, Author)

> Would you mind explaining why this is preferable to more, smaller files? :)

> Basically, if the files are smaller than roughly 4 kB, the overhead of creating the HTTP connection costs more than you gain by splitting the code into separate files. And since browsers impose a limit on the maximum number of parallel connections (8 in Chromium and 4 in Safari, I believe), waiting for the server takes longer than downloading the actual content, which means these small chunks block other requests without any real benefit.

> I was under the impression that with HTTP/2 multiple smaller files would always be beneficial, but that makes sense - thanks for explaining :)

HTTP/2 and later HTTP/3 do improve this a lot, and it does make sense to split up the files to benefit from multiplexing, but as it is right now we are already saturating the allowed connections.

Basically, you want to split them up if you're just loading 1 file at a time and combine them if you're loading 8+ at a time.
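A quick way to sanity-check the effect locally (a hypothetical helper, not part of this PR, assuming Vite's default dist/assets output directory):

```ts
// list-chunks.ts - hypothetical helper (not part of this PR) to eyeball emitted
// chunk sizes, assuming Vite's default dist/assets output directory.
import { readdirSync, statSync } from 'node:fs';
import { join } from 'node:path';

const dir = join('dist', 'assets');
const chunks = readdirSync(dir)
  .filter((file) => file.endsWith('.js'))
  .map((file) => ({ file, bytes: statSync(join(dir, file)).size }))
  .sort((a, b) => a.bytes - b.bytes);

// After this change, chunks below the 3500-byte threshold should no longer
// show up at the top of this list.
for (const { file, bytes } of chunks) {
  console.log(`${bytes}\t${file}`);
}
```

Running it against a build from master and a build from this branch should show the two sub-threshold chunks folded into their parent chunks.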

@VIKTORVAV99 VIKTORVAV99 enabled auto-merge (squash) July 18, 2024 11:42
@VIKTORVAV99 VIKTORVAV99 merged commit d62cba5 into master Jul 18, 2024
20 checks passed
@VIKTORVAV99 VIKTORVAV99 deleted the vik/experimental_min_chunk_size branch July 18, 2024 11:46