Increase IO read size default #97
Conversation
LGTM. Just curious how you settled on 4x as opposed to 2x or 10x?
(IMO, another justification for diverging from Chrome here is that we typically expect dev server <-> device to be local or at least over a fast network, whereas Chrome needs to think about fetching over slow internet.)
Not this PR, but just to note something closely related that we absolutely should upstream: CDT currently requires at least two … It shouldn't be difficult to allow …
@robhogan Good question! It's arbitrary. My thought process was that upon hitting a real-world bottleneck, we typically double capacity to add sufficient headroom — in this case, doubling twice until I could see a sufficient end-to-end improvement. No idea what the safe maximum would be, but I felt we could keep this relatively constrained.
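For context on what this `size` parameter changes, here is a minimal sketch of how a CDP client pages through an `IO` stream. This is not the actual devtools-frontend implementation; `sendCommand` is a hypothetical helper, and the constant only illustrates the 1048576 → 4194304 change.

```ts
// Illustrative sketch of chunked CDP stream reading (not the actual frontend code).
const DEFAULT_CHUNK_SIZE = 4 * 1024 * 1024; // 4194304 bytes, up from 1048576 (1 MiB)

async function readEntireStream(
  sendCommand: (method: string, params: object) => Promise<any>, // hypothetical CDP transport helper
  handle: string,
  chunkSize: number = DEFAULT_CHUNK_SIZE,
): Promise<string> {
  let result = '';
  let eof = false;
  while (!eof) {
    // Each IO.read is a sequential round trip, so a larger `size` means fewer
    // round trips for multi-MB payloads such as bundle source maps.
    const chunk = await sendCommand('IO.read', {handle, size: chunkSize});
    result += chunk.base64Encoded ? atob(chunk.data) : chunk.data;
    eof = chunk.eof;
  }
  await sendCommand('IO.close', {handle});
  return result;
}
```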
@huntie, @robhogan I'll defer to you both but I have some non-blocking (heh) thoughts:
AFAICT, this is mostly a CDP design issue: CDT waits until it has received and decoded an … (This is arguably another symptom of our high-bandwidth but high-latency network arrangement, plus our relatively large bundles + source maps vs what's typical on web.)

*Worth measuring this: parsing and reserialising in the proxy might introduce a non-trivial overhead for these multi-MB messages.
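One way to quantify that asterisked concern, assuming a Node-based proxy that round-trips each forwarded message through `JSON.parse`/`JSON.stringify` (illustrative only, not taken from the Inspector Proxy source):

```ts
// Measures the cost of one parse + reserialise hop for a single CDP message.
import {performance} from 'node:perf_hooks';

function measureReserializeMs(rawMessage: string): number {
  const start = performance.now();
  const parsed = JSON.parse(rawMessage); // decode the incoming frame
  JSON.stringify(parsed);                // re-encode before forwarding
  return performance.now() - start;
}

// Example: feed it a captured multi-MB IO.read response payload to see whether
// the proxy hop adds a non-trivial per-chunk cost.
```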
Let's start with this, then keep iterating. @motiz88 Do we have any internal doc/inbox for items to discuss with the Chrome team?
…6132)

Summary: Pull Request resolved: #46132

Temporarily disable the `nativeSourceCodeFetching` capability — which reverts this to the legacy handling in the Inspector Proxy. This is because we've noticed performance issues when loading large bundle source maps, particularly on Android, with a nontrivial path to optimising this ([raising the frontend `IO.read` size](facebookexperimental/rn-chrome-devtools-frontend#97) further is leading to WebSocket disconnections on Android 😐).

Changelog: [Internal]

Reviewed By: robhogan

Differential Revision: D61543480

fbshipit-source-id: ee66b4cebd40f8cc6466270c5875df744d2b588a
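For readers following along, a hedged sketch of what that capability toggle amounts to from the proxy's perspective; the type and function names below are made up for illustration and are not React Native's actual dev-middleware API.

```ts
// Illustrative only: branching on a page capability to pick the source-fetching path.
type PageCapabilities = {
  nativeSourceCodeFetching?: boolean; // capability name taken from the commit message above
};

function useLegacySourceFetching(capabilities: PageCapabilities): boolean {
  // With the capability disabled (or absent), fall back to the legacy path in
  // which the proxy fetches and inlines source / source-map contents itself,
  // rather than letting the frontend stream them via IO.read as in this PR.
  return capabilities.nativeSourceCodeFetching !== true;
}
```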
Summary

Increases (4x) the size limit used when making `IO.read` CDP requests. This can dramatically improve performance when requesting large files (e.g. a bundle source map) from Metro (particularly when loaded remotely), as fewer file chunks need to be sequentially requested — following facebook/react-native#45850.
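As a rough worked example (the payload size below is hypothetical), the win comes from fewer sequential round trips:

```ts
// Back-of-envelope illustration for a hypothetical 32 MiB source map, sizes in bytes.
const payloadBytes = 32 * 1024 * 1024;
const oldRoundTrips = Math.ceil(payloadBytes / 1048576); // 32 sequential IO.read calls
const newRoundTrips = Math.ceil(payloadBytes / 4194304); // 8 sequential IO.read calls
// Each chunk is requested only after the previous one arrives, so the saving
// scales with per-request latency, which is why remote/Android setups benefit most.
```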
Test plan

(Using Android build based on facebook/react-native#45850.)

- `rn-chrome-devtools-frontend` built using the RN `sync-and-build` workflow
- `size: 1048576` → `size: 4194304`, rendering source file in browser significantly faster
- Before/after side-by-side 🎬: https://pxl.cl/5nnHF
Upstreaming plan

… `devtools-frontend` repo. I've reviewed the contribution guide.

Note

This is absolutely upstreamable, but is an opinionated increase targeting React Native (mobile app) engineers, and our single-window use case, where it's assumed we'll have more system memory available compared to the browser use case.