Explore using ArrayBuffer as backend for editor text contents #167719
Some interesting discoveries:
[1] vscode/src/vs/platform/files/common/files.ts, lines 1321 to 1322 (commit 9718242)
We discussed this task offline, and one of the ideas is using … The real challenge is that currently most of our editor internals eagerly read line contents (through …).
👏 for pushing this forward. It would be great if we could look into the 2 remaining ones; I am happy to help if possible.
Hi everyone, I still see this issue happening whenever I try to open big files (for example, larger than 10 MB) with all installed plugins and settings. Any update on this?
What issue?
FYI, I have removed support for `max-memory`.
@rebornix yes, we stream contents when saving. There is a heuristic to not stream if the overall size is below some threshold, just to reduce the overhead of streaming and to make 99% of saves fast by sending the buffer over entirely. But as soon as you are above the threshold, we use a stream. See vscode/src/vs/platform/files/common/fileService.ts, lines 372 to 390 (commit f36df69).

If you see it working differently with large files, please let me know.
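For illustration, here is a minimal sketch of that heuristic; the threshold value, chunk size, and function names are assumptions for this sketch, not VS Code's actual code (the real implementation is in the fileService.ts lines linked above):

```ts
import { writeFile } from 'fs/promises';
import { createWriteStream } from 'fs';
import { Readable } from 'stream';
import { pipeline } from 'stream/promises';

// Assumed cutoff: below this size we send the whole buffer at once.
const STREAM_THRESHOLD = 256 * 1024; // 256 KiB

function* chunks(buf: Buffer, size: number): Generator<Buffer> {
  for (let i = 0; i < buf.byteLength; i += size) {
    yield buf.subarray(i, i + size);
  }
}

async function save(path: string, content: Buffer): Promise<void> {
  if (content.byteLength < STREAM_THRESHOLD) {
    // Fast path: a single write covers the vast majority of saves.
    await writeFile(path, content);
  } else {
    // Large file: stream fixed-size chunks to bound peak memory usage.
    await pipeline(Readable.from(chunks(content, 64 * 1024)), createWriteStream(path));
  }
}
```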
We don't have any customizations for the V8 memory cage in Electron builds; it follows the same restrictions as the web. The reason for the weird output in … is that the computation happens in https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/renderer/core/timing/memory_info.cc;l=50-54

On the other hand, Node.js separates the values properly in its … For sandboxed renderers, you can use `process.getHeapStatistics()` from Electron, which should give the expected output.
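As a hedged example of reading the properly separated values, here is the plain Node.js variant via the built-in `v8` module (Electron's `process.getHeapStatistics()` exposes similar data for sandboxed renderers, as mentioned above):

```ts
import * as v8 from 'v8';

// Node.js reports each heap value separately, unlike the quantized
// numbers from the Blink memory_info.cc implementation linked above.
const stats = v8.getHeapStatistics();
console.log({
  totalHeapSize: stats.total_heap_size,
  usedHeapSize: stats.used_heap_size,
  heapSizeLimit: stats.heap_size_limit, // the limit OOM crashes run into
});
```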
@rebornix I conducted a test and was able to verify that writes happen chunked (…).
@rebornix were you able to find a file test case that triggers an OOM crash due to the heap limit?
@bpasero this is exactly what I was seeing: backup kicks in and attempts to save the whole concatenated buffer. I'll give it another check once it's fixed.

@deepak1556 I could not find a test case that can exceed the heap limit; it seems the Node.js decoder always creates … It means that for file opening, we don't have to use an array buffer as the backing store, as …

@deepak1556 @bpasero are you aware of any duplicate issues where users run into crashes due to the heap limit (based on the crash dump they shared)? My current hypothesis is that users ran into trouble because we have editor contributions which try to read every line (for example, indent guides used to read every line's content) and trigger massive internal string objects to be created on the heap. Now that I have all of them disabled, the chance of running into the OOMs is very low.
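To illustrate the hypothesis, here is a hypothetical sketch (the class and method names are invented for this example) of why eager per-line reads defeat a chunked backing store: every `getLineContent()` call materializes a heap string, so a contribution that scans all lines of a huge model recreates the entire contents as strings.

```ts
// Hypothetical chunked buffer: the bytes live in small typed arrays,
// but every line read decodes a fresh JS string onto the heap.
class ChunkedBuffer {
  constructor(private readonly chunks: Uint8Array[]) {}

  getLineContent(lineNumber: number): string {
    // Decoding on demand allocates a new string per call.
    return new TextDecoder().decode(this.locateLine(lineNumber));
  }

  private locateLine(lineNumber: number): Uint8Array {
    // Placeholder lookup; a real implementation keeps a line-start index.
    return this.chunks[0] ?? new Uint8Array();
  }
}

// An eager contribution (like indent guides used to be) pulls every line,
// forcing one heap string per line regardless of the backing store.
function eagerScan(buffer: ChunkedBuffer, lineCount: number): number {
  let total = 0;
  for (let line = 1; line <= lineCount; line++) {
    total += buffer.getLineContent(line).length;
  }
  return total;
}
```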
Most recent one I got was #184298; I will check for other similar issues in my list.
We discussed the latest learnings and issues we found offline; here is a gist of them: …
Great finds, @rebornix!
Thanks @alexdima! I agree, it's not worth optimizing for such a corner case.

Great suggestion. I added an additional check in the text model via #193309 to validate whether the model is too large for any expensive heap operations:
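A rough sketch of such a guard; the threshold value and names below are assumptions for illustration, not the actual change (see #193309 for that):

```ts
// Assumed limit: beyond this, materializing the model as one string
// risks exhausting the V8 heap.
const LARGE_MODEL_HEAP_LIMIT = 256 * 1024 * 1024; // bytes

class TextModel {
  constructor(private readonly bufferByteLength: number) {}

  isTooLargeForHeapOperation(): boolean {
    return this.bufferByteLength > LARGE_MODEL_HEAP_LIMIT;
  }

  getValue(): string {
    if (this.isTooLargeForHeapOperation()) {
      // Refuse to concatenate the whole model into a single string.
      throw new Error('Model is too large for heap operations');
    }
    return this.concatenateChunks();
  }

  private concatenateChunks(): string {
    return ''; // placeholder for joining the underlying chunks
  }
}
```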
@bpasero not just …
Explore not using `string` but `ArrayBuffer` for the contents of a text editor, which brings higher memory limits. See #167719 (comment).

Original issue (now outdated)
I see how #42839 introduced a way to avoid opening large files by telling the user to quit and restart with a larger `--max-memory`. First of all, I think asking the user to confirm opening a very large file is good, because it may result in freezes or slowdowns. However, I am no longer able to reproduce the endless freeze, even when opening a very large file (see comment [1] for how to create a large file). Maybe, given our work on sandboxing, this issue is now different or no longer exists?

At the same time, with the Electron 22 update we get https://www.electronjs.org/blog/v8-memory-cage, so I wonder if `--max-memory` is even still supported (maybe Deepak could clarify).

This issue is to:
- check whether `--max-memory` still works with Electron 22

[1] How to generate a large file
Inspired by this page, the following script that you can run from a macOS terminal produces a 4.23 GB random file for me after ~2min:
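(The shell script itself is omitted above; as a stand-in, not the original script, a Node.js version that produces a file of comparable size could look like the sketch below. The file name and target size are assumptions.)

```ts
import { createWriteStream } from 'fs';
import { randomBytes } from 'crypto';

const TARGET_BYTES = 4.2 * 1024 ** 3; // roughly 4.2 GiB of text
const CHUNK = 1024 * 1024;            // write ~1 MiB per iteration

const out = createWriteStream('large.txt');
let written = 0;

function writeMore(): void {
  while (written < TARGET_BYTES) {
    // base64 expands 3 bytes to 4 chars; trim back to CHUNK characters.
    const text = randomBytes(CHUNK).toString('base64').slice(0, CHUNK) + '\n';
    written += text.length;
    if (!out.write(text)) {
      out.once('drain', writeMore); // resume once the buffer flushes
      return;
    }
  }
  out.end();
}

writeMore();
```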
//cc @deepak1556