.ntvs_analysis.dat size and mem use #88
Sorry to hear about the perf issue... thanks for reporting. #44 includes some steps to help us diagnose the issue (as well as known workarounds). Any chance you could provide that info? Thanks in advance!
Disabling IntelliSense reduced the loading time dramatically and the VS memory footprint significantly (and therefore the frequent VS GC pauses that cause delays in the editor). Example: memory use before was consistently ~2.5GB, after disabling IntelliSense ~1.5GB. Unfortunately it is not an option for us to use it this way - IntelliSense is probably the #1 reason to use VS with the Node plugin. For this:
I could not make this work, gave up after several hours of trying. Nothing I did seemed to have any effect. No Resharper/McAfee etc. installed.
Starting from the complete dependency list of the main project and its imported private projects ("dependencies"), I created an empty project just requiring all these. That did not suffice (150MB analysis.dat file, 1GB RAM footprint of VS). But: many of these dependencies are in reality sub-dependencies of other private tool projects, leading to a fair amount of duplication. So, next step:
Which led to a tripling of the analysis.dat file, a dramatic increase in memory use (~2.9GB), and a solid 30 minutes of analysis time before the analysis.dat was updated - so that is a fairly exact replication of the issues we have with our projects, maybe a bit exaggerated. So load (memory and analysis.dat file size) is purely a function of the sizes of the js files that are part of the project, as expected - which naturally runs into issues with larger projects. There is no file hashing/skipping of redundant files involved in the analysis, I believe? That seems to be the only way to stretch the limits a bit.
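To make the hashing/skipping idea concrete, here is a minimal sketch (TypeScript; the names are invented for illustration - this is not how the NTVS analyzer currently works) of deduplicating identical files by content hash before analysis:

```ts
// Hypothetical sketch: dedupe identical files by content hash before analysis,
// so copies of the same dependency nested under several node_modules folders
// are only analyzed once. Names are invented; NTVS does not work this way today.
import { createHash } from "crypto";
import { readFileSync } from "fs";

const seenHashes = new Set<string>();

// Returns true the first time a given file content is seen, false for duplicates.
function shouldAnalyze(filePath: string): boolean {
  const hash = createHash("sha1").update(readFileSync(filePath)).digest("hex");
  if (seenHashes.has(hash)) {
    return false; // an identical copy was already analyzed elsewhere
  }
  seenHashes.add(hash);
  return true;
}
```

With something like this in front of the analyzer, the many identical copies of a package nested under different node_modules folders would only be processed once.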
Add just the "web-component-tester" dev dependency. I do not have Resharper.
I took it one more step and tested with the latest bits of the source code. Same issue: high memory usage and a lot of CPU. One thing to notice: web-component-tester, when installed on Windows, installs with errors due to long paths. This led to the following observations:
Since I am mostly not interested in the deeply nested projects which are dependencies of my dependencies, I tried to modify the source code so that overly deep npm modules would be excluded from the analysis. But when I close and reopen the solution, the file size grows to 200MB, so maybe nothing changed, or I didn't implement it correctly. CPU usage is still high during loading. Those are my observations so far.
Excellent idea, trying to patch & check. Another approach (not working for all cases, but just to test):
This excludes any path with more than one 'node_modules' folder. So something like expanding analysisIgnoredDirs with regex matching could solve all the issues. Thanks for the idea @kant2002
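For illustration, a minimal sketch (TypeScript; the names and the exact regex are hypothetical, not the actual patch) of the kind of filter described above - excluding any path that contains more than one node_modules segment:

```ts
// Hypothetical illustration of the workaround described above: exclude from the
// analysis any file whose path contains more than one "node_modules" segment.
const nestedNodeModules = /node_modules[\\/].*node_modules[\\/]/;

function isTooDeep(relativePath: string): boolean {
  return nestedNodeModules.test(relativePath);
}

// Example: direct dependencies are kept, sub-dependencies are skipped.
console.log(isTooDeep("node_modules/lodash/index.js"));                       // false
console.log(isTooDeep("node_modules/knex/node_modules/bluebird/js/main.js")); // true
```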
That's a very interesting idea... @DinoV what do you think about using this as a way to scale down IntelliSense?
I think this makes sense, maybe as both a per-project and a global option? At least per-project. But it seems like a reasonable way to dial the analysis. There may be some losses at the edges, but the greater the depth allowed, the less significant those losses are likely to be. We could also run AnalysisDriver with a similar change to get an idea of how it affects top-level modules. If there was a value where it seemed to have little impact on a lot of packages other than analysis time, we could set that as the default.
I could make a pull request with some of my changes which led to these results.
Just in case this could be helpful:
When analyzing 3 modules deep (3 node_modules in the relative path), the output is:
When analyzing 2 modules deep (2 node_modules in the relative path), the output is:
The folder depth within my own code is deeper than the folder depth within node_modules, but less explosive - unless the config value would apply to depth within node_modules only.
Ah - never mind, I see the pull request handles this well.
@objelke please take a look at my analysis.
From what I could see, as you said initially, analysis depends on the number of files. After some threshold, the average per-file analysis time increases. Could you try to extrapolate based on your subset of data?
more scary numbers:
We can see there is no golden "depth" value. For me, 2 is already out of control. To ensure we mean the same thing by "depth", in your pull request:
Full output, in case the other values are of interest - no limit:
You are right, the numbers are more scary, but they fit the same picture. Let's combine our values.
Which, if we sort by file count, gives us the following picture:
Based on this I could make the following guesses:
This leads to the conclusion that this fix is not enough, and the parser should be further improved to be more efficient.
Yes, makes perfect sense. However, let me point out that the developers in our company have been truly happy users for almost a week now with this simple fix (just limit to 1) - the environment went from 100% unusable to 100% usable. So if I could recommend: including a low-effort solution as described here can help tremendously, despite perhaps not being perfect. All that said in case the true solution would require high-effort engineering.
I really like the direction of this change, but like any analysis change, no fix is going to be perfect given the wide variety of patterns available. So a couple of ideas in that vein:
Limit depth analysis to relative paths which contain at most 2 node_modules. This should be sufficient to:
a) Capture the API of the root package.
b) Capture the API of the dependencies (1 node_modules). This is obviously a must to analyze.
c) Capture the API of sub-dependencies (2 node_modules). This is also a must, since a lot of packages are wrappers/glue for another library. For example, using gulp-typescript requires that gulp-typescript is parsed (level b) and typescript is parsed (level c), since typescript provides some config options for gulp-typescript.
Now working on creating a configuration page for my fix. The next direction from my observations is to eliminate the huge number of exceptions generated. There are two common types of exceptions:
Without PathTooLongException the debugger runs much faster, and most likely CPU usage goes down a bit too.
Fix #138: Add medium level of IntelliSense / limit the analysis depth (related to #88). Limits depth analysis to relative paths which contain at most 2 node_modules, sufficient to capture the API of the root package, of the dependencies (1 node_modules), and of sub-dependencies (2 node_modules) - the latter being a must since a lot of packages are wrappers/glue for another library (e.g. gulp-typescript requires parsing both gulp-typescript and typescript, since typescript provides some config options for gulp-typescript). Related to #88, #138. Added a Medium level of IntelliSense: the same as the Full level, but limiting analysis depth to 2 nested modules. Full mode now has a limit of 4 modules deep, which I am almost sure is practical enough but could be increased if needed. The Low level analyses 1 level deep.
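To make the proposed rule concrete, here is a hedged sketch (TypeScript, illustrative only; the real change lives in the NTVS analyzer and these names are invented) of counting node_modules depth and mapping the Low/Medium/Full levels to the 1/2/4 limits described above:

```ts
// Hypothetical sketch of the depth rule described above: "depth" is the number
// of node_modules segments in a file's relative path, and each IntelliSense
// level allows a different maximum depth (limits taken from the comment above).
type IntelliSenseLevel = "Low" | "Medium" | "Full";

const depthLimits: Record<IntelliSenseLevel, number> = {
  Low: 1,    // root package + direct dependencies only
  Medium: 2, // ...plus sub-dependencies (e.g. gulp-typescript -> typescript)
  Full: 4,   // suggested practical upper bound
};

function nodeModulesDepth(relativePath: string): number {
  return relativePath.split(/[\\/]/).filter((s) => s === "node_modules").length;
}

function includeInAnalysis(relativePath: string, level: IntelliSenseLevel): boolean {
  return nodeModulesDepth(relativePath) <= depthLimits[level];
}

// A depth-3 sub-sub-sub-dependency is skipped at Medium but kept at Full:
const p = "node_modules/a/node_modules/b/node_modules/c/index.js";
console.log(includeInAnalysis(p, "Medium")); // false
console.log(includeInAnalysis(p, "Full"));   // true
```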
Some improvements in this area are available in the latest dev build.
Just some feedback on 1.1 RC on this matter: certainly much better than 1.0 - but unfortunately still a true pain (compared to my brute-force "level 1" hack). Loading time of larger solutions with a completed analysis: 2-3+ min; without it (after deleting the .dat): much longer, 15 min++. IntelliSense settings are at "quick". Tried to force the depth ("NestedModulesLimit") to 1 via the registry - but no improvement from that (not sure I got it right - couldn't see a difference). From looking at the source code, it seems the following key should be respected for VS2013:
Will compile my own version again and dig deeper - just posting this here in case somebody has a "but you need to..." moment which could help.
A few days in now - I can confirm the behavior on 1.1 RC is still unusable for our environment. What is a bit confusing about this: .ntvs_analysis.dat is about 230 MB - that is not so horrible (compared to earlier) - scratching my head a bit - perhaps something else was changed? I don't have time to dive into this now (need to re-apply my patch to the current source code and test; all very time consuming) - but still wanted to report that something is still a bit off here. Not sure what the 1.1 release plan is - will check deeper next week; maybe there is still time for changes?
I'm also having issues with this extension's memory usage. In VS2015 Enterprise, I have four projects in a solution:
My VS memory ranges from 1.8GB to 2.5GB. I have ReSharper Ultimate, and I originally thought that was the issue. I've disabled, suspended, and uninstalled ReSharper. ReSharper is definitely not the issue. I can unload the Node.js project from my solution and memory drops to around 450MB. The Node.js project is just the yeoman template from generator-angular with a handful of small directives added.
I dug a little deeper into this and it looks like I've resolved my memory issues. The problem appears to be that nodejstools is hitting MAX_PATH for modules installed with npm 2.x, even though I'd since installed npm 3. To resolve my memory issues, I deleted node_modules and reinstalled with npm 3, which lays dependencies out flat. Memory usage of Visual Studio is now steadily 628MB, which is slightly higher than the 450MB without the Node.js project loaded, but way less than the 2.5GB that was causing VS to crash.
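For anyone wanting to check whether MAX_PATH is in play for their own project, here is a hypothetical diagnostic script (not part of NTVS; it assumes the classic 260-character Windows MAX_PATH limit) that walks node_modules and reports overly long paths:

```ts
// Hypothetical diagnostic (not part of NTVS): walk node_modules and report paths
// longer than the classic Windows MAX_PATH limit of 260 characters - the kind of
// deep nesting npm 2.x produces and npm 3's flatter install largely avoids.
import { readdirSync, statSync } from "fs";
import { join, resolve } from "path";

const MAX_PATH = 260;

function findLongPaths(dir: string, out: string[] = []): string[] {
  for (const entry of readdirSync(dir)) {
    const full = join(dir, entry);
    if (full.length > MAX_PATH) {
      out.push(full);
    }
    try {
      if (statSync(full).isDirectory()) {
        findLongPaths(full, out);
      }
    } catch {
      // very long or otherwise inaccessible paths may not be stat-able on Windows
    }
  }
  return out;
}

console.log(findLongPaths(resolve("node_modules")).length, "paths exceed MAX_PATH");
```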
@jimschubert FYI, you can check the version of npm being executed by running:
Regarding the MAX_PATH memory concerns - probably a red herring, since we would ignore those files altogether, so there's no reason they would be more memory intensive. It's likely more to do with the fact that things are deduplicated by default with npm v3, so we don't have to analyze as much stuff (see #1 in #233 (comment)).
Regarding memory usage: I have ~1.5GB used by VS + NTVS. I took a dump and analyzed it. Here are the objects which use the most memory:
Please note that I looked at which strings are held in memory; here are some examples:
Here are a couple of interesting thoughts:
I am having the same issue. With NTVS (both the Oct 22 dev release and the stable one on the main branch), I can't get VS to load my project. It freezes VS completely when I open any Node.js project. I have npm 2.14.7, and VS 2013 with Web Essentials. On the machines with VS 2015 it works for some reason (same project); this happens only on one of the machines. Given that the project hasn't changed, it looks like the traversal of the tree is now getting further and yet stops again. Question for you: is the large file size because you traverse the tree multiple times and some sort of multiple recursion is happening, causing a stack overflow?
Using VS 2013 on Win 10 x64. I've also had this issue for a while - every time I open a Node.js project it freezes, although my memory usage doesn't go up past 500MB (maybe since it's the only project in the solution). I found a really strange way to keep it from freezing, though. Before opening the solution, I open Tools > NodeJS Tools > Diagnostic Info. Then I close it, then I open the NodeJS Interactive window and let it finish loading. Then I open the solution and it doesn't freeze. I just did this 3 separate times and that exact sequence keeps it from freezing. Hope this helps.
Still a major issue... 2.6GB of RAM usage for Visual Studio alone is unacceptable.
Thank you @jimschubert: I deleted node_modules and reinstalled, as you described.
I've created a troubleshooting page to document some possible workarounds for high memory usage: https://github.com/Microsoft/nodejstools/wiki/Troubleshooting#high-memory-usage-workarounds If you are on the NTVS 1.2 Alpha, try switching to the ES6 Preview IntelliSense as a first step.
I have the same issue with NTVS 1.2 Alpha + ES6 Preview - memory usage in Task Manager is still high. @mjbvz, what could be the issue?
@kant2002 That seems high, but the raw numbers are not that useful since they do not tell us how much of that is NTVS. Depending on VS configuration, what other extensions are installed, and what you are doing or have been doing in Visual Studio, this may be expected. If none of the other solutions have worked and there is no easy repro of the problem that we can investigate, try building the source and using the VS memory profiling tools to look at what is consuming all that memory. #815 will improve this somewhat, but unless you have like thirty projects in a solution, each with super complex dependency trees, I don't imagine that alone will drop memory usage by more than 20% or so.
@mjbvz My personal issue is with strings, first of all (that's on the released Alpha 1.2):
I suspect that this was not resolved by #815, but maybe I'm wrong. I will try to find time to build NTVS from source and let you know if the issue is still there. I see that there are a lot of good changes in NTVS since the last release, so I'll give it a shot.
Please try downloading and using NTVS 1.2 Beta. We've made some significant performance improvements to the product, and the analyzer specifically. We now use the TypeScript analysis engine by default, instead of our old static analysis engine, which means we no longer need the .ntvs_analysis.dat file. Please give 1.2 Beta a try, report any issues you run into, and feel free to submit any general feedback. Thanks.
Node: v0.12.2
VS 2012 and 2013, both Ultimate
NTVS 1.0
On larger projects, memory use becomes a significant issue. I suspect it's related to the analyze function; the .ntvs_analysis.dat files for individual projects range from a few MB to >400MB.
The advice to use AnalysisIgnoredDirectories to exclude modules has not resulted in any measurable improvements. Also, the .dat file sizes stayed constant - even after playing with reference paths etc.
(Note: every project reload after a change takes ~10 minutes until the new .dat is completed, so it is hard to test many scenarios.)
Combining multiple projects in a single solution had to be abandoned at an early stage already (sadly) - the VS process was quickly approaching 4GB memory use and then often crashing. If it survived, VS became unusable (many seconds between possible keystrokes, often pausing 20sec+ before VS could be used again).
But even when just loading individual projects, load time is a drag - on slower/older laptops, 10 min.+ for a single project. Even on fast machines (SSD/16GB RAM etc.) many minutes.
Disabling intellisense may be an option - but that would invalidate the use of VS in the first place for this.
Once it's all loaded, editing performance is actually quite all right.
Launching the debugger takes a bit of time but survivable.
While debugging though, every keystroke is followed by painfully slow execution.
The projects are admittedly complex, a typical dependency list:
"dependencies": {
"bcrypt-nodejs": "0.0.3",
"bluebird": "^2.9.14",
"chai": "^2.1.2",
"chai-as-promised": "^4.3.0",
"expiring-lru-cache": "^2.1.0",
"jsonschema": "^1.0.1",
"knex": "^0.7.6",
"lodash": "^3.6.0",
"moment": "^2.9.0",
"monitor": "^0.6.10",
"mysql": "^2.5.5",
"node-cache": "^1.1.0",
"node-uuid": "^1.4.3",
"nodemailer": "^1.3.2",
"nodemailer-smtp-transport": "^1.0.2",
"passport": "^0.2.1",
"passport-http": "^0.2.2",
"remove": "^0.1.5",
"restify": "^3.0.1",
"simple-lru-cache": "0.0.1",
"when": "^3.7.2"
},
(Needless to say, every dependency has a significant node_modules subfolder itself, and so forth...)
Looking for advice.
On VS2012 vs VS2013 - reverted to using 2012; 2013's behavior was similar but overall worse - longer delays, more frequent crashes.
More specifics:
https://docs.npmjs.com/cli/link
This maps, via file system links, 4 complete shared projects (with their node_modules) into the node_modules tree of the main project - that certainly can contribute to trouble.
Disabling analysis for those isn't an option - is there any "smarter" way to include shared libs in the Visual Studio project, reducing memory use?
Final words: not complaining here about a free piece of software - when it works (on smaller projects) it's a truly excellent integration. Just hoping we can keep on using it for a growing technology stack.