terraform-ls.exe uses ~500 MB of RAM for each vscode window #986
Hi @KyleKolander
It looks like you're running multiple VS Code and therefore extension instances which don't know about each other and have relatively limited ways of finding each other. The server itself does support sessions, such that the same process can be reused by multiple clients, but I have not looked into how clients could actually make use of that. It also supports TCP as an alternative transport protocol (in addition to stdio) which could make it easier to do this, but this isn't supported yet on the extension side, see hashicorp/vscode-terraform#755 for more. One of the reasons this is quite an untried/under-documented area is because vast majority of users use VS code workspaces to manage multiple repositories, which already does what you suggest and also reuses the same client/extension and VS Code instance. FWIW You don't have to encode workspace configuration into any file. You can just add each repo as a folder on-the-fly: Some other editors have similar features, all of which bind to the native LSP workspace folders capability.
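For readers who do want a persisted workspace, the same multi-root setup can be captured in a `.code-workspace` file. The paths below are purely hypothetical, for illustration:

```json
{
  "folders": [
    { "path": "C:/Sandbox/repo-one" },
    { "path": "C:/Sandbox/repo-two" }
  ]
}
```

Opening this file gives one VS Code instance, one extension instance, and one language server process covering both repositories.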
As mentioned, by using workspace folders you can reduce memory usage by cutting the number of server instances down to one.

That said, I agree that ~500 MB of memory is still not great. ~300 MB is approximately the currently expected memory usage for a single instance without any open/indexed files. The majority of this is consumed by embedded provider schemas, which let us provide a decent UX even for users who do not have Terraform installed or cannot run it.

Would you mind providing us a memory profile so we can see what the extra 200 MB are made of? That would help us understand whether there are any memory leaks or opportunities for optimisation.

We have some plans to reduce the baseline memory usage by obtaining the schemas via the Registry API, but that's not a trivial project and still comes with some trade-offs. Specifically, there are users who work in air-gapped environments or for any other reason cannot reach such an API while editing. For those we will need to continue providing a decent user experience. We could possibly let users opt in or out of various schema sources - i.e. if you wish to trade network calls for low memory, you would be able to do so. This would be unlikely to become the default option, though.
Hi @radeksimko - I tried taking a memory profile, but I must be doing something wrong because it's not working (i.e., no memory dump file is getting created). Suggestions? I see this in the output, so I know it's picking up my settings change. This is the relevant portion of my settings file:

```json
"terraform.languageServer": {
    "external": true,
    "args": [
        "serve",
        "-memprofile=C:/Sandbox/Dumps/terraform-ls-memprofile-{{timestamp}}.prof"
    ],
    "ignoreSingleFileWarning": false
}
```
@KyleKolander The memory/CPU profile is generally written after the process exits, i.e. when you (ideally gracefully) close VS Code, which then (gracefully) exits the language server process. You likely won't see any file, or only an empty file, at runtime.
@radeksimko - 😄 I had already tried that, but for good measure I tried again (closed the folder and then the application, vs. just closing the application). I even tried changing the path to eliminate the Go template:
I don't have other ideas unfortunately, but I recall some Windows users struggling with profile collection due to Windows process exit working differently from Unix (macOS/Linux), with the language server somehow being unable to catch the signals and essentially ending up "killed" without any opportunity to do anything like writing the profile to disk. I never had the chance to debug it, though, and the docs for the Go API we use suggest that it should just work, unless I'm misreading the docs or our implementation. I guess I will need to spin up a Windows VM to reproduce and get to the bottom of the problem. Sorry!
OK, I'll wait to hear back from you about what to try next. Regarding workspaces, I've used that feature occasionally, but unfortunately VS Code doesn't support multiple monitors very well. I have three monitors, and I often have code from multiple repositories open on different monitors. With full-blown Visual Studio, I can drag an editor window/tab out of VS and let it float on its own, and even maximize it on a different monitor. The best "solution" I've seen is to drag the VS Code window so it spans multiple monitors and dock different editor windows accordingly, but that's far from ideal - not to mention the management of workspace JSON files.
Understood about the workspace issue. I'm not sure we can do anything about that on our side, as explained. From what I can tell, other extensions and language servers would work the same way, i.e. launch one LS instance per client (VS Code instance), even if each process consumes less memory.
If we could get one of two things to work, it would be good either way:
In theory we could make it work with the default stdio (stdin/stdout) transport, but I don't see any practical way of discovering the running language server. The only reliable way would be specifying a PID, but that wouldn't really scale well, as you'd have to update the settings somehow on every re-launch. What seems more reasonable for this use case is the TCP transport, as drafted in hashicorp/vscode-terraform#755. The idea there is that you'd spin up a single LS instance with a particular flag.
The upcoming release contains significant improvements in memory usage (and consequently launch time), which were implemented as part of #1071; that PR contains some more implementation details. The TL;DR will be included in the Changelog.
We currently have no intention of supporting the reuse of the LS between individual client instances (e.g. individual VS Code windows) beyond what I already mentioned (manual configuration with TCP transport) - hence I am going to close this, but we'd be happy to discuss any resource-related concerns once we cut the next release, if issues persist then. Thank you for your patience and understanding.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
Versions
Extension
VS Code
Operating System
Terraform Version
Steps To Reproduce
Expected Behavior
Actual Behavior
Additional context
None that seems relevant.