
terraform-ls.exe uses ~500 MB of RAM for each vscode window #986

Closed
KyleKolander opened this issue Jul 1, 2022 · 11 comments
Labels
bug Something isn't working

@KyleKolander

Versions

Extension

2.23.0

VS Code

1.68.1

Operating System

Edition	Windows 10 Enterprise
Version	20H2
Installed on	‎7/‎1/‎2021
OS build	19042.1766
Experience	Windows Feature Experience Pack 120.2212.4180.0

Terraform Version

1.1.7

Steps To Reproduce

  1. code path/to/terraform/repository1
  2. code path/to/terraform/repository2
  3. code path/to/terraform/repository3
  4. Verify memory usage

Expected Behavior

  1. terraform-ls.exe shouldn't consume nearly 2 GB RAM for 3 repositories (that aren't all that large)
  2. Ideally, the extension could share a single language server across multiple instances of vscode

Actual Behavior

  1. terraform-ls.exe consumes an objectively large amount of RAM
  2. Each instance of vscode spins up its own instance of terraform-ls.exe

Additional context

None that seems relevant.

[Screenshot: terraform-ls.exe memory usage]

@KyleKolander KyleKolander added the bug Something isn't working label Jul 1, 2022
@radeksimko radeksimko transferred this issue from hashicorp/vscode-terraform Jul 4, 2022
@radeksimko
Member

Hi @KyleKolander
This looks like a few different problems, so I'll break them down and try to answer in detail below.

Ideally, the extension could share a single language server across multiple instances of vscode

It looks like you're running multiple VS Code instances, and therefore multiple extension instances, which don't know about each other and have relatively limited ways of discovering each other.

The server itself does support sessions, such that the same process can be reused by multiple clients, but I have not looked into how clients could actually make use of that. It also supports TCP as an alternative transport protocol (in addition to stdio) which could make it easier to do this, but this isn't supported yet on the extension side, see hashicorp/vscode-terraform#755 for more.

One of the reasons this is quite an untried/under-documented area is that the vast majority of users use VS Code workspaces to manage multiple repositories, which already does what you suggest and also reuses the same client/extension and VS Code instance.

FWIW, you don't have to encode workspace configuration into any file. You can just add each repo as a folder on-the-fly:
[Screenshots: adding a folder to the workspace on-the-fly in VS Code]

Some other editors have similar features, all of which bind to the native LSP workspace folders capability.
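If you do want to persist a set of folders, the workspace file itself is small. A minimal sketch, reusing the placeholder paths from the repro steps (the filename is illustrative):

```json
// repos.code-workspace (illustrative name)
{
  "folders": [
    { "path": "path/to/terraform/repository1" },
    { "path": "path/to/terraform/repository2" },
    { "path": "path/to/terraform/repository3" }
  ]
}
```

Opening this file with `code repos.code-workspace` gives one VS Code instance, one extension instance, and one language server for all three repositories.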

terraform-ls.exe consumes an objectively large amount of RAM
terraform-ls.exe shouldn't consume nearly 2 GB RAM for 3 repositories (that aren't all that large)

As mentioned, by using workspace folders you can reduce memory usage by cutting the number of server instances down to one. That said, I agree that ~500 MB of memory is still not great. Around 300 MB is the currently expected memory usage for a single instance without any open/indexed files. The majority of this is consumed by embedded provider schemas, which provide a decent UX even for users who do not have Terraform installed or cannot run terraform init just to install providers. You can read more about memory usage at https://github.com/hashicorp/terraform-ls/blob/main/docs/benchmarks.md#memory-usage

Would you mind providing us with a memory profile so we can see what the extra 200 MB is made of? That would help us understand whether there are any memory leaks or opportunities for optimisation.

We have some plans to reduce the baseline memory usage by obtaining the schemas via the Registry API, but that's not a trivial project and still comes with some trade-offs. Specifically, there are users who work in air-gapped environments or for other reasons cannot reach such an API while editing; for those we will need to continue providing a decent user experience. We could possibly let users opt in or out of various schema sources, i.e. if you wish to trade network calls for lower memory, you would be able to do so. This would be unlikely to become the default option, though.

@KyleKolander
Author

Hi @radeksimko -

I tried taking a memory profile, but I must be doing something wrong because it's not working (i.e., no memory dump file getting created). Suggestions?

I see this in the output, so I know it's picking up my settings change:
Launching language server: c:\Users\redacted\.vscode\extensions\hashicorp.terraform-2.23.0-win32-x64\bin\terraform-ls.exe serve -memprofile=C:/Sandbox/Dumps/terraform-ls-memprofile-{{timestamp}}.prof

This is the relevant portion of my settings file:

"terraform.languageServer": {
    "external": true,
    "args": [
        "serve",
        "-memprofile=C:/Sandbox/Dumps/terraform-ls-memprofile-{{timestamp}}.prof"
    ],
    "ignoreSingleFileWarning": false
}

Here you can see it is over 800 MB:
[Screenshot: terraform-ls.exe using over 800 MB]

@radeksimko
Member

radeksimko commented Jul 5, 2022

@KyleKolander The memory/CPU profile is generally written after the process exits, i.e. when you (ideally gracefully) close VS Code, which then (gracefully) exits the language server process. You likely won't see any file, or only an empty file, at runtime.

@KyleKolander
Author

@radeksimko - 😄 I had already tried that, but for good measure I tried again (closed the folder and then the application, vs. just closing the application). I even tried changing the path to eliminate the Go template: "-memprofile=C:/Sandbox/Dumps/memprofile.log", but no dice. Other ideas?

@radeksimko
Member

I don't have other ideas, unfortunately, but I recall some Windows users struggling with profile collection because Windows handles process exit differently from Unix (macOS/Linux): the language server was somehow unable to catch the signals and essentially ended up killed without any opportunity to do anything, like writing the profile to disk. I never had the chance to debug it, though, and the docs for the Go API we use suggest that it should just work, unless I'm misreading the docs or our implementation.

I guess I will need to spin up a Windows VM to reproduce and get to the bottom of the problem. Sorry!

@KyleKolander
Author

OK. I'll wait to hear back from you for what to try next.

Regarding workspaces: I've used that feature occasionally, but unfortunately VS Code doesn't support multiple monitors very well. I have three monitors and often have code from multiple repositories open on different monitors. With full-blown Visual Studio, I can drag an editor window/tab out of VS, let it float on its own, and even maximize it on a different monitor. The best "solution" I've seen is to drag the VS Code window so it spans multiple monitors and dock different editor windows accordingly, but that's far from ideal - not to mention the management of workspace JSON files.

@radeksimko
Member

Understood about the workspace issue. I'm not sure we can do anything about that on our side, as explained. From what I can tell, other extensions and language servers would work the same way, i.e. launch 1 instance of LS per client (VS Code instance), even if each process consumes less memory.

@KyleKolander
Author

If we could get one of two things to work, it would be good either way:

  1. Extension would support a configuration option to designate an already running language server instead of launching its own. It seems like this wouldn't be all that difficult, but you would know better than me.
  2. Language server - and by extension, the Extension (see what I did there... ha ha) - would consume a lot less memory.

@radeksimko
Member

Extension would support a configuration option to designate an already running language server instead of launching its own. It seems like this wouldn't be all that difficult, but you would know better than me.

In theory we could make it work with the default stdio (stdin/stdout) transport, but I don't see any practical way of discovering the running language server. The only reliable way would be specifying PID, but that wouldn't really scale well as you'd have to update the settings somehow on every re-launch.

What seems more reasonable for this use case is the TCP transport, as drafted in hashicorp/vscode-terraform#755. The idea there is that you'd spin up a single LS instance with a particular flag (-port=XXXX) which makes the server listen on that port, solving the discoverability problem. Then you'd put that port number into your configuration, so instead of launching a new process, the extension reaches out to localhost:PORT-NUMBER.

@radeksimko
Member

radeksimko commented Oct 11, 2022

The upcoming release contains significant improvements in memory usage (and consequently launch time), implemented as part of #1071, which contains some more implementation details. The TL;DR is included in the Changelog:

  • Reduced runtime memory consumption from static ~572MB (representing ~220 providers) to more dynamic depending on providers in use.
    For example, no configuration (no provider requirements) should consume around 10MB, indexed folder w/ hashicorp/aws requirement ~70MB.
  • Reduced launch time from ~ 2 seconds to 1-3 ms.

We currently have no plans to support reusing the LS between individual client instances (e.g. individual VS Code windows) beyond what I already mentioned (manual configuration with TCP transport), hence I am going to close this. That said, we'd be happy to discuss any resource-related concerns once we cut the next release, if it still causes issues then.

Thank you for your patience and understanding.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 11, 2022