
internal: Send less data during textDocument/completion if possible #18167

Merged (6 commits) on Sep 30, 2024

Conversation

@SomeoneToIgnore (Contributor) commented Sep 23, 2024

Similar to #15522, this stops sending extra data during textDocument/completion when that data is covered by the client's completion resolve capabilities, and sends those fields only in completionItem/resolve responses instead.
Currently, rust-analyzer sends back all fields (including potentially huge docs) for every completion item, which can make responses large.

As with the other PR, this one aims to keep the changes minimal and does not remove the extra computation for such fields; instead, it just filters them out before sending the response to the client.

The PR omits primitive boolean and integer fields such as deprecated, preselect, insertTextFormat, insertTextMode, etc., and also additionalTextEdits. The latter looks very dangerous to compute for each completion item (as the spec says we ought to when no corresponding resolve capability is provided) due to the diff computations, and because this code has lived in the resolve handler for some time.
It would be good to resolve that one lazily too; please let me know if that is ok to do.
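To make the mechanism concrete, here is a minimal sketch (illustrative only, not rust-analyzer's actual code) of what "filter before sending" means: fields the client has listed under its completion resolveSupport are dropped from the initial response and only filled in later by completionItem/resolve.

```rust
use std::collections::HashSet;

/// Illustrative stand-in for an LSP completion item; the field names follow
/// the LSP spec, everything else here is made up for the sketch.
#[derive(Default)]
struct CompletionItem {
    label: String,
    documentation: Option<String>,
    detail: Option<String>,
    deprecated: Option<bool>, // primitive fields like this stay eagerly sent
}

/// Drop the fields the client promised to ask for via `completionItem/resolve`.
fn filter_for_initial_response(item: &mut CompletionItem, resolve_support: &HashSet<&str>) {
    if resolve_support.contains("documentation") {
        item.documentation = None;
    }
    if resolve_support.contains("detail") {
        item.detail = None;
    }
    // Anything the client did not list must keep being computed and sent eagerly.
}

fn main() {
    let caps: HashSet<&str> = ["documentation", "detail"].into_iter().collect();
    let mut item = CompletionItem {
        label: "inlay_hints".to_string(),
        documentation: Some("very long markdown docs...".to_string()),
        detail: Some("fn inlay_hints(...)".to_string()),
        deprecated: Some(false),
    };
    filter_for_initial_response(&mut item, &caps);
    assert!(item.documentation.is_none() && item.detail.is_none());
}
```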

When tested with Zed, which only declares documentation and additionalTextEdits in its client completion resolve capabilities, rust-analyzer sends almost 3 times fewer characters:
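For reference, the client advertises which fields it can resolve lazily via the standard completionItem.resolveSupport capability in the initialize request. Built below with serde_json just to show the shape; the property list mirrors what the text above says Zed declared before its patch.

```rust
use serde_json::json;

fn main() {
    // Capability shape defined by the LSP spec; only `documentation` and
    // `additionalTextEdits` are listed, matching the Zed setup described above.
    let capabilities = json!({
        "textDocument": {
            "completion": {
                "completionItem": {
                    "resolveSupport": {
                        "properties": ["documentation", "additionalTextEdits"]
                    }
                }
            }
        }
    });
    println!("{capabilities:#}");
}
```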

Request:

{"jsonrpc":"2.0","id":104,"method":"textDocument/completion","params":{"textDocument":{"uri":"file:///Users/someonetoignore/work/rust-analyzer/crates/ide/src/inlay_hints.rs"},"position":{"line":90,"character":14},"context":{"triggerKind":1}}}

Before: 381944 characters
before.json

After: 140503 characters
after.json

After Zed's patch to enable all possible resolving: 84452 characters
after-after.json

@rustbot added the S-waiting-on-review label (Status: Awaiting review from the assignee but also interested parties) on Sep 23, 2024
@SomeoneToIgnore (Contributor, Author) commented Sep 23, 2024

Of note: the previous PR caused a regression due to a VSCode bug: #15604
I have not found anything odd when testing this change in VSCode, but I would appreciate somebody else double-checking that.

@ChayimFriedman2 (Contributor)

This applies to #15522 as well, but: is there a specific advantage to not sending these characters? They are communicated over stdout, which should be ~free. And resolving them lazily means redoing the computation.

@SomeoneToIgnore (Contributor, Author) commented Sep 23, 2024

1. ~free does not apply to the actual [de]serialization and whatever processing the client has to do when accepting the parameters.
   E.g. Zed parses the markdown for every completion item: https://github.com/zed-industries/zed/blob/03a2bacbf476d538a3fdf143809d4d161675f3d4/crates/project/src/lsp_store.rs#L6843
   which is totally redundant if we never even see those docs because nothing was selected from the list.

2. However ~free the messaging is, does it make any sense to send an extra ~235 KB (for the example above) just because we can?
   While further results may send less, textDocument/completion is called very frequently during continuous typing, so we are talking about extra megabytes sent for nothing here.

They are communicated over stdout

This is not true in some cases, e.g. Zed's remote mechanism: there, we are sending requests over the wire.

And lazily resolving it means redoing the computation.

Correct me if I'm wrong, but isn't Salsa supposed to take care of memoization for this case, at least partially?
And if/when someone pushes this further and removes the extra computations, there should be even less of that.


Edit: after improving Zed's capabilities and stripping out every possible field, the character difference is even more drastic: from 381944 characters down to 84452 characters.

@ChayimFriedman2 (Contributor)

isn't Salsa supposed to take care of memoization for this case, at least partially?

Salsa will take care of whatever hits the DB. But completion does many things beyond raw DB accesses, and those won't be cached by Salsa.
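Roughly what that distinction means, sketched in the style of the salsa query-group macro rust-analyzer used around this time (details vary between salsa versions, so treat this as an illustration rather than rust-analyzer's actual setup): only values that go through a query are memoized; everything assembled on top of them, like the final LSP payload, is recomputed per request.

```rust
#[salsa::query_group(DemoStorage)]
trait DemoDatabase {
    #[salsa::input]
    fn file_text(&self, file_id: u32) -> String;

    // Memoized by salsa: recomputed only when `file_text(file_id)` changes.
    fn line_count(&self, file_id: u32) -> usize;
}

fn line_count(db: &dyn DemoDatabase, file_id: u32) -> usize {
    db.file_text(file_id).lines().count()
}

#[salsa::database(DemoStorage)]
#[derive(Default)]
struct DemoDb {
    storage: salsa::Storage<DemoDb>,
}

impl salsa::Database for DemoDb {}

fn main() {
    let mut db = DemoDb::default();
    db.set_file_text(0, "a\nb\nc".to_string());
    assert_eq!(db.line_count(0), 3); // computed
    assert_eq!(db.line_count(0), 3); // served from the salsa cache

    // Rendering docs, building completion items, serializing JSON, etc. all
    // happen outside of queries like these, so they run again on every request.
}
```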

Anyway, my main concern was that if there is no reason to minimize the transfer, we will be doing more work unnecessarily. Since I think you've demonstrated there is utility in minimizing the transfer, I no longer have a concern, given that we only resolve the selected item.

SomeoneToIgnore added a commit to zed-industries/zed that referenced this pull request Sep 23, 2024
Related to rust-lang/rust-analyzer#18167

* Declare more completion item fields in the client completion resolve capabilities
* Do resolve completions even if their docs are present
* Instead, do not resolve completions that could not be resolved when handling the remote client resolve requests
* Do replace the old lsp completion data with the resolved one

Release Notes:

- Improved completion resolve mechanism
@Veykril (Member) commented Sep 24, 2024

It really feels like we ought to have some simplistic extra request × file cache for resolve-capable requests with very simple inputs, cleared after any document change (though I fear this might be useless depending on the order of the completion request and the change notification; I'd hope the notification comes first). I am not a fan of the "recompute the entire thing and pick out the one item we are interested in" approach we use for almost all, if not all, resolve-capable requests. On the other hand, it would be nice to skip computing the resolvable fields altogether in the initial computation (which would make that cache pointless).
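For illustration, a hypothetical sketch of that cache idea (all names invented here, not actual rust-analyzer code): keep the last computed completion list per file so completionItem/resolve can pull the requested item out of it, and drop the entry on any change to that file.

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct CompletionItem {
    label: String,
    documentation: Option<String>,
}

#[derive(Default)]
struct CompletionCache {
    // Keyed by file URI; holds the last completion list computed for that file.
    by_file: HashMap<String, Vec<CompletionItem>>,
}

impl CompletionCache {
    fn store(&mut self, uri: &str, items: Vec<CompletionItem>) {
        self.by_file.insert(uri.to_owned(), items);
    }

    // Called from the `completionItem/resolve` handler instead of recomputing
    // the whole completion list.
    fn lookup(&self, uri: &str, label: &str) -> Option<&CompletionItem> {
        self.by_file.get(uri)?.iter().find(|it| it.label == label)
    }

    // Called from `textDocument/didChange`: any edit invalidates the entry,
    // which only helps if the change notification arrives before the next
    // completion request (the ordering concern raised above).
    fn invalidate(&mut self, uri: &str) {
        self.by_file.remove(uri);
    }
}

fn main() {
    let mut cache = CompletionCache::default();
    cache.store("file:///demo.rs", vec![CompletionItem {
        label: "inlay_hints".to_string(),
        documentation: Some("docs...".to_string()),
    }]);
    assert!(cache.lookup("file:///demo.rs", "inlay_hints").is_some());
    cache.invalidate("file:///demo.rs");
    assert!(cache.lookup("file:///demo.rs", "inlay_hints").is_none());
}
```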

@SomeoneToIgnore (Contributor, Author) commented Sep 24, 2024

we do for almost all if not all

I think it is indeed all of them, and the original reason was that an LSP server should stay stateless.

Fully agree with the extra-computations sentiment (it sounds almost like "water is wet" to me), but what practical, actionable steps can I take to move this PR forward?
I feel that introducing an entire caching layer is quite a large and dangerous task to take on just along the way with this change.

@Veykril (Member) commented Sep 24, 2024

Oh no, sorry, I did not mean to imply that this should block this PR (I just haven't reviewed it yet). That comment is a separate issue and discussion entirely, imo. If this turns out to cause problems for VSCode for whatever reason (or in general), we can just disable it for the time being (for VSCode specifically, or for everyone, respectively).

@jhgg (Contributor) commented Sep 24, 2024

This applies to #15522 as well, but: is there a specific advantage to not sending these characters? They are communicated over stdout, which should be ~free. And resolving them lazily means redoing the computation.

Don't forget about people who use the LSP over a network boundary, e.g. the VSCode remote SSH extension. Lowering the payload size can significantly improve perceived completion speed.

@Veykril (Member) commented Sep 25, 2024

Will merge this next Monday.

@Veykril added the S-waiting-on-author label (Status: This is awaiting some action, such as code changes or more information, from the author) and removed the S-waiting-on-review label on Sep 25, 2024
@Veykril (Member) commented Sep 30, 2024

Let's see how this goes!
@bors r+

@bors (Contributor) commented Sep 30, 2024

📌 Commit c03f5b6 has been approved by Veykril

It is now in the queue for this repository.

@bors (Contributor) commented Sep 30, 2024

⌛ Testing commit c03f5b6 with merge dfe6d50...

@bors (Contributor) commented Sep 30, 2024

☀️ Test successful - checks-actions
Approved by: Veykril
Pushing dfe6d50 to master...

SomeoneToIgnore added a commit to zed-industries/zed that referenced this pull request Oct 9, 2024
…ts (#18907)

After rust-lang/rust-analyzer#18167 and certain
people who type and complete rapidly, it turned out that we have not
waited for `completionItem/resolve` to finish before applying the
completion results.

Release Notes:

- Fixed completion items applied improperly on fast typing
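For context, the editor-side fix described in that commit amounts to something like the following hypothetical sketch (not Zed's actual code): when the user confirms a completion, wait for the completionItem/resolve round-trip to finish before applying the item, since the lazily resolved fields may include additionalTextEdits.

```rust
// Hypothetical editor-side sketch, not Zed's actual code.
#[derive(Clone)]
struct CompletionItem {
    primary_edit: String,
    additional_text_edits: Vec<String>,
}

/// Stand-in for the `completionItem/resolve` round-trip to the language server.
fn resolve(mut item: CompletionItem) -> CompletionItem {
    item.additional_text_edits.push("use foo::Bar;".to_string());
    item
}

fn confirm_completion(item: CompletionItem) {
    // Finish the resolve round-trip *before* applying anything; applying the
    // unresolved item during fast typing is exactly the bug fixed here.
    let resolved = resolve(item);
    apply_edit(&resolved.primary_edit);
    for edit in &resolved.additional_text_edits {
        apply_edit(edit);
    }
}

fn apply_edit(edit: &str) {
    println!("applying edit: {edit}");
}

fn main() {
    confirm_completion(CompletionItem {
        primary_edit: "Bar::new()".to_string(),
        additional_text_edits: Vec::new(),
    });
}
```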
noaccOS pushed a commit to noaccOS/zed that referenced this pull request Oct 19, 2024
Related to rust-lang/rust-analyzer#18167

* Declare more completion item fields in the client completion resolve capabilities
* Do resolve completions even if their docs are present
* Instead, do not resolve completions that could not be resolved when handling the remote client resolve requests
* Do replace the old lsp completion data with the resolved one

Release Notes:

- Improved completion resolve mechanism
noaccOS pushed a commit to noaccOS/zed that referenced this pull request Oct 19, 2024
…ts (zed-industries#18907)

After rust-lang/rust-analyzer#18167 and certain
people who type and complete rapidly, it turned out that we have not
waited for `completionItem/resolve` to finish before applying the
completion results.

Release Notes:

- Fixed completion items applied improperly on fast typing