internal: Send less data during `textDocument/completion` if possible #18167
Conversation
To note, the previous PR caused a regression due to a VSCode bug: #15604
This applies to #15522 as well, but: is there a specific advantage to not sending these characters? They are communicated over stdout, which should be ~free. And lazily resolving them means redoing the computation.
This is not right for some cases, e.g. Zed's remote mechanism — in that case, we're sending requests over the wire.
Correct me if I'm wrong, but isn't Salsa supposed to take care of memoization for this case, at least partially? Edit: after improving Zed's capabilities and stripping down every possible field, the char difference is even more drastic: from 381944 characters down to 84452 characters.
Salsa will take care of what hits the DB. But completion does many things beyond raw DB accesses, and those won't be cached by Salsa. Anyway, my main concern is that if there is no reason to minimize the transfer, we'll do more work unnecessarily. Since I think you've demonstrated there is utility in minimizing transfer, I no longer have a concern, considering that we only resolve for the selected item.
Related to rust-lang/rust-analyzer#18167

* Declare more completion item fields in the client completion resolve capabilities
* Do resolve completions even if their docs are present
* Instead, do not resolve completions that could not be resolved when handling the remote client resolve requests
* Do replace the old lsp completion data with the resolved one

Release Notes:

- Improved completion resolve mechanism
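The client-side switch behind those bullet points is the standard LSP completion capability `completionItem.resolveSupport`: the client lists the completion item properties it is able to fetch later via `completionItem/resolve`. A minimal sketch of such a capability payload follows; the property list is illustrative, not any particular client's real configuration.

```rust
// Illustrative sketch of the LSP client capability that allows a server to
// defer completion item fields to `completionItem/resolve`.
use serde_json::json;

fn main() {
    let completion_capabilities = json!({
        "textDocument": {
            "completion": {
                "completionItem": {
                    "resolveSupport": {
                        // Every property listed here may be left out of the
                        // initial `textDocument/completion` response and
                        // filled in later, on `completionItem/resolve`.
                        "properties": [
                            "documentation",
                            "detail",
                            "additionalTextEdits"
                        ]
                    }
                }
            }
        }
    });
    println!("{}", serde_json::to_string_pretty(&completion_capabilities).unwrap());
}
```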
It really feels like we ought to have some simplistic extra …
I think it is, indeed, and the reason for that before was the fact that an LSP server should stay stateless. I fully agree with the extra computations sentiment (it sounds almost like "water is wet" to me), but what practical, actionable steps can I take to move this PR forward?
Oh no, sorry, I did not mean to imply that this should block this PR (I just haven't reviewed it yet). That comment is a separate issue and discussion entirely, imo. If this turns out to cause problems for VSCode for whatever reason (or in general), we can just disable it for the time being (for VSCode specifically, or for all clients, respectively).
Don't forget about people who use the LSP over a network boundary, e.g. the VSCode Remote SSH extension. So lowering the payload size can significantly speed up perceived completion speed.
Will merge this next Monday.
Let's see how this goes!
☀️ Test successful - checks-actions
…ts (#18907) After rust-lang/rust-analyzer#18167 and certain people who type and complete rapidly, it turned out that we have not waited for `completionItem/resolve` to finish before applying the completion results.

Release Notes:

- Fixed completion items applied improperly on fast typing
Similar to #15522, this stops sending extra data during `textDocument/completion` if that data was set in the client's completion resolve capabilities, and sends it only during `completionItem/resolve` requests.

Currently, rust-analyzer sends back all fields (including potentially huge docs) for every completion item, which might get large.
Same as the other one, this PR aims to keep the changes minimal and does not remove the extra computations for such fields; instead, it just filters them out before sending the response to the client.
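A minimal sketch of that filtering idea, using hypothetical types rather than rust-analyzer's real conversion code: fields the client declared in its resolve capabilities are dropped from the initial response, and everything else is sent eagerly as before.

```rust
use std::collections::HashSet;

/// Hypothetical, simplified stand-in for an LSP completion item; the real
/// `CompletionItem` has many more fields.
#[derive(Debug)]
struct CompletionItem {
    label: String,
    documentation: Option<String>,
    detail: Option<String>,
    additional_text_edits: Option<Vec<String>>,
}

/// Drop the fields the client has promised it can fetch later through
/// `completionItem/resolve`, i.e. the fields listed in its
/// `completionItem.resolveSupport.properties` capability.
fn strip_lazily_resolvable(item: &mut CompletionItem, resolve_support: &HashSet<&str>) {
    if resolve_support.contains("documentation") {
        item.documentation = None;
    }
    if resolve_support.contains("detail") {
        item.detail = None;
    }
    if resolve_support.contains("additionalTextEdits") {
        item.additional_text_edits = None;
    }
}

fn main() {
    let mut item = CompletionItem {
        label: "spawn".to_owned(),
        documentation: Some("Spawns a new thread.".to_owned()),
        detail: Some("fn spawn<F, T>(f: F) -> JoinHandle<T>".to_owned()),
        additional_text_edits: Some(vec!["use std::thread;".to_owned()]),
    };

    // What Zed declared at the time of this PR: only these two properties.
    let resolve_support: HashSet<&str> =
        ["documentation", "additionalTextEdits"].into_iter().collect();

    strip_lazily_resolvable(&mut item, &resolve_support);

    // The heavy fields are deferred; `detail` was not declared, so it still
    // has to be sent eagerly.
    assert!(item.documentation.is_none());
    assert!(item.additional_text_edits.is_none());
    assert!(item.detail.is_some());
    println!("{item:?}");
}
```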
The PR omits primitive (boolean and integer) types such as `deprecated`, `preselect`, `insertTextFormat`, `insertTextMode`, etc., and `additionalTextEdits`; the latter looks very dangerous to compute for each completion item (as the spec says we ought to if no corresponding resolve capabilities are provided) due to the diff computations and the fact that this code has been part of the resolution step for some time. It would be good to resolve this lazily too; please let me know if that is ok to do.
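The lazy half of that trade-off could look roughly like the sketch below (hypothetical names, not the actual handler): the server stashes enough information in the item's `data` field to locate the completion again, and on `completionItem/resolve` it recomputes the deferred properties for the single item the client asks about, rather than for the whole list.

```rust
/// Hypothetical payload a server could stash in `CompletionItem.data` so that
/// a later `completionItem/resolve` request can locate the item again.
#[derive(Debug)]
struct CompletionData {
    file: String,
    offset: u32,
    item_index: usize,
}

/// Fields stripped from the initial reply and recomputed on demand.
#[derive(Debug)]
struct ResolvedExtras {
    documentation: Option<String>,
    additional_text_edits: Vec<String>,
}

/// Sketch of a `completionItem/resolve` handler: the potentially expensive
/// work (doc rendering, import edit diffing) runs again, but only for the one
/// item the user actually highlighted, not for every item in the list.
fn resolve_completion(data: &CompletionData) -> ResolvedExtras {
    // A real server would go back to its analysis database here; Salsa would
    // memoize the database queries, but not the per-item rendering on top.
    let _ = (&data.file, data.offset, data.item_index);
    ResolvedExtras {
        documentation: Some("Documentation rendered on demand".to_owned()),
        additional_text_edits: vec!["use some::path;".to_owned()],
    }
}

fn main() {
    let extras = resolve_completion(&CompletionData {
        file: "src/lib.rs".to_owned(),
        offset: 42,
        item_index: 0,
    });
    println!("resolved: {extras:?}");
}
```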
When tested with Zed, which only defines `documentation` and `additionalTextEdits` in its client completion resolve capabilities, rust-analyzer starts to send almost 3 times fewer characters.

Request:

- Before: 381944 characters (before.json)
- After: 140503 characters (after.json)
- After Zed's patch to enable all possible resolving: 84452 characters (after-after.json)