
lsp: Handle responses in background thread #12640

Merged 3 commits into main on Jun 5, 2024

Conversation

@bennetbo (Contributor) commented Jun 4, 2024

Release Notes:

  • Improved performance when handling large responses from language servers
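The core idea of the change, parsing language server responses on a background thread so the main thread stays responsive, can be sketched roughly as follows. This is a minimal, self-contained illustration using std channels and a made-up `<id>:<payload>` wire format; Zed's actual implementation uses its own executor and real JSON-RPC parsing, and the `Response`, `parse_response`, and `spawn_parser` names here are hypothetical.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for a decoded LSP response; Zed's real types differ.
#[derive(Debug, PartialEq)]
struct Response {
    id: u64,
    payload: String,
}

// Parse a raw wire message of the form "<id>:<payload>" (a toy format,
// not real JSON-RPC). Returns None for malformed input.
fn parse_response(raw: &str) -> Option<Response> {
    let (id, payload) = raw.split_once(':')?;
    Some(Response {
        id: id.parse().ok()?,
        payload: payload.to_string(),
    })
}

// Spawn a background thread that parses incoming raw messages and forwards
// the decoded responses, so the caller's thread never blocks on parsing.
fn spawn_parser(raw_rx: mpsc::Receiver<String>) -> mpsc::Receiver<Response> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for raw in raw_rx {
            if let Some(resp) = parse_response(&raw) {
                let _ = tx.send(resp);
            }
        }
    });
    rx
}

fn main() {
    let (raw_tx, raw_rx) = mpsc::channel();
    let parsed = spawn_parser(raw_rx);
    raw_tx.send("7:hover result".to_string()).unwrap();
    drop(raw_tx); // close the channel so the background thread exits
    let resp = parsed.recv().unwrap();
    assert_eq!(resp.id, 7);
}
```

The payoff is that large responses are deserialized off the main thread, which only receives already-decoded values over the channel.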

bennetbo and others added 2 commits June 4, 2024 15:01
Co-Authored-By: Piotr <piotr@zed.dev>
@cla-bot cla-bot bot added the cla-signed The user has signed the Contributor License Agreement label Jun 4, 2024
@bennetbo bennetbo changed the title from "lsp: Handle response in background thread" to "lsp: Handle responses in background thread" Jun 4, 2024
@osiewicz osiewicz merged commit 9824e40 into main Jun 5, 2024
8 checks passed
@osiewicz osiewicz deleted the handle-lsp-response-in-background branch June 5, 2024 21:06
osiewicz added a commit that referenced this pull request Jan 14, 2025
This is a follow-up to #12640.
While profiling the latency of working with a project containing 8192 diagnostics, I noticed that even though we parse LSP messages into a generic message struct on a background thread, we can still block the main thread: the conversion from that generic struct into the concrete LSP message (for use by the callback) still happens on the main thread.
This PR significantly constrains what a message callback can capture, so that it can be executed on any thread; we also move message conversion onto the background thread.

This improved the throughput of my 8192-diagnostics benchmark from 40s to ~20s to send out all diagnostics after saving. The main thread now spends most of its time updating our diagnostic sets, which can probably be improved as well.
github-merge-queue bot pushed a commit that referenced this pull request Jan 14, 2025
This is a follow-up to #12640.
While profiling the latency of working with a project containing 8192 diagnostics, I noticed that even though we parse LSP messages into a generic message struct on a background thread, we can still block the main thread: the conversion from that generic struct into the concrete LSP message (for use by the callback) still happens on the main thread.
This PR significantly constrains what a message callback can capture, so that it can be executed on any thread; we also move message conversion onto the background thread. In practice, the new callback constraints were already satisfied by all call sites, so no code outside of the lsp crate had to be adjusted.

This improved the throughput of my 8192-diagnostics benchmark from 40s to ~20s to send out all diagnostics after saving. The main thread now spends most of its time updating our diagnostic sets, which can probably be improved as well.

Closes #ISSUE

Release Notes:

- Improved app responsiveness when working with a huge number of diagnostics.
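The follow-up's technique, constraining callbacks so they can run on any thread and doing the generic-to-typed message conversion off the main thread, can be sketched roughly like this. It is a toy illustration under stated assumptions: `GenericMessage`, `TypedMessage`, `convert`, and `handle_on_background` are all made-up names, and the `Send + 'static` bound stands in for the stricter callback constraints the PR describes.

```rust
use std::thread;

// Toy stand-ins: Zed's real types are serde-based JSON-RPC messages.
struct GenericMessage {
    body: String,
}

#[derive(Debug)]
struct TypedMessage {
    line: u32,
    text: String,
}

// Hypothetical conversion from the generic wire struct to a typed message,
// here decoding a toy "<line>|<text>" body.
fn convert(msg: GenericMessage) -> TypedMessage {
    let (line, text) = msg.body.split_once('|').unwrap_or(("0", ""));
    TypedMessage {
        line: line.parse().unwrap_or(0),
        text: text.to_string(),
    }
}

// Because the callback is bounded by Send + 'static, both the conversion
// and the callback invocation can happen on a background thread, keeping
// the main thread free.
fn handle_on_background<F>(msg: GenericMessage, callback: F) -> thread::JoinHandle<()>
where
    F: FnOnce(TypedMessage) + Send + 'static,
{
    thread::spawn(move || {
        let typed = convert(msg); // conversion runs off the main thread
        callback(typed);
    })
}

fn main() {
    let msg = GenericMessage {
        body: "12|unused variable".to_string(),
    };
    let handle = handle_on_background(msg, |typed| {
        assert_eq!(typed.line, 12);
        assert_eq!(typed.text, "unused variable");
    });
    handle.join().unwrap();
}
```

A callback that captured a non-`Send` value (say, an `Rc`) would fail to compile here, which mirrors how the PR's tighter bounds let the compiler prove every existing call site was already safe to move off the main thread.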