Currently the server only provides semantic tokens for the full document. This can increase latency when highlighting large documents, since tokens for the entire document must be recomputed.
Proposal
In general, the `textDocument/semanticTokens/range` method was designed to allow clients to request tokens just for the visible part of the document and "pre-load" more as the user scrolls.
It is therefore proposed to provide range-based highlighting, i.e. to support this method.
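A minimal sketch of what a range-based handler could look like, assuming the server already computes full-document tokens: filter the (possibly cached) token list down to the requested range before encoding the response. The `Token` and `Range` types and the `tokensInRange` helper below are hypothetical simplifications, not the server's actual types (real LSP responses use delta-encoded integer data).

```go
package main

import "fmt"

// Token is a simplified semantic token with an absolute position.
// (Hypothetical type; real LSP token data is delta-encoded.)
type Token struct {
	Line, StartChar, Length int
	Type                    string
}

// Range mirrors an LSP range by line numbers only, for brevity.
type Range struct {
	StartLine, EndLine int
}

// tokensInRange sketches how a textDocument/semanticTokens/range handler
// could reuse full-document tokens: compute (or cache) all tokens once,
// then return only those falling inside the requested range.
func tokensInRange(all []Token, r Range) []Token {
	var out []Token
	for _, t := range all {
		if t.Line >= r.StartLine && t.Line <= r.EndLine {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	all := []Token{
		{Line: 0, StartChar: 0, Length: 8, Type: "keyword"},
		{Line: 5, StartChar: 2, Length: 4, Type: "property"},
		{Line: 42, StartChar: 0, Length: 6, Type: "type"},
	}
	visible := tokensInRange(all, Range{StartLine: 0, EndLine: 10})
	fmt.Println(len(visible)) // prints 2: only tokens on lines 0..10
}
```

Note this only reduces response size, not computation; the larger win would come from not building tokens outside the range in the first place.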
@ndubey-adobe this is not on our near-term roadmap, if that is what you're asking. We would, however, happily review PRs for that functionality.
Just curious: is there anything in particular that motivates you to address this problem? Is the highlighting slow in any particular client or circumstance?
It may be necessary to change the `SemanticTokensInFile` function interface in `hcl-lang`, or to add a new one that accepts a range.
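One way such an extension could be shaped, as a hedged sketch: keep the existing whole-file entry point and have it delegate to a new range-aware variant, so callers opt in without breaking the current interface. The names (`semanticTokensInFile`, `semanticTokensInFileRange`, `LineRange`, `tok`) and the per-line tokenization are assumptions for illustration, not the actual `hcl-lang` API.

```go
package main

import "fmt"

// LineRange is a hypothetical inclusive line range (stand-in for an
// hcl.Range-style argument a real API would likely take).
type LineRange struct{ Start, End int }

// tok is a placeholder token type for this sketch.
type tok struct {
	Line int
	Text string
}

// semanticTokensInFile stands in for the existing whole-file entry point;
// it now delegates to the range variant covering the entire document.
func semanticTokensInFile(lines []string) []tok {
	return semanticTokensInFileRange(lines, LineRange{0, len(lines) - 1})
}

// semanticTokensInFileRange is the proposed range-aware variant: it walks
// only the requested lines instead of the whole document, which is where
// the latency saving for large files would come from.
func semanticTokensInFileRange(lines []string, r LineRange) []tok {
	var out []tok
	for i := r.Start; i >= 0 && i <= r.End && i < len(lines); i++ {
		out = append(out, tok{Line: i, Text: lines[i]})
	}
	return out
}

func main() {
	lines := []string{`resource "a" {`, `}`, ``, `variable "b" {}`}
	fmt.Println(len(semanticTokensInFile(lines)))                       // prints 4
	fmt.Println(len(semanticTokensInFileRange(lines, LineRange{0, 1}))) // prints 2
}
```

Delegating the whole-file function to the range variant keeps a single code path, which is the usual way to add such an overload without duplicating token-building logic.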