
Basic test explorer for VSCode based on testing API #14589

Closed
wants to merge 1 commit

Conversation

ShuiRuTian
Contributor

@ShuiRuTian ShuiRuTian commented Apr 17, 2023

Another implementation for #3601.

In this implementation, the client side controls the life cycle of tests.

Another basic idea is to build on runnables, so that existing logic can be reused.

NOTE: the names of many variables and APIs may need to be reconsidered.

You might want to read the "Known issue" section in the newly added editors/code/src/test_explorer/README.md.
These issues might need to be fixed before the first release; waiting for opinions :)

Although we still lack some settings to control the behavior, the basic logic is done.

Behavior:
Bench, example, and build-script targets are not listed in the test explorer.

Features:

Basic VSCode test explorer feature

  1. Test item tree and status update:

Note that "run" updates the test item on the fly (as soon as the output is logged), while "debug" updates the status only when the debug session is over.
[screenshot]

  2. Test run history and error binding:

Note that the error is bound to the item rather than to the line where the error occurs.
[screenshot]

Some special features
  3. Test item flattening
This is the "normal" test item tree:
[screenshot]
Workspace - package - target - test modules - test
However, this might be rather verbose if you have

  • only one workspace, and you do not want to click through the only workspace to expand it,
  • or only one package in the workspace,
  • or only one target in the package.

So we choose to flatten the test items: if there is only one workspace/package/target, we skip building its corresponding test item.
[screenshot]
In this picture, the target level is flattened.

Also, if there are multiple workspaces/packages but only one of them has tests, we do not flatten it; we simply hide the ones without tests. So you might still see a result that looks as if nothing were flattened.
[screenshot]
In this picture there are in fact two workspaces, but the other one does not have any tests, so it looks as if the workspace level were not flattened.

However, this might be confusing, because it introduces inconsistency, and VSCode will not remember the last run result if the test item tree changes.

Please share your opinion on whether this behavior is reasonable.
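The flattening rule described above can be sketched as a small tree transformation. This is a minimal illustration, not the PR's actual code; the node type, field names, and the synthetic "root" level are assumptions made for the example.

```typescript
// Hypothetical node in the test model tree; the real PR uses richer types.
interface TestModelNode {
    kind: "root" | "workspace" | "package" | "target" | "module" | "test";
    name: string;
    children: TestModelNode[];
}

// If a collapsible level (workspace/package/target) contains exactly one
// node, skip building a test item for it and promote its children, matching
// "if there is only one workspace/package/target, we skip its test item".
function flatten(node: TestModelNode): TestModelNode {
    const collapsible = new Set<string>(["workspace", "package", "target"]);
    let children = node.children.map(flatten);
    if (children.length === 1 && collapsible.has(children[0].kind)) {
        children = children[0].children;
    }
    return { ...node, children };
}
```

With a single workspace containing a single package with a single target, all three levels collapse and the tests hang directly off the root.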

Issues:

  1. Selecting multiple items is not supported:
    [screenshot]
    In this screenshot, the user has chosen two items; clicking the run/debug button will produce a notification like "Please only pick one to run/debug".

  2. Because test items are now loaded lazily, when running a test suite whose test cases are not yet loaded, their status is not updated. If they are loaded later, they are shown as if they had never been executed:
    [screenshot]

@rustbot rustbot added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Apr 17, 2023
@ShuiRuTian ShuiRuTian changed the title Basic test explorer for VSCode based on testing API [WIP]Basic test explorer for VSCode based on testing API Apr 17, 2023
@ShuiRuTian ShuiRuTian changed the title [WIP]Basic test explorer for VSCode based on testing API [BETA]Basic test explorer for VSCode based on testing API Apr 22, 2023
@Veykril Veykril self-assigned this Apr 22, 2023
crates/ide/src/test_items.rs (outdated)
* In most cases the work is finished by commands, and the context will pass itself
* as parameter to the command callback, which is defined when registering.
*/
export let raContext: Ctx | undefined;
Member

We should not re-introduce this global, we had it before and intentionally removed it, as it makes it difficult to reason about.

Contributor Author

Then everything should be done through commands, right?

Contributor

it should go through commands, yes.
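The pattern agreed on above (no module-level context; bind the context when registering the command) can be sketched as follows. The registry here is a stand-in for `vscode.commands.registerCommand`, and the `Ctx` shape and command name are hypothetical, so the sketch stays self-contained.

```typescript
// Stand-in for the extension context type; the real Ctx lives in the extension.
interface Ctx {
    clientName: string;
}

type CommandCallback = (...args: unknown[]) => unknown;

// Minimal command registry standing in for vscode.commands.registerCommand.
const commands = new Map<string, CommandCallback>();

// The context is captured by the callback at registration time, so no
// module-level `raContext` global is needed.
function registerCommand(
    name: string,
    ctx: Ctx,
    factory: (ctx: Ctx) => CommandCallback,
): void {
    commands.set(name, factory(ctx));
}

function executeCommand(name: string, ...args: unknown[]): unknown {
    const cb = commands.get(name);
    if (!cb) throw new Error(`unknown command: ${name}`);
    return cb(...args);
}
```

Any code that previously read the global now invokes a command instead, and the callback already holds the context it needs.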

crates/rust-analyzer/src/handlers.rs (outdated)
@Veykril Veykril removed their assignment May 3, 2023
Contributor

@davidbarsky davidbarsky left a comment

Thanks for making this PR! I volunteered to Lukas to give this a first-pass review, as this is quite large and they're somewhat busy. Overall, I tried to leave comments here, but I have some questions and concerns:

  1. The PR is rather large and I'm having difficulty understanding how a lot of the extension-side code is meant to work and interact with rust-analyzer-the-language-server. It seems to be reinventing a lot of functionality that already exists, but without the extensibility that already exists within rust-analyzer.
  2. It's not clear to me if you're trying to do a global analysis to get all test targets, or to lazily update as tests update (or both). As best as I can tell, the client fetches all crates, and whenever a new test is written, all displayed tests are removed and replaced. That doesn't feel like a good approach.
  3. If I were to suggest an approach, I think the correct solution would be something similar to what Alex suggested: find all targets in the workspace, and for a given file, find the runnables. The explorer view can be updated as people run tests.

Ultimately, I'm not sure what direction you're interested in taking this, but I don't think an O(n) approach to test discovery is the right one, given that it doesn't work with lazy analysis.

I'll give some further comments in the future.

crates/ide/src/test_items.rs (outdated)
// sometimes it's useful to let the client know the project
// structure.
// This property should only be used as JSON
pub origin_metadata: cargo_metadata::Metadata,
Contributor

You shouldn't need to add Metadata here. If you really need the crate structure on the extension side of things (and I'm not sure you do, given matklad's comment here), you only need to expose local packages and targets in the extension handler, which can be queried off the crate graph and salsa db.

Contributor Author

I want to find the target based on the BSP protocol, but matklad seems to (please correct me if I understand it wrongly!) suggest just using the Cargo metadata. #14589 (comment)
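The trimmed view the reviewer suggests, local packages and targets only rather than the whole `cargo_metadata::Metadata`, could look roughly like this on the client side. The field names are illustrative, not the PR's actual `lsp_ext` types.

```typescript
// Illustrative shapes; not the PR's actual LSP extension types.
interface CargoTarget {
    name: string;
    kind: "lib" | "bin" | "test" | "bench" | "example";
}

interface CargoPackage {
    id: string;
    name: string;
    isWorkspaceMember: boolean;
    targets: CargoTarget[];
}

// Keep only what the test explorer needs: workspace-member packages, minus
// bench/example targets, which the explorer does not list.
function localTestablePackages(packages: CargoPackage[]): CargoPackage[] {
    return packages
        .filter((p) => p.isWorkspaceMember)
        .map((p) => ({
            ...p,
            targets: p.targets.filter(
                (t) => t.kind !== "bench" && t.kind !== "example",
            ),
        }));
}
```

Shipping only this projection keeps the protocol payload small and avoids tying the extension to the full `cargo metadata` schema.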

crates/rust-analyzer/src/to_proto.rs (outdated)
Comment on lines 834 to 821
if let ProjectWorkspace::Cargo { cargo, .. } = workspace {
Some(cargo.origin_metadata.clone())
} else {
None
}
Contributor

While I agree with Alex that Buck (or related extensions) should handle setting up runnables, I think that they'd still benefit from getting a non-Cargo centric view of the immediate workspace.

(Don't treat this as a blocking nit/review.)

crates/rust-analyzer/src/lsp_ext.rs (outdated)
editors/code/src/test_explorer/discover_and_update.ts (outdated)
Comment on lines 211 to 249
const targetUri = parentLocation[0].targetUri;
assert(targetUri.toLowerCase().endsWith("cargo.toml"));
Contributor

the project manifest is not necessarily guaranteed to be a Cargo.toml. I'd consider making use of the manifest path in #14931 if/when it lands.
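A less brittle version of the assertion above would accept every manifest name rust-analyzer understands rather than hard-coding "cargo.toml". A minimal sketch, assuming `rust-project.json` is the other supported manifest:

```typescript
// rust-analyzer projects can be described by Cargo.toml or rust-project.json,
// so an assertion on "cargo.toml" alone is too strict.
const MANIFEST_NAMES = ["cargo.toml", "rust-project.json"];

function isProjectManifest(uriPath: string): boolean {
    const lower = uriPath.toLowerCase();
    return MANIFEST_NAMES.some((name) => lower.endsWith(name));
}
```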

Comment on lines 200 to 347
// create target node
// This is necessary, because we do not know how many targets a package contains unless we fetch `cargo metadata`.
// But we only want to fetch it when a cargo file changes, to keep things lazy.
Contributor

I'm not sure if this is strictly necessary, as a change in the Cargo.toml will be picked up by rust-analyzer-the-server, and querying the language server for the new workspace structure feels like a far more scalable and reasonable solution than attempting to figure this out in the client.

Contributor Author

It's only necessary for the current implementation.

We maintain a test model tree, and build the test-item tree from it.

For now, we only create a test item for a target when it actually has test children; targets without test children are removed from our test model tree. So, when a new test is added to a file, that test might not have a valid target in the test model tree. We add the target node back to the test model tree here.

Of course, we could refactor the logic a bit, e.g., keep target nodes forever and change how we rebuild the test-item tree from the test model tree.
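The prune-and-recreate dance described above can be sketched in a few lines. The node shape and function names are hypothetical; the point is only that a target pruned for being empty must be recreated on demand when a test reappears under it.

```typescript
interface TargetNode {
    name: string;
    tests: string[];
}

// Test model tree pruned of empty targets, keyed by target name.
type TestModel = Map<string, TargetNode>;

// When a new test appears in a file, its target may have been pruned
// earlier; recreate the node on demand before attaching the test.
function addTest(model: TestModel, target: string, test: string): void {
    let node = model.get(target);
    if (!node) {
        node = { name: target, tests: [] };
        model.set(target, node);
    }
    node.tests.push(test);
}

// Targets without any tests are dropped, matching the pruning described above.
function pruneEmpty(model: TestModel): void {
    for (const [name, node] of model) {
        if (node.tests.length === 0) model.delete(name);
    }
}
```

The alternative the author mentions, keeping target nodes forever, would drop `pruneEmpty` and instead filter empty targets when building the test-item tree.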

editors/code/src/test_explorer/run_or_debug.ts (outdated)
request: vscode.TestRunRequest,
token: vscode.CancellationToken
) {
// TODO: Never run tests concurrently in client side.
Contributor

It's not entirely clear to me why this limitation exists. If a user wants to run multiple tests, can the client not simply pass multiple arguments to the test runner?

Contributor Author

This means: do not allow the user to click two "Run" buttons in quick succession, or start another test while one is already running.

I am not sure what would happen if we allowed this, because several processes might download the same thing at the same time, and I don't know whether the test command uses any lock file.

Also, in fact, we do not allow people to run multiple selected tests anyway for now (although they can run the tests of a package or of a whole test module). When the user selects multiple test items and clicks run, we log a message like "You can only run one test item at a time."
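One client-side alternative to rejecting the second click outright is to serialize runs by chaining promises, so a second "Run" simply waits for the first cargo invocation to finish. This is a sketch of that idea, not the PR's code:

```typescript
// Serialize runs: each new run waits for the previous one, so two quick
// "Run" clicks never invoke cargo concurrently.
class RunQueue {
    private tail: Promise<void> = Promise.resolve();

    enqueue<T>(task: () => Promise<T>): Promise<T> {
        const result = this.tail.then(task);
        // Keep the chain alive even if a task fails.
        this.tail = result.then(
            () => undefined,
            () => undefined,
        );
        return result;
    }
}
```

Whether serializing is actually safe here depends on the concern raised above (concurrent cargo invocations sharing a download cache); the queue only removes client-side concurrency, not cargo-level locking questions.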


@ShuiRuTian
Contributor Author

ShuiRuTian commented Jun 1, 2023

Thanks a lot for the comments!

Let me explain a bit, at a high level, how it works for now:

We maintain a test model tree, and build the test-item tree (which is used by VSCode) from it.

The first time, we fetch workspaces, packages, and targets from RA to initialize the test model tree. Then we fetch runnables for the target file. Because RA creates a runnable for a declaration module (like "mod xxx;") if it contains tests, we know whether a declaration module contains tests. We then get the definition file of the declaration module and fetch runnables for it, repeating until no tests remain.

After the first time, whenever a file is created/updated/removed, we communicate with RA and update the test model tree.

And we rebuild the test-item tree every time (this is kind of like a red-green tree!). We do this because we try to flatten the test tree: we omit a workspace/package/target test item if it only has one child.

It's kind of hard for me to tell whether this makes test discovery O(n), because when you find runnables for a file, you in fact have to traverse some of its children to know whether a declaration module has tests or not.
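The discovery loop described above (fetch runnables for a file, recurse into each `mod xxx;` declaration's definition file, stop when no tests remain) can be sketched like this. The `Runnable` shape and the fetcher are abstractions for the example; the real implementation talks to rust-analyzer over LSP.

```typescript
// A runnable as the discovery loop sees it: either a test in this file,
// or a `mod xxx;` declaration whose definition file holds more tests.
type Runnable =
    | { kind: "test"; name: string }
    | { kind: "module"; definitionFile: string };

type FetchRunnables = (file: string) => Runnable[];

// Walk declaration modules recursively, collecting every test name.
function discoverTests(root: string, fetch: FetchRunnables): string[] {
    const tests: string[] = [];
    const pending = [root];
    const seen = new Set<string>();
    while (pending.length > 0) {
        const file = pending.pop()!;
        if (seen.has(file)) continue;
        seen.add(file);
        for (const r of fetch(file)) {
            if (r.kind === "test") tests.push(r.name);
            else pending.push(r.definitionFile);
        }
    }
    return tests;
}
```

Note that modules without tests never appear as runnables, so the walk only visits files on a path to some test, which is the lazy behavior the author is aiming for.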

@ShuiRuTian ShuiRuTian force-pushed the basic-test-explorer branch from ee6ae8d to 6ee2ebe Compare June 1, 2023 15:25
@ShuiRuTian
Contributor Author

Maybe we also need to think about whether to flatten tests, and at which levels.

@ShuiRuTian
Contributor Author

The refactoring is 95% finished, and we now fetch children as lazily as possible.

PS: the only exception is the direct children of targets. Because we do not want to show targets that do not have any tests, and we get target info through "cargo metadata" (which lists all targets), we fetch the children of targets eagerly to verify that they have children, and remove them if they do not.

@bors
Contributor

bors commented Jun 27, 2023

☔ The latest upstream changes (presumably #15151) made this pull request unmergeable. Please resolve the merge conflicts.

@ShuiRuTian
Contributor Author

Should be ready to be reviewed. Thanks for the patience :)

@ShuiRuTian ShuiRuTian changed the title [BETA]Basic test explorer for VSCode based on testing API Basic test explorer for VSCode based on testing API Jul 11, 2023
@bors
Contributor

bors commented Jul 13, 2023

☔ The latest upstream changes (presumably #15265) made this pull request unmergeable. Please resolve the merge conflicts.

@rustbot rustbot added has-merge-commits S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. labels Sep 6, 2023
@rustbot
Collaborator

rustbot commented Sep 6, 2023

There are merge commits (commits with multiple parents) in your changes. We have a no merge policy so these commits will need to be removed for this pull request to be merged.

You can start a rebase with the following commands:

$ # rebase
$ git rebase -i master
$ # delete any merge commits in the editor that appears
$ git push --force-with-lease

The following commits are merge commits:

@rustbot
Collaborator

rustbot commented Sep 12, 2023

There are merge commits (commits with multiple parents) in your changes. We have a no merge policy so these commits will need to be removed for this pull request to be merged.

You can start a rebase with the following commands:

$ # rebase
$ git rebase -i master
$ # delete any merge commits in the editor that appears
$ git push --force-with-lease

The following commits are merge commits (since this message was last posted):

@ShuiRuTian
Contributor Author

ShuiRuTian commented Sep 17, 2023

@Veykril Hi, may I ask you to review this PR again if you have time?

And if it's too large, maybe we could have a meeting to talk about it on Zulip or Teams or somewhere else.

@Veykril
Member

Veykril commented Sep 22, 2023

I'll try to take a look at this when I can but I can't make promises right now as to when that might be.

@ShuiRuTian
Contributor Author

@Veykril Sure. Anyway, this is an open-source project driven by interest; I don't want it to feel like a burden. Go at your own pace.

And please let me know what I can do to help get this PR merged.

PS: I just found that this PR might be part of @matklad's nefarious plan

@bors
Contributor

bors commented Nov 15, 2023

☔ The latest upstream changes (presumably #15902) made this pull request unmergeable. Please resolve the merge conflicts.

@davidbarsky
Contributor

Sorry for the delay in responding to this PR and letting it stagnate: I'm genuinely, really sorry about that. That being said, I think this feature is very useful, but I'm not sure there is sufficient expertise to maintain and understand the amount of TypeScript that lives in this PR. This is not something rust-analyzer can reasonably take on at this time.

I think this can be more successfully implemented as a standalone companion extension (reusing all the code that you wrote!), but it would require some additional, custom LSP extensions to rust-analyzer. We can work on figuring this out in a separate issue.

@davidbarsky davidbarsky closed this Feb 6, 2024
@connor4312

This is unfortunate; running tests in Rust is pretty painful compared to other languages that have testing extensions available, and this was something I was excited about. I hope a companion extension is viable 🙂

@MartinBernstorff

Strong agree with @connor4312 here.

I completely understand your standpoint @davidbarsky. A 3000 line PR within a secondary language is quite a chunk.

I wonder whether this functionality might be sufficiently important to try and recruit maintainers for it? That said, I might be over-focusing on VSCode 😉

@connor4312

I would be happy to contribute, but I can't guarantee the time it deserves to be the owner of the repo/extension

@HKalbasi HKalbasi mentioned this pull request Feb 8, 2024
@ShuiRuTian
Contributor Author

ShuiRuTian commented Feb 22, 2024

Sorry for the late response, I was on Chinese New Year vacation.

@davidbarsky Hi, some questions:

  1. This feature needs RA's functionality and, personally, I think they are closely coupled. I am not sure, but it might make things harder if they are separate programs; RA would need to provide many events. Java's test extension is an example: it reuses the functionality provided by Java's language server. But I doubt whether RA should do the same thing. An extension/plugin for a compiled language is never that easy.

  2. I could also help to maintain this feature. How could I prove it? By fixing some other issues?

Or, at the very least, we could provide a switch flag (telling others it's experimental) that is turned off by default, so that people can try this feature and provide feedback.

bors added a commit that referenced this pull request Mar 5, 2024
Add test explorer

This PR implements the VSCode testing API similarly to #14589, this time using a set of LSP extensions in order to make it useful for clients other than VSCode, and to make the VSCode client-side logic simpler (it's now around ~100 lines of TS code)

Fix #3601
Labels
has-merge-commits S-waiting-on-author Status: This is awaiting some action (such as code changes or more information) from the author. S-waiting-on-review Status: Awaiting review from the assignee but also interested parties.

10 participants