Basic test explorer for VSCode based on testing API #14589
Conversation
 * In most cases the work is finished by commands, and the context will pass itself
 * as parameter to the command callback, which is defined when registering.
 */
export let raContext: Ctx | undefined;
We should not re-introduce this global; we had it before and intentionally removed it, as it makes the code difficult to reason about.
Then everything should be done through commands, right?
It should go through commands, yes.
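To illustrate, here is a minimal sketch of the command-based pattern being suggested, assuming a `Ctx` class as in the snippet above (the command name and constructor arguments are hypothetical): the context is captured in a closure at registration time, so no mutable module-level global is needed.

```typescript
import * as vscode from "vscode";
import { Ctx } from "./ctx"; // the Ctx type from the snippet above

export function activate(extCtx: vscode.ExtensionContext) {
    const ctx = new Ctx(/* ...hypothetical constructor args... */);
    // The command callback closes over `ctx`, so nothing needs to be
    // exported as a global like `raContext`.
    extCtx.subscriptions.push(
        vscode.commands.registerCommand("rust-analyzer.runTests", () => runTests(ctx))
    );
}

function runTests(ctx: Ctx) {
    // ... use ctx here instead of reaching for a global ...
}
```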
Thanks for making this PR! I volunteered to Lukas to give this a first-pass review, as this is quite large and they're somewhat busy. Overall, I tried to leave comments here, but I have some questions and concerns:
- The PR is rather large, and I'm having difficulty understanding how a lot of the extension-side code is meant to work and interact with rust-analyzer-the-language-server. It seems to be reinventing a lot of functionality that already exists, but without the extensibility that already exists within rust-analyzer.
- It's not clear to me if you're trying to do a global analysis to get all test targets, or to lazily update as tests update (or both). As best as I can tell, the client fetches all crates, and whenever a new test is written, all displayed tests are removed and replaced. That doesn't feel like a good approach.
- If I were to suggest an approach, I think the correct solution would be something similar to what Alex suggested: find all targets in the workspace, and for a given file, find the runnables. The explorer view can be updated as people run tests.
Ultimately, I'm not sure what direction you're interested in taking this, but I think an O(n) approach to test discovery should not be the approach, given that it doesn't work with lazy analysis.
I'll give some further comments in the future.
// sometimes it's useful to let the client know the project
// structure.
// This property should only be used as JSON
pub origin_metadata: cargo_metadata::Metadata,
You shouldn't need to add `Metadata` here. If you really need the crate structure on the extension side of things (and I'm not sure you do, given matklad's comment here), you only need to expose local packages and targets in the extension handler, which can be queried off the crate graph and salsa db.
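As a rough illustration of what exposing just the local packages and targets could look like from the client side, here is a sketch. The request name `rust-analyzer/workspaceTargets` and the `WorkspaceTarget` shape are hypothetical, not an existing rust-analyzer LSP extension:

```typescript
import * as lc from "vscode-languageclient/node";

// Hypothetical response shape: just the local packages and targets,
// not the full `cargo metadata` output.
interface WorkspaceTarget {
    packageName: string;
    targetName: string;
    targetKind: string; // e.g. "lib", "bin", "test"
}

// Hypothetical method name; rust-analyzer does not ship this request.
const workspaceTargets = new lc.RequestType0<WorkspaceTarget[], void>(
    "rust-analyzer/workspaceTargets"
);

async function fetchTargets(client: lc.LanguageClient): Promise<WorkspaceTarget[]> {
    // The server would answer this off the crate graph and salsa db.
    return client.sendRequest(workspaceTargets);
}
```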
I want to find the target based on the BSP protocol, but matklad seems (please correct me if I've misunderstood!) to suggest using just the Cargo metadata. #14589 (comment)
crates/rust-analyzer/src/handlers.rs
if let ProjectWorkspace::Cargo { cargo, .. } = workspace {
    Some(cargo.origin_metadata.clone())
} else {
    None
}
While I agree with Alex that Buck (or related extensions) should handle setting up runnables, I think they'd still benefit from getting a non-Cargo-centric view of the immediate workspace.
(Don't treat this as a blocking nit/review.)
const targetUri = parentLocation[0].targetUri;
assert(targetUri.toLowerCase().endsWith("cargo.toml"));
The project manifest is not necessarily guaranteed to be a Cargo.toml. I'd consider making use of the manifest path in #14931 if/when it lands.
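For illustration, a sketch of a less Cargo-specific check; treating `rust-project.json` as the non-Cargo manifest here is an assumption:

```typescript
// Accept any known project manifest name instead of asserting
// that the target is a Cargo.toml.
function isProjectManifest(targetUri: string): boolean {
    const lowered = targetUri.toLowerCase();
    return lowered.endsWith("cargo.toml") || lowered.endsWith("rust-project.json");
}
```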
// create target node
// This is necessary because we do not know how many targets a package contains
// unless we fetch the `cargo metadata` output. But we only want to fetch it when
// the Cargo file changes, to keep things lazy.
I'm not sure this is strictly necessary, as a change in the Cargo.toml will be picked up by rust-analyzer-the-server, and querying the language server for the new workspace structure feels like a far more scalable and reasonable solution than attempting to figure this out in the client.
It's only necessary for the current implementation.
We maintain a test model tree, and build the test-item tree from it.
For now, we only create a test item for a target when it actually has test children; targets without test children are removed from the test model tree. So when a new test is added to a file, it might not have a valid target in the test model tree, and this is where we add the target node back.
Of course, we could refactor the logic a bit, e.g. keep target nodes forever and change how we rebuild the test-item tree from the test model tree.
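A sketch of the behavior described above; `TestModelTree`, `TargetNode`, and their fields are hypothetical stand-ins, not the PR's actual types:

```typescript
interface TargetNode {
    name: string;
    testChildren: string[]; // ids of test nodes under this target
}

class TestModelTree {
    private targets = new Map<string, TargetNode>();

    // Called when a new test appears in a file: the target may have been
    // pruned earlier because it had no test children, so re-create it.
    ensureTarget(name: string): TargetNode {
        let node = this.targets.get(name);
        if (!node) {
            node = { name, testChildren: [] };
            this.targets.set(name, node);
        }
        return node;
    }

    // Targets without tests are dropped so they never show in the explorer.
    prune(): void {
        for (const [name, node] of this.targets) {
            if (node.testChildren.length === 0) this.targets.delete(name);
        }
    }
}
```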
request: vscode.TestRunRequest,
token: vscode.CancellationToken
) {
// TODO: Never run tests concurrently on the client side.
It's not entirely clear to me why this limitation exists. If a user wants to run multiple tests, can the client not simply pass multiple arguments to the test runner?
This means we don't allow the user to click two "Run" buttons in quick succession, or to start another test while one is already running.
I am not sure what would happen if we allowed this, because multiple processes might download the same thing at the same time; or is there a lock file for the test command?
And in fact, we do not allow people to run multiple tests at once anyway for now (though they can run the tests of a package or of a whole test module). When a user selects multiple test items and clicks run, we log a message like "You can only run one test item at a time."
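One possible alternative to rejecting concurrent runs would be to queue them client-side. A minimal sketch, not what the PR implements (`executeTest` is a hypothetical helper):

```typescript
// Serialize test runs by chaining each request onto the previous one.
let lastRun: Promise<void> = Promise.resolve();

function enqueueRun(task: () => Promise<void>): Promise<void> {
    // Run `task` after the previous run settles, whether it succeeded or failed.
    lastRun = lastRun.then(task, task);
    return lastRun;
}

// Usage inside the run profile handler:
// enqueueRun(() => executeTest(request, token));
```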
Thanks a lot for the comments! Let me explain at a high level how it works for now:

We maintain a test model tree and build the test-item tree (which is what VSCode consumes) from it. The first time, we fetch workspaces, packages, and targets from RA to initialize the test model tree. Then we fetch runnables for the target file. Because RA creates a runnable for a declaration module (like `mod xxx;`) if it contains tests, we know whether a declaration module contains tests; we then get the definition file of the declaration module and fetch runnables for it, repeating until there are no more tests. After the first time, when a file is created/updated/removed, we communicate with RA and update the test model tree.

And we rebuild the test-item tree every time (this is kind of like a red-green tree!). We do this because we try to flatten the test tree: we omit the workspace/package/target test item if it only has one child.

It's kind of hard for me to say whether this is O(n) for test discovery, because when you find runnables for a file, you in fact have to traverse some of its children to know whether a declaration module has tests or not.
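A sketch of the discovery walk described above; `fetchRunnables`, `resolveModuleFile`, `addToModelTree`, and the `Runnable` shape are hypothetical stand-ins for the PR's actual machinery:

```typescript
import * as vscode from "vscode";

interface Runnable {
    kind: string;
    label: string;
}

// Hypothetical helpers standing in for the PR's actual machinery.
declare function fetchRunnables(uri: vscode.Uri): Promise<Runnable[]>;
declare function resolveModuleFile(runnable: Runnable): Promise<vscode.Uri>;
declare function addToModelTree(runnable: Runnable): void;

async function discoverTests(fileUri: vscode.Uri): Promise<void> {
    const runnables = await fetchRunnables(fileUri);
    for (const runnable of runnables) {
        if (runnable.kind === "module-declaration") {
            // RA emits a runnable for `mod xxx;` when the module contains
            // tests, so recurse into the module's definition file.
            const moduleFile = await resolveModuleFile(runnable);
            await discoverTests(moduleFile);
        } else {
            addToModelTree(runnable);
        }
    }
}
```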
Maybe we also need to think about whether to flatten tests, and at which levels.
The refactoring is 95% finished, and we now fetch children as lazily as possible. PS: the only exception is the direct children of targets, because we do not want to show a target that does not have any tests. However, we get target info through `cargo metadata`, which lists all targets, so we fetch the children of targets eagerly to verify they have children, and remove a target if it does not.
☔ The latest upstream changes (presumably #15151) made this pull request unmergeable. Please resolve the merge conflicts.
Should be ready to be reviewed. Thanks for the patience :)
☔ The latest upstream changes (presumably #15265) made this pull request unmergeable. Please resolve the merge conflicts.
There are merge commits (commits with multiple parents) in your changes. We have a no-merge policy, so these commits will need to be removed for this pull request to be merged. You can start a rebase with the following commands:
The following commits are merge commits:
There are merge commits (commits with multiple parents) in your changes. We have a no-merge policy, so these commits will need to be removed for this pull request to be merged. You can start a rebase with the following commands:
The following commits are merge commits (since this message was last posted):
@Veykril Hi, may I request another review of this PR if you have time? If it's too large, maybe we could have a meeting to talk about it on Zulip, Teams, or somewhere else.
I'll try to take a look at this when I can, but I can't make promises right now as to when that might be.
@Veykril Sure. Anyway, this is an open-source project driven by interest; I do not want it to feel like a burden. Just go at your own pace, and please let me know what I can do to help get this PR merged. PS: I just found out this PR might be part of @matklad's nefarious plan
☔ The latest upstream changes (presumably #15902) made this pull request unmergeable. Please resolve the merge conflicts.
Sorry for the delay in responding to this PR and letting it stagnate: I'm genuinely, really sorry about that. That being said, I think this feature is very useful, but I'm not sure there is sufficient expertise to maintain and understand the amount of TypeScript that lives in this PR. This is not something rust-analyzer can reasonably take on at this time. I think this could be more successfully implemented as a standalone companion extension (reusing all the code that you wrote!), but it would likely require some additional, custom LSP extensions to rust-analyzer. We can work on figuring this out in a separate issue.
This is unfortunate; running tests in Rust is pretty painful compared to other languages with testing extensions available, and this was something I was excited about. I hope a companion extension is viable 🙂
Strongly agree with @connor4312 here. I completely understand your standpoint, @davidbarsky. A 3000-line PR in a secondary language is quite a chunk. I wonder whether this functionality might be sufficiently important to try and recruit maintainers for it? That said, I might be over-focusing on VSCode 😉
I would be happy to contribute, but I can't guarantee the time that owning the repo/extension deserves.
Sorry for the late response, I was on Chinese New Year vacation. @davidbarsky Hi, some questions:
Or, maybe at the least, we could provide a switch flag (and we would need to tell others it's experimental), turned off by default, so that people could try this feature and provide feedback.
Another implementation for #3601
The client side controls the life cycle of tests in this implementation.
Another basic idea is to build on `runnables` to reuse existing logic. You might want to read the "Known issue" section in the newly added editors/code/src/test_explorer/README.md.
These issues might need to be fixed before the first release; waiting for opinions :)
Although we still lack some settings to control the behavior, the basic logic is done.
Behavior:
Benches, examples, and build scripts are not listed in the test explorer.
Features:
Basic VSCode test explorer features.
Note that run will update the test item on the fly (as the output is logged), but debug will only update the status when the debug session is over (a sketch of the reporting pattern follows below).
Note that the error is bound to the item rather than to the line where the error occurs.
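For context, the on-the-fly updates use the standard VS Code testing API. A minimal sketch; the output parsing here is simplified and hypothetical:

```typescript
import * as vscode from "vscode";

// During a plain run, results can be reported as the output streams in;
// during debug, nothing is reported until the session ends.
function reportLine(run: vscode.TestRun, item: vscode.TestItem, line: string) {
    if (/\bok\b/.test(line)) {
        run.passed(item); // the explorer updates immediately
    } else if (line.includes("FAILED")) {
        run.failed(item, new vscode.TestMessage(line));
    }
}
```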
Some special features
3. Test item flattening
This is the "normal" test item tree:
Workspace - package - target - test modules - test
However, this might be kind of verbose if you have only one workspace/package/target.
So we choose to flatten the test items: if there is only one workspace/package/target, we skip building its corresponding test item (a sketch of this rule follows at the end of this section).
In this picture, the target level is flattened.
And if there are multiple workspaces/packages but only one of them has tests, we do not flatten it; we just do not show the ones without tests. So you might still see a result that looks as if nothing was flattened.
In this picture, there are in fact two workspaces, but the other does not have any tests, so it looks as if the workspace level is not flattened.
However, this might be kind of confusing, because it introduces inconsistency, and VSCode will not remember the last run result if the test-item tree changes.
Please leave your opinions on whether this behavior is reasonable.
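A sketch of the flattening rule described above; `ModelNode` and its fields are hypothetical, not the PR's actual types:

```typescript
import * as vscode from "vscode";

interface ModelNode {
    id: string;
    label: string;
    flattenable: boolean; // true for workspace/package/target levels
    children: ModelNode[];
}

function buildChildren(
    controller: vscode.TestController,
    nodes: ModelNode[],
    parent: vscode.TestItemCollection
): void {
    if (nodes.length === 1 && nodes[0].flattenable) {
        // Only one workspace/package/target at this level:
        // skip its item and attach its children directly to the parent.
        buildChildren(controller, nodes[0].children, parent);
        return;
    }
    for (const node of nodes) {
        const item = controller.createTestItem(node.id, node.label);
        parent.add(item);
        buildChildren(controller, node.children, item.children);
    }
}

// Usage: buildChildren(controller, modelRoot.children, controller.items);
```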
Issues:
Selecting multiple items is not supported, like:
In this screenshot, the user chooses two items; however, clicking the run/debug button shows a notification like "Please only pick one to run/debug".
Because we load test items lazily now, when running a test suite whose test cases are not yet loaded, their status is not updated. And if we load them later, they will be shown as if they were not executed: