testOnSave: add option to only run the previously executed test #3521
Comments
I think this is a reasonable feature request, but I want to check with @firelizzard18 whether it is possible to continue supporting this with the vscode test API.
@hyangah vscode-go's autorun behavior deserves some consideration. See also #1952 and #2483. The default behavior and what options we should provide are not clear to me. VSCode imposes almost nothing on us in this respect, so our choices are virtually unlimited. The underlying mechanism provided by vscode isn't much more than a standardized way for the user to notify the test provider (i.e. the extension), "I would like to run {this set of tests} continuously". When the user does that, vscode sends the request to the extension.

My initial goal was, "when a continuous run is started for {tests}, if a test in that set is modified, (re)execute it". That seemed like the best combination of usefulness while avoiding executing every test in the package on every change, since that could be easily disruptive. I'd love to be able to trigger a test when a function in its call graph is modified, but that is a much heavier lift and should be done in gopls. Barring that, I think we should carefully consider what set of options would capture the majority of use cases with the minimum of complexity. Which probably means we need to capture use cases. Maybe a GitHub discussion would work for that?
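The handshake described above can be modeled as a small sketch. This is a hypothetical simplification for illustration only, not the real vscode test API or vscode-go's implementation: the user selects a set of tests to run continuously, and on each save the provider reruns only the selected tests that were actually modified.

```typescript
// Hypothetical model of the continuous-run behavior described above;
// the class and method names are illustrative, not the actual API.
class ContinuousRun {
  // The set of tests the user asked to "run continuously".
  constructor(public readonly selected: ReadonlySet<string>) {}

  // "(re)execute a test in that set when it is modified": intersect the
  // tests touched by a save with the continuously-run selection.
  testsToRerun(modifiedTests: string[]): string[] {
    return modifiedTests.filter((t) => this.selected.has(t));
  }
}

const run = new ContinuousRun(new Set(["TestParse", "TestEncode"]));
console.log(run.testsToRerun(["TestParse", "TestUnrelated"])); // only TestParse
```

The point of the conservative design is visible here: a save that touches no selected test triggers nothing, and tests outside the selection are never run.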
Yes, this was my thinking as well - my conclusion was that it minimizes complexity by changing only a single function, and maximizes flexibility by letting the user easily reconfigure on the fly which tests should be continuously re-executed - all they have to do is manually run a different test once. Subtests/file tests allow for more or less granularity as needed, and they can even temporarily revert to the current behavior.
If #3523 is accepted it will replace all the existing test support. Which is not to say you're not right about the complexity, but the testOnSave functionality will likely be replaced with vscode's continuous run mechanism. That being said, you have a point. Plus vscode's continuous runs would allow you to select an arbitrary set of tests (and/or subtests, packages, etc.), so you wouldn't be limited to a single test. My concern is not the complexity of this specific request but rather the complexity of all the various requested configuration options combined and how they may interact.
I opened this with the assumption that the test refactoring would take a few more years, so this would act as a simple stopgap - of course, as timing would have it, #3523 was immediately opened afterwards. It's true that vscode's continuous run is a superset of this request - if the refactoring will be completed soon, I'm okay with closing this and just waiting for continuous run support.
@myaaaaaaaaa My initial implementation of vscode's continuous run will not do exactly what you want. When a continuous run is started (assuming it includes the test), the test will be re-executed whenever the test itself is modified. The test will not be re-executed when other files are modified.
Okay, reopening, though I am actually in favor of removing it again (along with the rest of the old test support). I'll leave it up to your discretion whether that churn is worth it.
@myaaaaaaaaa I think we may have been talking past each other. I am not suggesting we retain the old system, I was interpreting this issue as "I want to be able to run the previously executed test" and that will still be applicable once (if) #3523 is merged. I'm assuming that is still a capability you would like to have?
Sorry for making things confusing, let me see if I can clarify. From my understanding: if #3523 is merged, it will be an MVP that only re-executes tests when the tests themselves are modified. When will it be able to re-execute tests whenever any file at all is modified? That is to say, when will it reach feature parity with the current testOnSave behavior?
Also, just in case: I'm specifically referring to testOnSave.
Regarding reimplementing the existing behavior: continuous runs allow the user to select an arbitrary set of tests that should be executed continuously, so the user can pick and choose. Then, the extension has to decide when to trigger a rerun and which subset of the selected tests should be rerun. For my initial implementation I chose the most conservative option: "if X is within (the selected tests of) a continuous run, when X's file is saved, rerun X if it has been modified." But I had to add logic to do that; "when any file is modified, rerun all tests within continuous runs" would have been easier.

So maybe we have two settings: the 'sensitivity' of the trigger and the specificity of which tests are rerun. The existing behavior could be replicated with the highest sensitivity (any file modification) and the broadest specificity (all tests in a continuous run). Your use case could be satisfied in essence with the same settings, but you select a specific test or tests instead of the entire workspace when you activate a continuous run.

What I want is to get feedback on what people want so we can discuss the best solution, instead of adding a zoo of non-orthogonal settings that grow difficult to maintain. I think this issue is valuable feedback about what use cases we should support.
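The two axes described above can be sketched as a pure function. All of the names and values here are hypothetical, chosen for illustration - they are not actual vscode-go settings:

```typescript
// Hypothetical names for the two proposed axes; not actual vscode-go settings.
type Sensitivity = "own-file" | "any-file"; // when to trigger a rerun
type Specificity = "modified-only" | "all-in-run"; // which tests to rerun

function testsToRerun(
  sensitivity: Sensitivity,
  specificity: Specificity,
  continuousRun: string[], // tests the user selected for continuous running
  modifiedTests: string[], // tests whose bodies changed in the saved file
): string[] {
  const touched = modifiedTests.filter((t) => continuousRun.includes(t));
  // Conservative trigger: only fire when a selected test itself was modified.
  if (sensitivity === "own-file" && touched.length === 0) return [];
  // Broad specificity reruns the whole selection; narrow reruns only touched tests.
  return specificity === "all-in-run" ? [...continuousRun] : touched;
}

// Replicating the existing behavior: any save reruns everything selected.
testsToRerun("any-file", "all-in-run", ["TestA", "TestB"], []); // ["TestA", "TestB"]
// The conservative initial implementation: only rerun a modified, selected test.
testsToRerun("own-file", "modified-only", ["TestA", "TestB"], ["TestA"]); // ["TestA"]
```

This makes the orthogonality claim concrete: sensitivity decides *whether* a save triggers anything, specificity decides *how much* of the selection is rerun, and the two can be varied independently.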
"Sensitivity" would definitely be orthogonal - more so than Since the goal is to collect feedback, I suppose I should provide some anecdotal evidence (with the obvious caveat that I can only speak for myself). My own workflow generally resembles TDD, which can essentially be summarized as:
The spirit of TDD (at least in my opinion) is that the results of these tests should be as readily accessible as the red underlines in vscode that highlight compile and vet errors - all of which are ways to automatically review your code as you type. In a way, you can think of unit tests as custom compiler/vet checks.

With that in mind, the only "sensitivity" settings that make sense to me are "callgraph" and "maximum" - "maximum" simply being a backup in case there are bugs in "callgraph". Any lower "sensitivity" seems to me to defeat the purpose of continuous runs, which is to make unit test failures as seamless and realtime as compile errors.

In a perfect world, "specificity" would also be permanently set to "maximum" - unfortunately, test suites inevitably grow with projects, so having ways to easily change "specificity" is a necessary compromise. Fortunately, "specificity" elegantly distills all the ways that test suite performance can be configured - if there are any problems, rather than trying to work around them with more configuration options, it's more straightforward to just reorganize the problematic tests into subtests/file tests.

Of course, these are just my experiences.
Is your feature request related to a problem? Please describe.
Testing on save is extremely convenient for TDD-style workflows. However, at the moment it involves running the entire test suite on every save, which can be excessive for such a frequent operation.
Describe the solution you'd like
Add a `previous` option to `go.testOnSave` (essentially turning it into an enum with the options `off`/`previous`/`all`). When set to `previous`, the extension should effectively run `Go: Test Previous` every time the user saves.
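A minimal sketch of the requested enum's dispatch on save. The helper, its return values, and the way the previous test is resolved are assumptions for illustration - this is not the extension's actual implementation, and the real `Go: Test Previous` command reruns the last test invocation rather than building a command string:

```typescript
// Sketch of the proposed off/previous/all enum for go.testOnSave.
// commandForSave and its outputs are hypothetical, for illustration only.
type TestOnSave = "off" | "previous" | "all";

function commandForSave(mode: TestOnSave, previousTest?: string): string | null {
  switch (mode) {
    case "off":
      return null; // do nothing on save
    case "previous":
      // Effectively "Go: Test Previous": rerun only the last manually run test,
      // or nothing if no test has been run yet.
      return previousTest ? `go test -run '^${previousTest}$'` : null;
    case "all":
      return "go test ./..."; // current testOnSave behavior: the whole suite
  }
}
```

The appeal of the `previous` mode is visible in the sketch: the user retargets continuous testing simply by manually running a different test once, with no configuration change.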