
Entire test runner workflow is too slow and unenjoyable #623

Closed
davidfowl opened this issue Mar 17, 2017 · 57 comments

@davidfowl

I'm on the ASP.NET team, using the VS 2017 Community RTM build and CLI tools. There are a few key issues that totally disrupt my workflow with the test runner and generally make the experience unenjoyable. We use xUnit with the Test Explorer, and here are the issues I experience daily:

  1. The test explorer window is too narrow and test names are unreadable by default.
    [screenshot]
    Test names contain the full namespace, which makes it impossible to fit them into the tiny window. After I expand it to see the actual names, it moves the summary window to the side (which I never wanted to begin with):
    [screenshot]

  2. The test runner rebuilds the test project and its dependencies when running any test. This is a horrible experience in .NET Core projects because of the missing "up to date" check. The entire project graph is checked for stale assets just to run a single test.

  3. I'm unable to right click a specific test in source and run the test. It seems to run all tests in that class instead of running the specific test method I targeted. This used to work before the move to csproj and has been broken with the new test runner and tooling.

  4. Test discovery is super slow across multiple projects. It seems as though test discovery happens sequentially for each project in the solution instead of in parallel. Why is that?

  5. The test runner inside Visual Studio doesn't show console output. I want to see my Console.WriteLine output in the test runner window. I use it to debug race conditions all the time. Sometimes using the debugger doesn't cut it because it slows everything down to the point where it's hard to reproduce the problem. There are developers on my team that built a hacked-up version of the test runner to enable this on their dev boxes. This is a real blocker.

  6. I can't see the active test being run. There are situations where I want to see the current test being run (both on the command line and in Visual Studio) so I can tell which one is hanging (if it hangs). It would have saved me so many hours if there were a simple way to show the active test in both the Test Explorer and on the command line.

  7. Certain crashes make the test runner implode. I'm working on a refactoring and when I run tests sometimes this happens:

The active test run was aborted. Reason: Unable to communicate with test host process.

Which probably means the process crashed but the test runner doesn't help me diagnose things here. I don't know which test was running when it crashed, so I have to manually binary search until I find the one that is the culprit.

I've been using this product in depth for the last week, and I feel like a few simple tweaks could really make this much, much better.

@issafram

  5. For xUnit: https://xunit.github.io/docs/capturing-output.html

@ckurt

ckurt commented Mar 17, 2017

6a. Please update the icon used for "test currently running". Ideally to an icon that moves, but at least to one that is not mainly green, the same color that indicates passed tests. This also makes the currently running test very hard to find, especially if it is in a list of tests that have already run and passed.

  • Session management. Please introduce test sessions or "playlists" without the need to save them in a file.

@davidfowl
Author

@issafram Wouldn't it be great if Console.WriteLine just worked?

@bradwilson

For the things which appear to be xUnit.net-centric:

  1. Names are controlled by config. Use "methodDisplay": "method". https://xunit.github.io/docs/configuring-with-json.html
  3. This appears to be newly broken in 2.2, and there is an open bug: Visual Studio runner not showing test in CodeLens and only runs all xunit/xunit#1140
  5. In 2.x, xUnit.net doesn't capture console output. You could use ITestOutputHelper: https://xunit.github.io/docs/capturing-output.html
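For (1), a minimal xunit.runner.json sketch along the lines of the configuration doc linked above (place it next to the test project and make sure it is copied to the output directory):

```json
{
  "methodDisplay": "method"
}
```

With "methodDisplay": "method", the Test Explorer shows only the method name instead of the fully qualified Namespace.Class.Method string.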

@davidfowl
Author

In 2.x, xUnit.net doesn't capture the console output. You could use ITestOutputHelper: https://xunit.github.io/docs/capturing-output.html

The problem is I'm not trying to Console.WriteLine from my test. I'm trying to do it in product code. Plumbing that through manually isn't fun.

@xoofx

xoofx commented Mar 17, 2017

The problem is I'm not trying to Console.WriteLine from my test. I'm trying to do it in product code. Plumbing that through manually isn't fun.

I completely agree. I have always stayed away from xUnit for this sole reason (plus another one: when output capturing is on and you have lots of output, xUnit is damn slow, or maybe it's the ReSharper integration, I'm not sure, while NUnit works perfectly there).

@davkean
Member

davkean commented Mar 17, 2017

I would add one more to that list: not only is discovery really, really slow, but even getting to the discovery phase when opening a solution takes > 1 minute on http://github.com/dotnet/roslyn-project-system. I suspect waiting for output groups from CPS to populate accounts for a lot of that, but I've not investigated.

@davkean
Member

davkean commented Mar 17, 2017

If I remember correctly about the "current" running test: I filed this back in VS 2012 when the Test Explorer was added. The reason they didn't show current progress was that the communication back to VS to update the "current" test was slowing down the test run. However, I really only want to see "current" when a test takes more than a couple of seconds, in which case you could communicate "current" only for tests that take a while.

@davkean
Member

davkean commented Mar 17, 2017

@bradwilson A thought about the Console problem: what if you set your own writer for Console (via SetOut) that stored its data based on AsyncLocal<T>? That way you could associate console output with the currently running test.
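A rough sketch of that idea. All names here are illustrative, not an existing API; it assumes output written during a test stays on that test's async flow:

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading;

// Hypothetical TextWriter that routes Console output to a per-test buffer
// via AsyncLocal<T>, so each logical async flow (each test) captures its own output.
public sealed class PerTestConsoleWriter : TextWriter
{
    private static readonly AsyncLocal<StringBuilder> _current = new AsyncLocal<StringBuilder>();

    // Called by the runner before a test starts.
    public static void BeginCapture() => _current.Value = new StringBuilder();

    // Called after the test finishes; returns everything Console.Write'd during it.
    public static string EndCapture() => _current.Value?.ToString() ?? string.Empty;

    public override Encoding Encoding => Encoding.UTF8;

    public override void Write(char value) => _current.Value?.Append(value);
    public override void WriteLine(string value) => _current.Value?.AppendLine(value);
}

// Installed once per run: Console.SetOut(new PerTestConsoleWriter());
// then BeginCapture()/EndCapture() around each test, attaching the
// captured text to that test's result.
```

This is only a sketch of the suggestion, not something the test platform or xunit implements; product code that writes on background threads not started from the test's async context would still escape the capture.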

@tmat
Member

tmat commented Mar 17, 2017

@ManishJayaswal

@codito
Contributor

codito commented Mar 17, 2017

@davidfowl great feedback. Taking up the test window ones:

  2. The test runner rebuilds the test project and its dependencies when running any test. This is a horrible experience in .NET Core projects because of the missing "up to date" check. The entire project graph is checked for stale assets just to run a single test.

This could be related to dotnet/project-system#62? /cc @davkean

  4. Test discovery is super slow across multiple projects. It seems as though test discovery happens sequentially for each project in the solution instead of in parallel. Why is that?

It is possible to do parallel discovery with the "Parallel" option enabled in Test Explorer. #499 tracks whether discovery should be parallel by default.
[screenshot: test_explorer_parallel]

  6. I can't see the active test being run. There are situations where I want to see the current test being run (both on the command line and in visual studio) so I can tell which one is hanging (if it hangs). It would have saved me so many hours if there was a simple way to show the active test being run in both the test explorer and on the command line.

This is a usability issue. Created #626. Test Explorer shows the in-progress test in the Not Run group, and the Passed group shows first. If a user has a thousand tests, Not Run is clearly pushed below the visible screen real estate, so it gives the user no meaningful feedback.
[screenshot: test_explorer_inprogress]

  7. Certain crashes make the test runner implode. I'm working on a refactoring and when I run tests sometimes this happens:
The active test run was aborted. Reason: Unable to communicate with test host process.

Which probably means the process crashed but the test runner doesn't help me diagnose things here. I don't know which test was running when it crashed, so I have to manually binary search until I find the one that is the culprit.

Created #627 to track this.

Thanks again for bringing this up. We'd request readers to vote on the individual issues (it will help us prioritize).

@Mike-E-angelo

I do have to chime in here and say that I have yet to see a unit test experience that I really enjoy. Even JetBrains, usually pretty good with their .NET implementations/improvements, really misses the mark with their dotCover product. I might be off the usual beaten path here, but I think at a minimum any test viewer should offer some sort of grid view that lets us easily grok the current test state. This would cover not only the obvious pass/fail display, but also all the cool metadata and statistics that are very difficult to discover and understand as currently presented. Plus, I think it would address at least some of the points Mr. Fowler raises in this issue.

So, very much in favor of seeing any innovation here.

Btw @davkean, in addition to your solid suggestion, @gfraiteur and the talented PostSharp team are working on a fully async-aware and capable logging solution known as Frigate. I saw it during a recent webinar and immediately thought of the challenges that @bradwilson once described (to put it kindly 😄) in dealing with logging within an async environment. Something like this would be very useful for logging in async-based testing, even if only per module/extension. Definitely worth knowing about, IMO. 👍

@gfraiteur

@davkean @bradwilson @Mike-EEE

Mike, thanks for mentioning Frigate. The project is still very early, but if you want to know about it, you can watch the webinar recording from the 19th minute. Don't hesitate to ping me with questions or feedback. I'm happy to chat about integration with test runners.

@matt-psaltis

@bradwilson I added an issue with examples for changes to xunit that would allow ITestOutputHelper to be registered for all tests without the need for explicit constructor injection. The changes don't affect existing functionality; they just provide better subclassing opportunities for those who want/need it. Sorry to partially hijack this issue!

See xunit/xunit#1119

@issafram

@davidfowl

Wouldn't it be great if Console.WriteLine just worked?

Yes it would, but this is very similar to the Microsoft Build & Release Management team deciding that Write-Host should not be allowed in PowerShell scripts. I think it was a design decision: there is no concept of a console (host) when running on an agent. That is my guess at least.

The problem is I'm not trying to Console.WriteLine from my test. I'm trying to do it in product code. Plumbing that through manually isn't fun.

I like to use the adapter pattern for this issue. I use an ILogger, or your favorite logger in my code. I then configure my logger to write to the console for certain classes or certain log types (trace or verbose or debug or whatever). NLog is great for this and has a built in logger for the console (I'm sure most loggers do as well). This keeps my production code working as expected.

I then set up my unit tests to call xUnit's ITestOutputHelper for my ILogger calls.
This is very flexible and you get what you want in your product code while also getting the output in your unit tests.

I know it sounds painful, but you are basically hard coding a dependency with Console.WriteLine.
Abstract it and you will be ok.

I'm not saying that I agree with the decision to disallow the Console from testing, but I'm ok with making my code more flexible.
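A minimal sketch of the adapter approach described above. IAppLogger and the two implementations are hypothetical names for illustration; only ITestOutputHelper comes from xunit:

```csharp
using Xunit.Abstractions;

// Hypothetical logging abstraction the product code depends on,
// instead of calling Console.WriteLine directly.
public interface IAppLogger
{
    void Info(string message);
}

// Production implementation: forward to the console
// (or to NLog/Serilog/etc., which have console targets built in).
public sealed class ConsoleLogger : IAppLogger
{
    public void Info(string message) => System.Console.WriteLine(message);
}

// Test implementation: forward to xunit's per-test output,
// injected into the test class constructor as usual.
public sealed class XunitLogger : IAppLogger
{
    private readonly ITestOutputHelper _output;
    public XunitLogger(ITestOutputHelper output) => _output = output;
    public void Info(string message) => _output.WriteLine(message);
}
```

The product code takes an IAppLogger; tests construct it with XunitLogger so its output lands in the test runner window, while production wiring uses ConsoleLogger (or a real logging framework).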

@tannergooding
Member

@codito, for 2.

  2. The test runner rebuilds the test project and its dependencies when running any test. This is a horrible experience in .NET Core projects because of the missing "up to date" check. The entire project graph is checked for stale assets just to run a single test.

This is a real pain point on large desktop solutions as well (Roslyn). A simple change in one project causes 3-5 minutes of discovery time (which is ridiculous considering the console does it in milliseconds).

@clairernovotny
Member

clairernovotny commented Mar 17, 2017

For 6, xUnit has long-running test detection, but it's off by default. Add a config value called longRunningTestSeconds and set it to the number of seconds after which you want a diagnostic message. https://github.com/Reactive-Extensions/Rx.NET/blob/develop/Rx.NET/Source/tests/Tests.System.Reactive/xunit.runner.json#L4
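For example, a xunit.runner.json along these lines (if I recall correctly, diagnosticMessages must also be enabled for the long-running warnings to actually surface in the runner):

```json
{
  "diagnosticMessages": true,
  "longRunningTestSeconds": 5
}
```

With this in place, any test still executing after 5 seconds produces a diagnostic message naming it, which helps identify a hang.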

@ManishJayaswal

tagging @sbaid

@benaadams

Often also leaves lots of dotnet processes hanging around

@agocke
Member

agocke commented Mar 17, 2017

cc @jcouv As ref-assembly generation may provide some benefits here.

@codito
Contributor

codito commented Mar 18, 2017

@tannergooding wrote:

@codito, for 2.
The test runner rebuilds the test project and it's dependencies when running any test. This is horrible experience in .NET Core projects because of the missing of "up to date" check. The entire project graph is checked for stale assets to run a single test.
This is a real pain point on large desktop solutions as well (Roslyn). A simple change in one project causes 3-5 minutes of discovery time (which is ridiculous considering the console does it in milliseconds).

There's a detailed RCA on the discovery perf at #485. Here's the current action plan we're working on:

  1. Source information is a big contributor to slowness. We're prototyping whether it's possible to find test source information in the IDE instead of in the adapter (removing the overhead of parsing PDBs and cross-appdomain calls)
  2. App domains also slow down discovery. For desktop projects (on the older test platform), it is not possible to remove app domains since there are file locks. This is fixed in the newer test platform (this repo): it supports parallel discovery, and the discovery processes are short-lived (so no file locking). We're planning to make parallel discovery the default (Making Discovery Parallel by default #499)
  3. Auto discovery of test projects is triggered as soon as a solution is opened. This operation is also not cancellable, which makes things worse; the user is blocked from doing anything in Test Explorer until the auto discovery is complete. We're debating whether this feature should be removed, and whether discovery operations should be made cancellable

Can you share repro steps (project, sln) for the scenario mentioned above? (2) is already available today; we can try to evaluate the benefits.

It would be great to get some feedback on (3): do people find auto test discovery in Test Explorer useful?

@mgravell

mgravell commented Mar 18, 2017

On the names/width issue: since the names are hierarchical, this seems a perfect opportunity for an implicit tree structure, perhaps grouped so that you don't have five levels with only one sub-level and nothing else. In your example there would be just one parent node of

  • Microsoft.AspNetCore.Server.Kestrel

With nodes underneath for every separate thing with a different FQN, and the test methods as the leaf level.

It's a relatively simple UI change that can be done purely on the text of the FQN (no input from the provider needed), and it could make the explorer so much more usable.

@tannergooding
Member

@codito, Repro Steps:

  • Enlist in Roslyn: https://github.com/dotnet/roslyn
  • Run Restore.cmd
  • Open Roslyn.sln
  • Build Roslyn.sln
  • Wait minutes for Test Explorer to Discover all 60k tests
  • Make a whitespace change in some project
  • Build Roslyn.sln
  • Wait minutes for Test Explorer to Discover all 60k tests

I'm also not sure that reading source information is the slow part here; I'm pretty sure it's the serialization mechanism between the test adapter and VS.

xUnit can discover all 60k tests in Roslyn in seconds (by default it's just reflecting over all attributes that inherit from [Fact] and [Theory]; it can be slightly more expensive with a custom discoverer, but we don't have one).

You can also crack the PDBs and pull out the source information for the tests in a very small amount of time (seconds) as well (and if this were a bottleneck, it could be made lazy, with source info pulled/displayed only when the user selects an individual test case, either for navigation or for viewing details).
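The default attribute scan described here could be approximated like this (a naive illustrative sketch, not xunit's actual discoverer, which runs inside the assembly under test):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Xunit;

public static class NaiveDiscovery
{
    // Enumerate public methods whose attributes derive from FactAttribute.
    // [Theory] derives from [Fact], so this picks up both.
    public static IEnumerable<MethodInfo> FindTests(Assembly assembly) =>
        assembly.GetTypes()
                .SelectMany(t => t.GetMethods())
                .Where(m => m.GetCustomAttributes()
                             .Any(a => a is FactAttribute));
}
```

A scan like this over even a large assembly is fast, which is the point being made: discovery itself is unlikely to be the minutes-long bottleneck.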

With both of those problems "gone", the remaining issues are:

  • The constant rediscovery of all tests, when you only need to rediscover for the assemblies that have been rebuilt (99% of the time, custom discoverers may break this logic)
  • The serialization of data between the test adapter and VS. It looks like you are using JSON and TCP, which is incredibly inefficient. Using IPC (when the host and client are on the same machine) and a binary serialization mechanism would be much more efficient. The only data that needs to be 'deserialized' is the data you care about. Everything else you can store as a byte[] and transmit back directly, without deserialization (on read from the adapter) and without reserialization (on write to the adapter).

@clairernovotny
Member

@tannergooding

xUnit can discover all 60k tests in Roslyn in seconds (by default its just reflecting over all attributes that inherit from [Fact] and [Theory], it can be slightly more expensive with a custom discoverer, but we don't have one).

That's not quite accurate. xUnit never uses reflection to statically look at tests. It always executes the discoverers, which can happen to be facts/theories, in the assembly under test. That's core to how xUnit is so flexible: you can provide custom discovery/execution mechanisms directly in the assembly under test.

@tannergooding
Member

@onovotny, you're correct; I just didn't explain myself very well 😄. I meant that the default discoverers (assuming you don't have any custom discoverers) appear to use reflection over attributes to locate the tests. Custom discoverers can use whatever mechanism they wish.

In either case, xUnit is able to discover all 60k of the Roslyn tests in seconds (not minutes), so discovery itself is not the likely bottleneck.

@benaadams

benaadams commented Mar 18, 2017

@tannergooding try opening Task manager next time and killing VsTest.Console while you are waiting...
(Once cpu has hit ~0%)

@davidfowl
Author

This just happened to me: I opened a largish solution and it discovered all tests. Then I clicked Run All, and it proceeded to rediscover all tests.

@codito
Contributor

codito commented Mar 20, 2017

@tannergooding I am able to reproduce it. Here are the numbers, and steps to try out "parallel discovery" in desktop-targeted projects:

1. Without any changes, I see following elapsed time for discovery roslyn.sln:
========== Discover test finished: 68797 found (0:10:50.8393771) ==========

2. With changes below, this is how the numbers look (~1/5th of earlier elapsed time):
========== Discover test finished: 68797 found (0:02:05.3858549) ==========

3. With project modification:

  • Click on the first test (AssemblyAttributeTests.AlgorithmAttribute)
  • Add a space
  • Build the project

Observation
I see only the single project built and discovered, in both case (1) and case (2). Let me know if I am missing something in the repro steps.

------ Discover test started ------
[xUnit.net 00:00:00.3935009]   Discovering: Roslyn.Compilers.VisualBasic.Emit.UnitTests
[xUnit.net 00:00:08.1349817]   Discovered:  Roslyn.Compilers.VisualBasic.Emit.UnitTests
========== Discover test finished: 3223 found (0:00:11.4669228) ==========

Steps to tryout parallel discovery

  1. Rename testplatform.config.txt to testplatform.config and place it in C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\Common7\IDE\Extensions\TestPlatform
  2. Restart VS
  3. Disable AppDomains for xunit adapter (rename test.runsettings.txt to test.runsettings and set it in Test -> Test Settings in VS or use xunit.runner.json)
  4. Enable parallel discovery in Test Explorer
  5. Build

Tests will start discovering. You may hit #76 though.
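For reference, a test.runsettings sketch that disables app domains; DisableAppDomain is a standard vstest RunConfiguration setting (the exact file contents of the test.runsettings.txt mentioned above may differ):

```xml
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <RunConfiguration>
    <!-- Run tests in the discovery/execution process directly,
         avoiding the app-domain overhead and file locks discussed above. -->
    <DisableAppDomain>true</DisableAppDomain>
  </RunConfiguration>
</RunSettings>
```

Select this file via Test -> Test Settings -> Select Test Settings File in VS.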


With both of those problems "gone", the remaining issues are:

  • The constant rediscovery of all tests, when you only need to rediscover for the assemblies that have been rebuilt (99% of the time, custom discoverers may break this logic)

The existing logic in Test Explorer should already do this. It assumes the build will only compile projects that have changed. It diffs the timestamps of the earlier test containers and triggers discovery only for the test containers whose timestamps changed after the build operation. If you have a repro (for desktop targets) that violates this, we will be happy to look into it. The number of containers sent for rediscovery should equal the number of dependent test projects impacted by the change in a project.

  • The serialization of data between the test adapter and VS. It looks like you are using JSON and TCP, which is incredibly inefficient. Using IPC (when the host and client are on the same machine) and a binary serialization mechanism would be much more efficient. The only data that needs to be 'deserialized' is the data you care about. Everything else you can store as a byte[] and transmit back directly, without deserialization (on read from the adapter) and without reserialization (on write to the adapter).

By default, desktop tests run via the older test platform; there is no JSON involved. The changes I've suggested in (2) above use the newer test platform (which uses JSON). I am not sure we can change the protocol to binary: it is documented and public for consumption by various IDEs/editors (the response from the test host is relayed directly back to editors). There are several optimizations possible, though (reducing JSON verbosity, using streams/custom serializers). They are tracked at https://github.com/microsoft/vstest/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20label%3Aperf

@davidfowl
Author

davidfowl commented Mar 29, 2017

@codito Is anyone working on fixing any of the issues mentioned? How much of it is open sourced? Maybe the community can help with some of it.

@PureKrome

First time VS Test user, long time R# user.

Started with VS 2017 Community Edition to see if it has caught up with ReSharper (aka R#) for some specific features. Literally unable to do any testing because of the tooling ... or more precisely, because of the same issues @davidfowl has accurately and kindly mentioned.

Literally all the points raised impact us here too :(

Even something as simple as David's point 3: run/debug the current test the cursor is in, or right-click in that test method (code) and run/debug.


Here's some sample screenies from R#.

No, this is not trying to be "MS is copying another product and trying to destroy it" .. but .. please be inspired by this premium product.

[R# screenshots]

A number of us feel that we just can't use the built in VS Test while these features aren't around.


Sure, we have paid R# licenses. But we find R# really slows down our VS experience (it does SOOO much, to be fair to it .. just too much for us), which is why we would love to see if we can start using VS Test instead.

This project should please be OpenSourced. Plz. Let the community help!

@davkean
Member

davkean commented Mar 29, 2017

Test Explorer does let you group by one level:
[screenshot]

@PureKrome

@davkean That's just one level though. Notice how I mentioned tiered levels?

It's nice to break it down by tiers. I think @mgravell said the same thing above?

@davkean
Member

davkean commented Mar 29, 2017

Ah, misread your second image, apologies. :)

@ManishJayaswal

@codito

@ManishJayaswal A few scenarios: editors (not all of which will be on the same runtime as the test runner or the test host), and the ability to decouple the test host in scenarios like devices or remote execution. However, I don't have the data to conclude that the JSON protocol is the biggest bottleneck right now. There has been extensive discussion and profiling in #349 and #485 which suggested other big rocks. We learned about extra app domains and source information (with cross-appdomain calls) in those exercises, which we're working to fix first. What would be an acceptable percentage of infrastructure overhead in a test run?

This does appear to be a significant bottleneck for LUT runs. With every LUT run (which we do after every edit), here is what we are seeing:

  • This greatly increases the size of payload we have to send over the wire for IPC. Since we have to do a lot of IPC between Test Host, test console and LUT process, it is showing up as a significant bottleneck. For a test run of about 2000 tests, the payload appears to be about 30 MB which needs to go over IPC for every run.
  • This also results in a significant serialization/de-serialization cost which happens at every hop across the process boundary. Since LUT does not need the JSON format, we could have significant time saving if we do not have to serialize and de-serialize this data.
  • This also increases the memory used in LUT process because we are generating the strings to hold the data we get over IPC and then additional JSON objects when these strings are converted to JSON. We will save all of this memory cost if we don't do this.

Additionally, I think the VS process will also have the memory overhead I mentioned above.

I do understand why you have designed it the way it is (to make it work for all scenarios); however, to deliver the performance that LUT needs, we need a bypass from this. It should be an alternative available to TP clients if the existing mechanism is too bulky for their scenarios. We (@genlu) do have perf traces which show these bottlenecks pretty clearly. We made changes to the Test Platform code to collect this information. Please let us know if you would like to see them.

@sbaid - we would need this to address perf and scale issues for LUT for the quarterly release. Please let us know if you want us to file a separate issue for this.
Additionally, it seems a lot of issues are getting mixed up in this meta issue. It would be good to list all related issues at the very top and mark them with an updated status (under investigation, completed, etc.) so that all of us know about the progress. If we don't, I worry that things may fall through the cracks.

@codito
Contributor

codito commented Mar 30, 2017

@ManishJayaswal I will follow up with you offline to get the traces.

@pvlakshm
Contributor

pvlakshm commented Mar 31, 2017

Thank you for all the feedback and discussions.

Here is a summary of the concerns raised in this thread. I have pointed to the relevant issue tracking it.
At the end I have added a pointer to the Test Explorer backlog. Please take a look there and vote up the individual issues therein. We will be executing based on the vote count.

1. The test explorer window is too narrow and test names are unreadable by default.
This is xUnit specific.
Names are controlled by config. Use "methodDisplay": "method". Please see here: https://xunit.github.io/docs/configuring-with-json.html

2. The test runner rebuilds the test project and its dependencies when running any test. This is a horrible experience in .NET Core projects because of the missing "up to date" check. The entire project graph is checked for stale assets just to run a single test.
This is related to dotnet/roslyn-project-system#62.
/cc @davkean, @srivatsn

3. I'm unable to right click a specific test in source and run the test. It seems to run all tests in that class instead of running the specific test method I targeted. This used to work before the move to csproj and has been broken with the new test runner and tooling.
This is xUnit-specific.
This appears to be newly broken in 2.2, and there is an open bug: xunit/xunit#1140

4. Test discovery is super slow across multiple projects. It seems as though test discovery happens sequentially for each project in the solution instead of in parallel. Why is that?
We are using the following issue to track scaling up test discovery for large solutions: #674

5. The test runner inside visual studio doesn't show console output. I want to see my Console.WriteLine output in the test runner window …
This is xUnit specific.
Please see here: https://xunit.github.io/docs/capturing-output.html.
Also, in 2.x, xUnit.net doesn't capture console output; you could use ITestOutputHelper: https://xunit.github.io/docs/capturing-output.html
Also, see here: xunit/xunit#1119

6. I can't see the active test being run. There are situations where I want to see the current test being run (both on the command line and in visual studio) so I can tell which one is hanging (if it hangs). It would have saved me so many hours if there was a simple way to show the active test being run in both the test explorer and on the command line.
We are using the following issue to track this: #626

7. Certain crashes make the test runner implode. I'm working on a refactoring and when I run tests sometimes this happens:
“The active test run was aborted. Reason: Unable to communicate with test host process.”

We are using the following issue to track this: #627

8. Is anyone working on fixing any of the issues mentioned? How much of it is open sourced? Maybe the community can help with some of it.
I have created a UV item: https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/18785326-make-the-vs-test-explorer-open-source. Please vote. Along with your vote feel free to indicate if you would be willing to contribute.

9. Group by namespace
Please note that the Test Explorer already supports group-by-namespace.

10. Run all or just several tests from Test Explorer under performance profiler
Please vote for the issue here: #660. We will use that to inform moving this into the backlog.

Test Explorer backlog
Here is our Test Explorer backlog: #676
It captures the focus of our upcoming work. Please consider voting on the individual issues therein.

@PureKrome

@pvlakshm or @codito - how about my questions about:

  • A tiered hierarchy instead of the current two tiers. I think this question is different from question (1), "test window too narrow...".
  • Moving the visual green/red test button/circle out of CodeLens to stand on its own. Not all of the CodeLens stuff, just the red/green test buttons. (Who is the manager to ask about this one? Yes, I know this is a huge long shot .. but you never know..)

Ref:

[screenshot]

Also .. today we were trying to use it again and had this issue: we ran all tests (say 50 or 100). A test (or two? three?) errored. We saw the "red error bar" at the top but had no idea which actual test errored.

Here's a simple example...

[screenshot]

Which test has errored? Then compare this to a tool that makes things easier to work with:

[screenshot]

Not hating at all - just trying to help and hope this feedback (under the OP's same topic) can be addressed :)

🙏

@PureKrome

Polite ping to @pvlakshm or @codito again ☝️

@pvlakshm
Contributor

Please see the summary above. Since there were some issues not related to the Test Explorer per se, I have created a new issue #725 to track Test Explorer specific improvements.

@DanHarman

DanHarman commented Aug 18, 2017

One thing that vexes me every time I try the VS test runner instead of ReSharper is the global 'default processor architecture' setting. I don't understand why it isn't actually a default: it's the architecture it runs all tests in. ReSharper, in contrast, looks at the architecture of the test project and uses that on a per-project basis.

It might be useful if the guys who own this component tried resharper unit testing for a while to appreciate the massive gaps in useability.

@llehn

llehn commented Aug 19, 2017

+1 to the test runner being suboptimal. I'm forced to use it due to a bug in the current ReSharper, and boy is it slow and annoying. In addition to what's been said, the test discovery exe keeps locking the xunit DLLs, so I had to add a pre-build step to kill it. I can't wait for R# to fix the bug so I can use it again, but it would be nice if VS weren't such an unproductive experience.

@williamwgant

Yeah, this is unusable. It rebuilds every single time I run a test. I'm looking at a minute and a half before it runs my single test, which is just: Assert.True(1 == 1).

@soeron

soeron commented Dec 1, 2017

01.12.2017: still valuable feedback, even with all the performance optimizations made.
What I can add is that analyzing problems is a pain. The logs must be enabled by editing config files, restarting VS, looking in a temp folder, etc. Even after they are enabled they are useless for figuring anything out.
I've spent the last 3 days trying to work out why test discovery fails after a Windows update, a VS update, and an xunit update, without a clue...

Please add an option in the Options menu to specify where the log file should go, rather than enabling it by editing config files.

@smadala
Contributor

smadala commented Dec 2, 2017

@soeron Thanks for the feedback. We have an internal issue to improve enabling logs (https://devdiv.visualstudio.com/DevDiv/VS.in%20Agile%20Testing%20IDE/_queries/edit/531191/?triage=true); I created #1315 here for tracking.

Even after they are enabled they are useless for figuring anything out.

Totally agreed 👍. Created an issue here #1316 .

I've spent the last 3 days trying to work out why test discovery fails after a Windows update, a VS update, and an xunit update, without a clue...

Can you create a separate issue for it with logs and repro project?

/cc @pvlakshm @cltshivash @ManishJayaswal for triage #1315 #1316

@ThatRendle

Regarding the Console.WriteLine thing, I was working on something yesterday and wishing that Core had an IConsole interface, which could be automatically wired up with a simple pass-through to the static class by the IoC. I ended up daydreaming about a framework-level attribute or interface which would allow you to specify a default implementation for interfaces to use if one has not been specified during IoC configuration, something that could be supported by all DI systems. Like this:

public class DefaultConsole : IConsole, IFallbackImplementationFor<IConsole>
{
    public void WriteLine(string s) => Console.WriteLine(s);
}
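No such framework attribute exists today, but the fallback behavior can be approximated in a container. Here is a toy, stdlib-only sketch of the idea; the `Resolver` type is hypothetical, invented purely for this example, and is not a real DI container:

```csharp
using System;
using System.Collections.Generic;

public interface IConsole { void WriteLine(string s); }

public class DefaultConsole : IConsole
{
    public void WriteLine(string s) => Console.WriteLine(s);
}

// Toy resolver, purely illustrative: Resolve falls back to a supplied
// factory when no implementation was registered, approximating the
// "fallback implementation" idea without framework support.
public class Resolver
{
    private readonly Dictionary<Type, object> _registrations = new Dictionary<Type, object>();

    public void Register<TService>(TService implementation) =>
        _registrations[typeof(TService)] = implementation;

    public TService Resolve<TService>(Func<TService> fallback) =>
        _registrations.TryGetValue(typeof(TService), out var impl)
            ? (TService)impl
            : fallback();
}

public static class Program
{
    public static void Main()
    {
        var resolver = new Resolver();
        // Nothing was registered for IConsole, so Resolve uses the fallback.
        IConsole console = resolver.Resolve<IConsole>(() => new DefaultConsole());
        console.WriteLine("hello from the fallback console");
    }
}
```

In real DI containers the same effect is usually achieved with conditional registration (register the default only if nothing else was configured), which is why a framework-level marker interface would mainly be a convenience.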

@snkota

snkota commented Dec 22, 2017

@jnm2

jnm2 commented Dec 22, 2017

As some of you know, I'm on a mission to be able to stop using ReSharper. Without a doubt the most frequent reason I need to keep turning ReSharper back on is the convenience of these things:

  1. A hierarchical display of tests by project, namespace, class, method, and test cases within parameterized methods.
  2. The ability to filter the view to see only those nodes (and their ancestors) with failing tests, or ignored tests, or warning tests, etc.
  3. The ability to right click anywhere in a tree and run or debug that node and all children or navigate.
  4. Multiple unit testing sessions, tabbed within the test window. Each session running independently of the rest and showing a separate chosen subset of tests. I can't stress this enough. I rarely want to see all tests in the same window. To run all tests I prefer either continuous testing tools or keyboard shortcuts to run all. If I'm using the test window, that means I'm repeatedly running or debugging a very specific selection of tests. I should not have to re-select them each time or run more than I need.
  5. The ability to add tests to the current session by clicking icons inline with a class or method declaration in code.
  6. The ability to start a new session by doing the same, or by right-clicking a node in an existing session.
  7. The ability to press the delete key to remove a selection of tests from a test session.

Of the several testing UIs I've seen, Visual Studio is the only one that lacks the intuitive hierarchical paradigm. When trying to run an arbitrary selection of tests repeatedly, scrolling through a massive flat list with near-uselessly-primitive grouping abilities is distracting and tediously manual. It leaves me switching ReSharper or NCrunch back on so that I feel like I can focus on coding.

I genuinely believe that the VS teams want to provide a focused, considerate tool out of the box. You wouldn't want to blindly copy another tool. This is one of the few areas in which I can say urgently that specific other tools have gotten it very right; thus this feedback. 😃

@kendrahavens

kendrahavens commented Mar 13, 2018

Hey all, I just wanted to give an update on the progress we've made in this experience. We still have a long way to go, but here are some improvements you can see in Visual Studio 2017 Update 15.6.

  1. The test explorer window is too narrow and test names are unreadable by default.

We've added a hierarchy view that should help the readability of test names as well as navigation. Projects, namespaces, and classes are shown as separate tiers in the hierarchy.
image

  4. Test discovery is super slow across multiple projects.

Real Time Test Discovery is on by default in Update 15.6 and should improve this greatly. It discovers tests from source code instead of from built assemblies and is much faster.

@PureKrome

PureKrome commented Mar 13, 2018

@kendrahavens Thanks for the update and I really don't want to add another +1 comment to a github thread .. but ...

just adding the hierarchy view has been such a godsend! Testing is already a bazillion (yes, I used scientific metrics to calculate the improvement level) times easier to manage.

Thank you for listening!

Looking forward to the other improvements that are hopefully coming too 🥂

@daiplusplus

daiplusplus commented May 5, 2020

With apologies for the thread necromancy (this issue is still open after all), but is there any "fix" for how the Test Explorer window automatically switches to a vertical split when it's made wider? (The whole point of making the Test Explorer window wider is to see more of the test-list section!) David Fowler reported it in his original post in 2017 and it's still a problem today:

After I expand it to see actual, it moves the summary window to the side (which I never wanted to begin with):

Screenshot

As I'm currently using xUnit_style_verbose_test_names_that_describe_what_should_happen, the still-narrow layout of the Test Explorer window doesn't work well with this naming convention (even with namespace and type grouping).

Also, right-clicking on the top few nodes in the explorer when tests are grouped by namespace and class (with xUnit at least; I don't think it affects MSTest, but I might be wrong) has a 2-3 second delay before the context menu actually appears. This is consistently reproducible on different machines. Right-clicking on a test node shows the menu near-instantly (but still with a subtle delay).

@kendrahavens

kendrahavens commented Mar 13, 2021

It's been a long while since we've given an update on this ticket. Almost every experience request has been addressed in the past several Visual Studio and VS Test Platform updates. Let me give a roll call:

  • 1. Too narrow, can’t read test names.
  • 2. Test runner rebuilds test project and dependencies when running any test.
    • A lot of work has gone into optimizing this. Here are a few improvements:
    • Visual Studio Test Explorer no longer builds projects that are unrelated to the selected tests in the “Run selected” case.
    • The Test Explorer will still always request to build test projects that contain the tests that the user selected to run. However, VS will short-circuit the build if it detects nothing has changed for the test project or its dependencies.
    • If code in either the test project or in the production project that is referenced from the test project has changed then VS will build the test project.
  • 3. Running one test from the editor runs all tests in that class
    • Fixed for .NET test frameworks.
  • 4. Slow test discovery
    • Addressed with source-based discovery, with more improvements to come. Also helped by caching test results between closing and opening Visual Studio.
  • 5. I want to see Console.WriteLine in output in the test runner window
    • We have some major updates coming to the test detail pane in Visual Studio 16.10 that should improve viewing output in the Test Explorer. We paid extra attention to how to gracefully truncate long output so the user is less often required to 'Open additional output' in a separate view.
    • Console.WriteLine handling depends on the test framework used. If the test framework/adapter sends Console.WriteLine to stdout/stderr so that it is returned in the TestResult (which has fields for stdout and stderr), then it will appear in the summary pane (truncated as needed) and its full text will be inserted as a collapsible section in the full log file.
  • 6. I can’t see active test being run.
    • You can see a spinner icon on any test currently being run. There is a slight delay so it only appears for longer running tests. The delay was added because showing a different icon for only a few milliseconds at a time was too distracting by default.
  • 7. Test process failures and crashes need better reporting and blame on which test crashed the process.

Going forward, there is still lots to fix in the testing experience, but I'd suggest we open fresh issues for the individual features and close this one. I'd like to thank the community for following, upvoting, and commenting their scenarios! Hopefully most of you see a night and day difference between the test experience of 2017 and what we have today. (And if you don't, please keep the feedback coming!)

Also, a note on this repo: the vstest repo specifically refers to the VSTest Platform. The majority of Visual Studio Test Explorer feedback should be submitted on https://developercommunity.visualstudio.com/ with the Provide Feedback tool in Visual Studio. This distinction exists because the Test Explorer code is closed source and not part of this repo. Thank you again for helping us improve this experience for the millions of developers who use this tool!

@Jehoel I believe we will be able to look at adding a splitter option in the next few sprints. Please follow this developer community ticket: "Group Summary" section in "Test Explorer" moves. If you are still experiencing perf issues, I’d also encourage you to file a bug on developer community.

@Haplois Haplois closed this as completed Mar 18, 2021
@nohwnd
Member

nohwnd commented Mar 24, 2021

@davidfowl

5.2) I experimented with using AsyncLocal (as suggested above) to preserve the output per test in MSTest, and found this amazing XUnit extension by @SimonCropp that already implements it for XUnit. https://github.com/SimonCropp/XunitContext#xunitcontextbase

All you need to do is install the XunitContext package and either use the provided base class or register the context yourself. Then all Console writes (and other sources as well) will be written to the output stream per test and echoed in the test output when the test fails.

Here is output from two tests running in parallel:

image

As you can hopefully see in the output above, the results are written to their respective tests even though the tests run in parallel. The nice thing is that even a non-awaited task that finishes while the test is still running will write into the correct output.

There are 3 more edge cases that I was able to identify:

  1. When a non-awaited task takes longer than the whole test run, you won't see the output in any test, because the process is killed before it completes.
  2. When a non-awaited task completes after its test finishes but before the process is killed, you won't see the output in any test, because it went to the standard output and was discarded.
  3. When a task explicitly suppresses ExecutionContext flow, you won't see the output, because it went to the standard output and was discarded.
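Edge case 3 stems from ExecutionContext.SuppressFlow. As a minimal, stdlib-only illustration (not part of the thread's code): an AsyncLocal value flows into Task.Run by default, because the task captures the current ExecutionContext, but it does not flow when capture is suppressed:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class FlowDemo
{
    static readonly AsyncLocal<string> Current = new AsyncLocal<string>();

    public static (string Flowed, string Suppressed) Run()
    {
        Current.Value = "test-1";

        // Default: Task.Run captures the ExecutionContext, so the
        // AsyncLocal value flows into the task.
        var flowed = Task.Run(() => Current.Value);

        // Inside SuppressFlow the context is not captured, so the task
        // observes no AsyncLocal value (this is edge case 3 above).
        Task<string> suppressed;
        using (ExecutionContext.SuppressFlow())
        {
            suppressed = Task.Run(() => Current.Value);
        }

        return (flowed.Result, suppressed.Result);
    }

    public static void Main()
    {
        var (flowed, sup) = Run();
        Console.WriteLine($"flowed={flowed ?? "null"}, suppressed={sup ?? "null"}");
    }
}
```

This is why a per-test output capturer built on AsyncLocal cannot see writes from suppressed-flow tasks: they run with no ambient test identity, so their output falls through to the shared standard output.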

I was able to capture 2 and 3 in my extension because I used a global setup to replace the console logger before any test ran. That said, we don't have a way of showing that output in Test Explorer at the moment anyway, but it might be an improvement to consider, at least getting it into the logs. Each of the edge cases has an example in the code below, in case @SimonCropp would like to comment on or correct my usage of his NuGet package.

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
using Xunit;
using Xunit.Abstractions;

namespace TestProject1
{
    public static class TestContext
    {
        public static bool Flag = true;
        public static Stopwatch Sw = Stopwatch.StartNew();
    }

    public class TestedClass
    {
        public async Task Method1(string id)
        {
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} method1 called from {id}");
            await Method2(id);
        }

        public Task Method2(string id)
        {
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} method2 called from {id}");

            // not awaited on purpose
            Task.Run(() => { Thread.Sleep(200); Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} Non-awaited short task finished. {id}."); });
            // this will never show in output because the process terminates before this task can finish
            Task.Run(() => { Thread.Sleep(10000); Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} Non-awaited long task finished. {id}."); });

            using (ExecutionContext.SuppressFlow())
            {
            // this won't show up in the test result because ExecutionContext flow was suppressed
                Task.Run(() => Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} - Suppressed. {id}"));
            }

            return Task.CompletedTask;
        }
    }

    public class UnitTest1 : IDisposable // Register the outputs yourself.
    {
        public UnitTest1(ITestOutputHelper output)
        {
            XunitContext.Register(output);
        }
        public void Dispose()
        {
            XunitContext.Flush();
        }

        [Fact]
        public async Task TestMethod1()
        {
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} t1-start");
            var tc = new TestedClass();
            await tc.Method1("t1");
            // wait for the other test so we know there are tests running in parallel
            while (TestContext.Flag)
            {
                Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} t1-wait");
                await Task.Delay(100);
            }

            await tc.Method1("t1");
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} t1-end");

            // wait for the short non-awaited tasks to complete to see if they print in the correct 
            // test output, if the test happens to run long enough
            await Task.Delay(500);
            
            throw new Exception("err");
        }
    }

    public class UnitTest2 : XunitContextBase // Or use the provided base class
    {
        public UnitTest2(ITestOutputHelper output) : base(output)
        {
        }

        [Fact]
        public async Task TestMethod2()
        {
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} t2-start");
            var tc = new TestedClass();
            await tc.Method1("t2");
            await Task.Delay(1000);
            TestContext.Flag = false;
            Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} t2-end");

            Task.Run(() => { Thread.Sleep(200); Console.WriteLine($"{TestContext.Sw.ElapsedMilliseconds:0000} Non-awaited short task finished after test t2 finished."); });

            throw new Exception("err");
        }
    }
}
