How to ask mstest to stop once it finds a discovery failure like UTA007 #499
Comments
A similar problem, where a discovery failure doesn't report as an error and tests silently fail to run, is apparently correctable by using PackageReference. Are you using PackageReference or packages.config for your test project?
This is an ASP.NET Core project using PackageReference.
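For reference, a test project consuming MSTest via PackageReference typically contains a csproj fragment along these lines (the version numbers shown are illustrative placeholders, not a recommendation):

```xml
<!-- Hypothetical test .csproj fragment; package versions are placeholders. -->
<ItemGroup>
  <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.8.0" />
  <PackageReference Include="MSTest.TestFramework" Version="1.3.2" />
  <PackageReference Include="MSTest.TestAdapter" Version="1.3.2" />
</ItemGroup>
```

With packages.config, the same packages would instead be listed in that file and referenced via hint paths, which is the setup the earlier comment suggests moving away from.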
@sadjadbp, currently this is by design: both the test platform and the framework try to run as many tests as possible until some fatal error occurs. @jayaranigarg, any thoughts on this?
@sadjadbp, there is currently no backlog item to add such a feature to vstest.console, as we intend to run as many tests as possible for the user. However, for such cases users can always generate a trx file by passing --logger:trx. That report can be analyzed in VSTS or the VS IDE, as per user convenience, where an appropriate warning is shown for such methods.
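The suggestion above can be sketched as a command line like the following (the dll path is a placeholder):

```shell
# Hypothetical invocation; MyTests.dll stands in for the real test assembly.
# --logger:trx writes a .trx file under TestResults\ that records warnings
# for tests the adapter could not run.
vstest.console.exe MyTests.dll --logger:trx
```

Note that this only surfaces the discovery warnings in the trx report; on its own it does not change the exit code of the run.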
@mayankbansal018 - The user intention here is clearly to execute the code marked as a test. It cannot be by design to report a successful overall result when there is a failure to execute any test.
Yes, thanks @StingyJack, that's exactly my point. That doesn't seem to be a good answer. We didn't know we had a failing/missing test.
I agree with the user's intention to be able to run these tests. I'll talk to the team internally and discuss whether it is okay to take such a fix, since it could break other customers.
I'm fine with a warning as long as I would have an option to "treat warnings as errors", and that's my original ask here.
@mayankbansal018 - if you can take a fix for the older QualityFramework dll, then possibly. As I've noted in other issues, MSTest2 is not an option for me, as it currently requires using NuGet PackageReference in order to avoid the "silently skipped tests" issue at topic here. I'm not sure who would be depending on tests not being executed and would complain about fixing this. If fixing this breaks someone's code, then that is code that needs to be broken. This is really not something to warn people about; the program cannot comply with the user request ("run all the tests defined"). It's not the same as the program running sub-optimally. Also, a warning about a test adapter (any warning, really) is easily missed in any project with more warnings than the build summary or error list can show in a single page view (I am currently experiencing one type of warning that requires about 10 pages of scrolling in the error list at its default height).
@sadjadbp @StingyJack, apologies for the delay on this; I've not been able to spend enough time on it. I will try to get back within a day or two.
@jayaranigarg, @cltshivash, I was looking into this issue, and it seems a fair ask that we report tests as failed if we cannot run them. For example, if a test is marked as async but does not have return type Task, we do not run such tests and just throw a warning. We do a similar thing for tests that are marked as Ignored, and show them as failed per the user's input. On similar lines, we can report non-runnable tests as Failed.
Closing this issue. Tests that are not runnable can be reported as failed if so specified via the option provided for this.
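The comment above does not name the option. Assuming it refers to the MSTest V2 adapter's MapNotRunnableToFailed runsettings setting (added in later MSTest V2 releases), enabling it would look roughly like this:

```xml
<!-- Sketch of a .runsettings file; assumes the option referenced above
     is MSTest's MapNotRunnableToFailed setting. -->
<RunSettings>
  <MSTest>
    <MapNotRunnableToFailed>true</MapNotRunnableToFailed>
  </MSTest>
</RunSettings>
```

The runsettings file would then be passed to the runner, e.g. via the /Settings argument of vstest.console.exe, so that non-runnable tests such as the async-void case here are reported as failures instead of warnings.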
@vagisha-nidhi - permitting silent failures is completely unexpected behavior. It looks like you have released software with broken functionality that is borderline "evil", and then are trying to avoid fixing it by sitting on the bug report for a few months and trying to sneak-close it, instead of throwing an exception. You should ask a few of the other teams at MSFT if they are OK with some portion of their unit tests just never being executed, and how they feel about not getting an error or any notice when it happens. Try to use my plain, non-wordsmithed description when you ask them.
@StingyJack The change has been made keeping backward compatibility in mind. If you do want stricter enforcement, please specify the option called out above.
@cltshivash - To be clear, this is a program or module that does not heed user instruction. It fails to perform the command issued to it, and does so silently. And this program is one that acts as a quality gate for software - many times the only quality gate after the developer's initial testing when coding a change or addition. I can understand some weird quirk being re-coded into an application to maintain user experience, backward compat, or general business continuity, but this is definitely not one of those weird quirks. This is directly counter to that last reason; in one case, I had about 500 of a team project's 2K unit tests not being executed and not throwing an error. This was happening while preparing a major release and a series of updates, and there were other team projects that were/are still affected by this. The position that people want silent failures in their quality control and testing pipeline, because they are used to the silent failures and expect them, is not very defensible. If there are people who want this specific bug to be kept in, let them ask for it. But for now, they are out-voted 2-0. Please reopen and fix. Thank you.
Description
We had a bunch of tests whose signatures were async but whose return types were void.
So we were receiving errors like this in the Azure DevOps test step:
2018-10-11T06:06:39.0663002Z [MSTest][Discovery][x.dll] UTA007: Method Test1 defined in class Class1 does not have correct signature. Test method marked with the [TestMethod] attribute must be non-static, public, return-type as void and should not take any parameter. Example: public void Test.Class1.Test(). Additionally, if you are using async-await in test method then return-type must be Task. Example: public async Task Test.Class1.Test2()
The question here is what parameter we could pass to vstest.console.exe to break/stop on these errors. We basically didn't know these tests were not running for a long time.
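The failing pattern behind the UTA007 message above, and its fix, can be sketched as follows (class and method names are hypothetical):

```csharp
using System.Threading.Tasks;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class Class1
{
    // BAD: async void - the adapter cannot await this, so discovery
    // reports UTA007 and the test is silently never run.
    [TestMethod]
    public async void Test1_Broken()
    {
        await Task.Delay(1);
    }

    // GOOD: async Task - discoverable and runnable.
    [TestMethod]
    public async Task Test1_Fixed()
    {
        await Task.Delay(1);
    }
}
```

Changing the return type from void to Task is the only difference; with void, exceptions and completion of the test body cannot be observed by the test framework, which is why the adapter refuses to run it.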