Undeclared compilation errors should fail analyzer/fixer tests #21550

Open · sharwell opened this issue Aug 16, 2017 · 7 comments

@sharwell (Member) commented Aug 16, 2017

Currently tests silently pass if they contain a compilation error, which can lead to false negatives. For example, if a test verifying that a diagnostic is not reported for some syntax scenario contains a syntax error, it's possible that the diagnostic is only omitted because of the syntax error.

Similarly, for code fixes, it's possible for a fix to introduce a compilation error without alerting the developer that the fix is behaving incorrectly. All tests against source code should require that any compilation errors present in the source be declared, with those errors validated as a core part of the test (see the sketch below).

📝 This recently affected #21528.
🔗 StyleCop Analyzers adopted a similar approach in DotNetAnalyzers/StyleCopAnalyzers@746e551, and improved the usability in DotNetAnalyzers/StyleCopAnalyzers#743 and DotNetAnalyzers/StyleCopAnalyzers#2136.
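
For illustration, declaring a compiler error in a test input might look like the following. This is a minimal sketch using the Microsoft.CodeAnalysis.Testing packages (which postdate this issue but implement the proposed behavior), not the test infrastructure Roslyn used at the time; `MyAnalyzer` is a placeholder for the analyzer under test, and the CS1002 location is specific to this sample input.

```csharp
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Testing;
using Microsoft.CodeAnalysis.Testing;
using Microsoft.CodeAnalysis.Testing.Verifiers;

public class DeclaredErrorExample
{
    // MyAnalyzer is a placeholder for the analyzer under test.
    public async Task TestWithDeclaredCompileErrorAsync()
    {
        var test = new CSharpAnalyzerTest<MyAnalyzer, XUnitVerifier>
        {
            // The missing ';' after 'int x = 0' is intentional.
            TestCode = @"
class C
{
    void M()
    {
        int x = 0
    }
}",
        };

        // Every compiler error in the input must be accounted for; an
        // undeclared CS1002 ("; expected") would fail the test rather
        // than letting it silently pass.
        test.TestState.ExpectedDiagnostics.Add(
            DiagnosticResult.CompilerError("CS1002").WithLocation(6, 18));

        await test.RunAsync();
    }
}
```

Under this model a broken input can no longer masquerade as a passing "no diagnostic reported" scenario.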

@sharwell sharwell self-assigned this Aug 16, 2017
@Pilchie Pilchie added this to the 15.5 milestone Aug 16, 2017
@CyrusNajmabadi (Member)

> All tests against source code should require that any compilation errors present in the source be declared,

I disagree. A core point of the refactoring/fix tests was that they describe the behavior the user should experience, without depending on which underlying mechanism produces it. For example, GenerateMethod moved from being an entirely different system to one based on analyzers. The tests migrated entirely because they define only the experience the user has (i.e., I move to this location, I invoke the feature, I get this final result), not anything about how the feature is implemented (i.e., whether it depends on a particular diagnostic).

@sharwell (Member, Author)

> A core point of the refactoring/fix tests was that they describe the behavior the user should experience, without depending on which underlying mechanism produces it.

In theory this is fine. In practice it causes silent behavior changes in tests, which can lead (and have led) to false positives: a test passes even though the feature under test is broken. When StyleCop Analyzers adopted this approach, I believe more than 100 unintended errors were found in test inputs.

@bkoelman (Contributor) commented Nov 17, 2017 via email

@CyrusNajmabadi (Member)

I would like to see actual data on the counts here.

@CyrusNajmabadi (Member)

Note: I think it would be fine if a test had to declare any errors introduced by the fix. That's part of its behavior, after all.
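
In the Microsoft.CodeAnalysis.Testing packages this later became expressible by declaring expected diagnostics on the fixed state. A minimal sketch, assuming hypothetical `MyAnalyzer`/`MyCodeFix` types and placeholder `inputSource`/`fixedSource` strings; the CS0103 ID and location are illustrative only:

```csharp
using Microsoft.CodeAnalysis.CSharp.Testing;
using Microsoft.CodeAnalysis.Testing;
using Microsoft.CodeAnalysis.Testing.Verifiers;

var test = new CSharpCodeFixTest<MyAnalyzer, MyCodeFix, XUnitVerifier>
{
    TestCode = inputSource,   // placeholder: input that triggers the fix
    FixedCode = fixedSource,  // placeholder: expected output of the fix
};

// Errors the fix leaves in (or introduces into) the fixed code must be
// declared explicitly, or the test fails.
test.FixedState.ExpectedDiagnostics.Add(
    DiagnosticResult.CompilerError("CS0103").WithLocation(3, 9));

await test.RunAsync();
```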

@bkoelman (Contributor) commented Nov 17, 2017 via email

@CyrusNajmabadi (Member)

That seems pretty low.

And, as I mentioned in my original comment, the errors present before the fix is run aren't part of the design of these features; checking them tests an unrelated part of the system.

Now, I am OK with collecting the errors prior to the fix being run to determine whether any errors were added or changed. That user-facing behavior is caused by the feature, and should be captured (both to document it and to make sure it doesn't change unintentionally).
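
One way to approximate that check is to compile the input and the fixed output and compare the resulting error sets. A rough sketch against the public Roslyn compilation APIs; `originalSource` and `fixedSource` stand in for the test input and the fix's output:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

static class FixErrorDiff
{
    // Compiles a single source file against mscorlib/System.Private.CoreLib
    // and returns the IDs of all compiler errors.
    static IEnumerable<string> GetCompileErrorIds(string source) =>
        CSharpCompilation.Create(
                assemblyName: "FixVerification",
                syntaxTrees: new[] { CSharpSyntaxTree.ParseText(source) },
                references: new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) })
            .GetDiagnostics()
            .Where(d => d.Severity == DiagnosticSeverity.Error)
            .Select(d => d.Id);

    // Returns the error IDs that appear in the fixed code but not in the
    // original input. Comparing by ID alone is a simplification: it
    // ignores error counts and locations.
    public static IReadOnlyList<string> GetIntroducedErrorIds(string originalSource, string fixedSource) =>
        GetCompileErrorIds(fixedSource)
            .Except(GetCompileErrorIds(originalSource))
            .ToList();
}
```

A harness built on this could tolerate pre-existing errors while failing the test whenever `GetIntroducedErrorIds` returns anything the test did not declare.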
