Setup Codecoverage CI #993
Given the number of contributions and individual maintainers, IMO it would be a good idea to add a code coverage check so we can see whether new additions are decreasing coverage and need improved testing.

Thoughts / Suggestions for tooling?
Comments
I'm not a fan of the idea. I've seen it used several times in different repositories. In theory the tool is useful; in practice, well... The issue is that the percentage of code covered by the tests depends on the codebase. Say you make a change to improve (lengthen) an error message: if that error is not hit by the tests, the CI will fail, even though adding a test was out of scope for the PR (the PR is focused on a better error message). There are other situations where a code coverage CI would fail when it should not.
Code coverage is also not a good proxy for the quality of the tests. For example, it won't catch a user who adds a feature but doesn't bother changing the default inputs of the function. Even if we add a threshold (for example, allowing the coverage to drop by 0.1%), people making small pull requests won't be flagged when they should be, and in the end it adds up...

In all honesty, I've never seen a repo where code coverage was implemented and helpful, except for the ones where it's completely silent. I'd rather go toward other good ideas like automatic checks (#992, #989), flake8, and other tools that we can trust to do a good job in any situation. I'd be OK with it if it's completely silent, meaning coverage appears in the logs, file by file, for the people who care, but with no CI check depending on it and no bot posting useless notifications on the pull request. Sorry about my rant here 😓 I've had too many failed CIs, blocked pull requests and useless notifications.
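To make the "silent" option concrete, here is a minimal sketch of what that could look like, assuming Codecov were the tool of choice (the `informational` and `threshold` keys come from Codecov's YAML configuration; `comment: false` turns off the bot comment on pull requests):

```yaml
# codecov.yml — hypothetical "silent" setup: report coverage, never block
coverage:
  status:
    project:
      default:
        informational: true   # status always passes; the number is just displayed
        threshold: 0.1%       # tolerated drop, if the check were ever enforced
    patch:
      default:
        informational: true   # same for per-PR (patch) coverage
comment: false                # no bot comment on the pull request
```

With both statuses informational, the numbers still show up for whoever cares, without ever blocking a merge.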
I looked into how to implement minimal code coverage for Python with Bazel, with just a small report we could get by running a command locally. Some findings: bazelbuild/rules_python#43. Long story short: if we run the unit tests with Bazel, we can't get coverage, unless someone is motivated enough to make a pull request to the Bazel repository.
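For reference, outside Bazel that kind of small local report is easy to get from coverage.py; a minimal sketch, assuming the unit tests can also be run under pytest:

```shell
pip install coverage pytest
coverage run -m pytest   # run the test suite under coverage.py
coverage report -m       # per-file report, with the missing line numbers
```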
Thanks for the info! I've subscribed to those issues and will keep an eye on them. In any case, I think you made a number of good points about its value, so I'm closing this until further information is available.