Code coverage is a metric that shows how much of the code, in terms of entities such as statements, branches, and functions, runs when the tests are executed. It is a good way to see how much of the business logic is tested (the expected & actual values are compared, assuming no other measurable side effect), and it helps ensure that existing functionality does not break when newer features are added or older ones are refactored. While it's not an absolute measure of code quality, higher code coverage usually implies well-written code with minimal side effects.
Currently tests are yet to be written (tracked @ #49), so it'd be great:

- to run them on every pull request & PR merge (I believe that'd be the push/commit event); see the sketch after this list
- to have informational alerts about coverage going up/down when PRs are raised, so that people feel encouraged to write tests (automatically managed by CodeCov); as we are ramping up unit testing, it might not be a very good idea for the CI to fail because code coverage is below some X%
- to have badges on README.md, badges are cool 🎉
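For reference, here's a minimal sketch of what such a workflow could look like, assuming CodeCov ends up being the tool of choice; the file name, branch name, Go version, and action versions below are placeholders, not decisions:

```yaml
# .github/workflows/test.yml (hypothetical name)
name: Tests

on:
  push:
    branches: [master]  # placeholder; use the repo's default branch
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-go@v2
        with:
          go-version: 1.16  # placeholder; match the version in go.mod
      # Generate a coverage profile across all packages
      - run: go test -race -covermode=atomic -coverprofile=coverage.out ./...
      # Upload to CodeCov; informational only, never fails the build
      - uses: codecov/codecov-action@v1
        with:
          file: ./coverage.out
          fail_ci_if_error: false
```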
Good starting research points:
- CodeCov is the go-to tool for many projects, but are there any alternatives that might be better?
- If CodeCov is the tool of choice, is it better to use the CodeCov bash uploader or the CodeCov GitHub Action?
- Likewise, should running UTs be a workflow separate from the one that runs code linters (GolangCI-Lint)?
Thanks for the research, let's stick to CodeCov. On the bash uploader vs. the GitHub Marketplace action, do whichever is easier.
Let's keep linting and unit testing in separate workflows; it might help actions/checkout cache results better, and it makes it easier to see what failed without digging into the logs.
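To illustrate the split, the linter could run as its own workflow, e.g. via the golangci-lint action; again, the file name and versions below are placeholders:

```yaml
# .github/workflows/lint.yml (hypothetical name, separate from the test workflow)
name: Lint

on:
  push:
    branches: [master]  # placeholder; use the repo's default branch
  pull_request:

jobs:
  golangci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Runs GolangCI-Lint as its own check, so a lint failure is
      # visible at a glance without opening the test job's logs
      - uses: golangci/golangci-lint-action@v2
        with:
          version: latest
```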