Project Summarizer is a tool that summarizes the various files produced by other tools during a build or test process. It is intended to be executed after a test script or build script has run. The benefit of using the Project Summarizer tool is a quick summary of the more verbose information produced by those other tools. The goal is to provide the minimum useful level of feedback on changes to the project, so that a simple look at the summarized information can replace a more costly lookup, such as having to switch focus to a locally hosted web page to figure out the impact of a change.
Our hope is that we can help developers achieve that goal at least 50 percent of the time.
The currently supported summarizers are:
- JUnit for test results, as supported by JUnit and PyTest
- Cobertura for code coverage, as supported by pytest-cov
There are plans for an extension mechanism to support other summarizers soon.
What is reported on, and how that information is generated, is up to the development team for each project. The Project Summarizer tool aims to condense that reported information into a simple, glanceable report.
Normally a project will have a build/test framework or a build/test script to perform this action. The Project Summarizer tool can then be added to the end of that process, to be executed on success or failure.
If the project uses a test framework that exports a JUnit-compatible results file, the --junit <file> argument is used when calling the Project Summarizer tool. If the project uses a coverage framework that exports a Cobertura-compatible coverage file, the --cobertura <file> argument is used when calling the Project Summarizer tool.
When set up this way, the tool will present a quick summary of the contents of those two files. In addition, the tool will create summary files in the report directory of the project.
While the report files are not intended for human consumption, their summarized information should be easy enough to read and understand, if needed.
While complete information on the current state of the project is useful, our development team finds that most often they are looking for what has changed.
That is where the Project Summarizer tool shines.
But, to understand what has changed, a benchmark or snapshot of an earlier "current" state must be placed somewhere. For the Project Summarizer tool, those summary files in the report directory are published to the publish directory using the --publish argument.
In our team, publishing is performed as the last action before committing changes to a project's repository.
The intent of that action is that we can always determine what changes have occurred since the last commit.
If we have any doubts about the integrity of that information, we can publish the summaries at the start of working on a new issue, just to get the confidence that we have the right summaries.
Once the project's summaries have been published, the --only-changes argument can then be used to report only on the changes that have occurred since the last published summaries. With that argument present, the summaries will display only the values that have changed since the published summaries, along with the amount of change that has occurred.
When adding, removing, or enabling tests, this is useful for making sure that the count of changed tests is as expected. When making any changes to the code that the tests cover, this is useful for seeing the effect the change has on code coverage metrics.
Our team uses test-driven development and keeps a high code coverage metric for all our projects. The Project Summarizer tool allows us to see the impact of our current changes on the existing tests, both enabled and disabled. It also allows us to keep track of the impact of any code changes on our coverage metrics. With both summaries, if the reported information is outside of our expectations, we can then look at the more comprehensive reports to find the needed information. But over half of the time, the summary information alone is enough to answer our questions about the changes we have made.
It is recommended that projects do not commit the contents of the report directory to a repository, and only commit the contents of the publish directory.
While the decision to follow that recommendation is up to development teams, our team has found that it provides a particularly useful summary of what has changed since the last commit.
That information has helped our team ensure that the right tests have changed and that our code coverage is not negatively affected, all with a simple glance.
To enforce this in our projects, we added the following line to our .gitignore files:
report/
These samples were captured against the Project Summarizer project itself and have not been changed.
Generated against this commit.
Test Results Summary
--------------------
CLASS NAME TOTAL TESTS FAILED TESTS SKIPPED TESTS
test.test_coverage_model 6 0 0
test.test_coverage_profiles 3 0 0
test.test_coverage_scenarios 12 0 0
test.test_main 4 0 0
test.test_publish_scenarios 9 0 0
test.test_results_scenarios 19 0 0
test.test_scenarios 1 0 0
--- -- - -
TOTALS 54 0 0
Test Coverage Summary
---------------------
TYPE COVERED MEASURED PERCENTAGE
Instructions --- --- ------
Lines 563 563 100.00
Branches 184 184 100.00
Complexity --- --- ------
Methods --- --- ------
Classes --- --- ------
Generated against this commit with no observable changes.
Test Results Summary
--------------------
Test results have not changed since last published test results.
Test Coverage Summary
---------------------
Test coverage has not changed since last published test coverage.
Generated against this commit.
Test function test_junit_jacoco_profile was renamed to xxtest_junit_jacoco_profile so that it would not be executed.
Test Results Summary
--------------------
CLASS NAME TOTAL TESTS FAILED TESTS SKIPPED TESTS
test.test_coverage_profiles 2 (-1) 0 0
--- -- - -
TOTALS 53 (-1) 0 0
Test Coverage Summary
---------------------
TYPE COVERED MEASURED PERCENTAGE
Lines 557 (-6) 563 98.93 (-1.07)
Branches 178 (-6) 184 96.74 (-3.26)
Our team's development is primarily done on Windows systems. As such, any examples that we present will typically be Windows CMD scripts. We have a project note to supply Bash scripts for the project soon.
For the Project Summarizer project itself, there is a ptest.cmd script that allows for various modes to be used in executing PyTest against the project. These different modes allow for more focused testing depending on the needs of the developer at the time. Those different modes use environment variables to specify how to execute PyTest; for the purpose of this documentation, those variables are ignored to supply a clearer picture of how the Project Summarizer tool works.
With those other environment variables out of the way, the heart of the script is the PYTEST_ARGS environment variable.
Under normal operation, that environment variable is set to:
--timeout=10 -ra --strict-markers --junitxml=report/tests.xml --html=report/report.html
When testing with code coverage applied, the following text is appended to the contents of that variable:
--cov --cov-branch --cov-report xml:report/coverage.xml --cov-report html:report/coverage
As the project uses pipenv, when the tests are executed, they are effectively executed with the following command line:
pipenv run pytest %PYTEST_ARGS%
The important parts here are the --junitxml argument and the --cov-report xml: argument to PyTest. The first argument specifies the location where the JUnit-compatible report of test results will be written. Similarly, the second argument specifies the location where the Cobertura-compatible report of test coverage will be written.
Given that setup, adding the Project Summarizer to the ptest.cmd script is easy. As code coverage can sometimes slow test execution, the following section of the script is used to set up the execution of the tool:
set PTEST_REPORT_OPTIONS=--junit %PTEST_TEST_RESULTS_PATH%
if defined PTEST_COVERAGE_MODE (
    if not defined TEST_EXECUTION_FAILED (
        set PTEST_REPORT_OPTIONS=%PTEST_REPORT_OPTIONS% --cobertura %PTEST_TEST_COVERAGE_PATH%
    )
)
pipenv run %PTEST_PROJECT_SUMMARIZER_SCRIPT_PATH% %PTEST_PROJECT_SUMMARIZER_OPTIONS% %PTEST_REPORT_OPTIONS%
As the summarizer is being used in the same project in which it is created, the PTEST_PROJECT_SUMMARIZER_SCRIPT_PATH variable is set to point to the project's main.py module. Normally, that value would be the name of the Python package, project_summarizer.
The PTEST_PROJECT_SUMMARIZER_OPTIONS variable holds the --only-changes argument by default, but that argument can be turned off if desired.
The PTEST_TEST_RESULTS_PATH variable is set to report\tests.xml to correspond with the --junitxml=report/tests.xml argument passed to PyTest. The PTEST_TEST_COVERAGE_PATH variable is set to report\coverage.xml to match the --cov-report xml:report/coverage.xml argument passed to PyTest.
Because our team wanted publishing summaries to be a function of testing, there is an alternate flow in the ptest.cmd script that publishes the summaries. This is easily accomplished by invoking the summarizer with only the --publish argument.
All the samples in the Sample Output section are generated using the ptest.cmd script with the Project Summarizer tool added in.
- Used as part of the normal development process, ptest.cmd --publish was previously invoked to benchmark the summaries for the last commit.
  - This effectively invoked project_summarizer --publish.
- ptest -c -f was invoked to run tests with coverage and supply the Full Output sample.
  - This ran PyTest with coverage enabled, and effectively invoked project_summarizer --junit report\tests.xml --cobertura report\coverage.xml.
- ptest -c was invoked to run tests with coverage and present the No Change sample.
  - This ran PyTest with coverage enabled, and effectively invoked project_summarizer --only-changes --junit report\tests.xml --cobertura report\coverage.xml.
- ptest -c was invoked to run tests with coverage and present the Expected Change sample.
  - Same as above, but with the test "missing", it reported that test as not being executed and reported the coverage that was missing.
After use in different real-world scenarios, the need for better control over the report output became apparent. To that end, two new arguments were added to the command line: --quiet and --columns.
The --quiet argument instructs the report summarizers to only generate their output files, and not to generate any console output. This feature is useful on servers where the tool is still executed to keep things coordinated with developer workflows, but the console output is not important.
In those cases where the output on servers is relevant, the --columns argument is useful for specifying the number of character columns to use for the output. The integer value passed after the --columns argument is passed on to the table formatting package that is suggested for plugins: columnar. In the past, when the tool has been run in server environments, there have been situations where the columnar package's determination of the number of output columns has been wrong or has returned 0. By using this setting, those situations can be sidestepped by hardwiring the number of columns from the command line.
As mentioned above, the report directory is used to generate summary reports in, and those reports can then be published into the publish directory. If different names are wanted for these two directories, they can be set with the --report-dir argument and the --publish-dir argument.
Note that the report directory, either the default report or the value supplied with the --report-dir argument, must exist before this tool is called. However, the publish directory, either the default publish or the value supplied with the --publish-dir argument, will be created if it does not exist.
The 0.5.0 releases are to get this project on the board. Once that is done, we have plans to implement an extension mechanism to support customized summaries.
The changelog for this project is maintained at this location.
If you still have questions, please consult our Frequently Asked Questions document.
If you would like to report an issue with the tool or the documentation, please file an issue using GitHub.
If you would like us to implement a feature that you believe is important, please file an issue using GitHub that includes what you want to add, why you want to add it, and why it is important. Please note that the issue will usually be the start of a conversation, so be ready for more questions.
If you would like to contribute to the project in a more substantial manner, please contact me at jack.de.winter at outlook.com.