Running the tests may reveal regressions introduced by code changes relative to existing, assumed-correct output. Here, we run the newly built SWAT+ model executable against known scenario outputs and compare the results; after a code change, the results should not differ significantly. Each SWAT+ unit test consists of one reference scenario run, located in the `data` directory. The repository's `test` directory contains the Python test scripts. Testing is performed through `cmake` and `ctest` using the following process:
- The `data` folder contains the SWAT+ reference scenarios. Each scenario has input data and valid output data. The output data is considered *golden*, meaning it represents an accepted, correct output of the model.
- The test-related entries in `CMakeLists.txt` are shown below. Python is required. Several variables point to resources: the script that performs the check (`check.py`), the path to the executable, and the paths to the reference data set and the test data directory. Relative and absolute tolerances for value deviations are specified via `rel_err` and `abs_err`. By default, the relative error is 1% and the absolute error is 1e-8. The individual tests are registered with the `add_test` commands.
- Build the project with `cmake -B build`. Two tests are generated, `Ames_sub1` and `Ithaca_sub6`. Additional scenarios need extra `add_test` lines, one for each scenario.
- Next, the SWAT+ executable must be created; see the sections below.
- The tests can now be run using `ctest`.
- When a single test is executed, e.g. `Ames_sub1`, `check.py` is called and performs the following steps:
  - The scenario folder `Ames_sub1` is copied from the data directory into the build folder as `build/Ames_sub1`.
  - The SWAT model is executed in the `build/Ames_sub1` folder and overwrites any previous outputs.
  - `check.py` reads the file `build/Ames_sub1/.testfiles.txt`, which contains a list of output file names, e.g. `wb.txt` and `soc.txt` on separate lines.
  - Each output file name in this list is processed and the corresponding files are compared, e.g. `data/Ames_sub1/wb.txt` <-> `build/Ames_sub1/wb.txt`, and `data/Ames_sub1/soc.txt` <-> `build/Ames_sub1/soc.txt`.
  - For each pair of files, only floating-point values that occur on the same line and in the same column are compared. If the difference between two floats (`float1` and `float2`) exceeds both the absolute and the relative tolerance, it is counted as a failure and the error is captured in the test output. `float1` and `float2` are considered equal if:

    $$ abs(float1 - float2) <= abs_err + rel_err * abs(float2) $$

- The test results are summarized after each scenario: the number of differing values and the maximum relative and absolute errors are printed as the test result. A scenario test fails if any pair of floats is not equal according to the equation above.
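The comparison rule above can be sketched in Python. This is a minimal illustration with hypothetical helper names, not the actual `check.py` code:

```python
def floats_equal(f1: float, f2: float,
                 abs_err: float = 1e-8, rel_err: float = 0.01) -> bool:
    """True if f1 and f2 agree within the combined absolute/relative tolerance."""
    return abs(f1 - f2) <= abs_err + rel_err * abs(f2)


def compare_lines(ref_line: str, test_line: str,
                  abs_err: float = 1e-8, rel_err: float = 0.01):
    """Compare floats at matching columns of two whitespace-separated lines.

    Returns a list of (column, ref_value, test_value) tuples for each failure;
    non-numeric tokens are skipped, mirroring the "only floats are compared" rule.
    """
    failures = []
    for col, (a, b) in enumerate(zip(ref_line.split(), test_line.split())):
        try:
            f1, f2 = float(a), float(b)
        except ValueError:
            continue  # token is not a number, e.g. a column header
        if not floats_equal(f1, f2, abs_err, rel_err):
            failures.append((col, f1, f2))
    return failures
```

With the default tolerances, `floats_equal(100.0, 100.5)` passes (0.5 is within 1% of 100.5 plus 1e-8), while `floats_equal(100.0, 90.0)` fails.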
```cmake
#############################################################################
# Testing
find_package(Python REQUIRED)
set(check_py "${PROJECT_SOURCE_DIR}/test/check.py")
set(exe_path "${PROJECT_BINARY_DIR}/${SWATPLUS_EXE}")
set(test_dir "${PROJECT_BINARY_DIR}/data")
set(ref_dir "${PROJECT_SOURCE_DIR}/data")
# error tolerances
set(rel_err "0.01")
set(abs_err "1e-8")
add_test(Ithaca_sub6 python3 ${check_py} ${exe_path} ${ref_dir}/Ithaca_sub6 ${test_dir} ${abs_err} ${rel_err})
add_test(Ames_sub1 python3 ${check_py} ${exe_path} ${ref_dir}/Ames_sub1 ${test_dir} ${abs_err} ${rel_err})
```
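Registering a further scenario then amounts to one more `add_test` line with the same argument pattern. For example, for a hypothetical `Walnut_sub3` reference scenario placed under `data/`:

```cmake
# Hypothetical scenario: requires a data/Walnut_sub3 folder with inputs,
# golden outputs, and a .testfiles.txt listing the files to compare
add_test(Walnut_sub3 python3 ${check_py} ${exe_path} ${ref_dir}/Walnut_sub3 ${test_dir} ${abs_err} ${rel_err})
```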
Tests are run using the `ctest` command, which is part of the `cmake` installation. Alternatively, you can use the command `make test` within the `build` directory.
```shell
$ cmake -B build       # -> Generate the build files
$ cmake --build build  # -> Build the SWAT+ executable
$ cd build             # -> Change into the build folder
$ ctest                # -> Test all scenarios using ctest
```
`ctest` prints a test summary as output. The detailed standard output of each test is also stored under `build/Testing`. You can run all tests, a selected subset of tests, or just an individual one; `ctest` is quite powerful and flexible.
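For example, the standard `ctest` selection and output flags can be used to inspect and rerun subsets of tests (run inside the `build` directory):

```shell
$ ctest -N                   # list the available tests without running them
$ ctest -R Ames              # run tests whose name matches the regex "Ames"
$ ctest -R '^Ames_sub1$' -V  # run a single test with verbose output
$ ctest --output-on-failure  # show a test's output only when it fails
```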