
Extend r.watershed test suite for differences between ram and seg versions #2482

Open · wants to merge 5 commits into main
Conversation

@mwort (Contributor) commented Jul 14, 2022

These tests are designed to give a comprehensive overview of whether the output of the in-memory (ram) and segmentation-library (seg) versions of r.watershed matches for different optional inputs. The module is run with both versions, using a random raster (n=10000) for each optional input. The min and max of the difference between the output rasters are compared, and a test passes if they do not exceed the following precisions/thresholds (open for suggestions):

```python
output_precision = {
    "accumulation": 1,
    "tci": 0.01,
    "spi": 0.01,
    "drainage": 0,
    "basin": 0,
    "stream": 0,
    "half_basin": 0,
    "length_slope": 0.01,
    "slope_steepness": 0.01,
}
```
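The threshold check described above can be sketched in plain Python. This is a minimal illustration, not the PR's actual test code; `within_precision` and its arguments are hypothetical names:

```python
# Thresholds copied from the PR description.
output_precision = {
    "accumulation": 1,
    "tci": 0.01,
    "spi": 0.01,
    "drainage": 0,
    "basin": 0,
    "stream": 0,
    "half_basin": 0,
    "length_slope": 0.01,
    "slope_steepness": 0.01,
}


def within_precision(output, diff_min, diff_max):
    """Pass if both the min and max of (ram - seg) stay within the
    output's threshold. A threshold of 0 demands an exact match."""
    threshold = output_precision[output]
    return abs(diff_min) <= threshold and abs(diff_max) <= threshold


# A 0.005/0.003 spread in tci is within its 0.01 threshold:
print(within_precision("tci", -0.005, 0.003))  # True
# Any nonzero difference in drainage fails its exact-match threshold:
print(within_precision("drainage", 0.0, 1.0))  # False
```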

The current state of matches, with the optional inputs in columns (🔴 marks a mismatch between the ram and seg output):

```
Output           required  flags=s  flags=4  depression  flow  disturbed_land  blocking  retention
accumulation     🔴
tci
spi
drainage         🔴 🔴
basin            🔴 🔴 🔴
stream           🔴 🔴 🔴
half_basin       🔴 🔴 🔴
length_slope     🔴 🔴 🔴 🔴 🔴 🔴 🔴
slope_steepness  🔴 🔴 🔴 🔴 🔴 🔴 🔴
```

I was not able to reproduce the mismatch in accumulation in the NC demo location reported by @HuidaeCho in #2222 with randomly assigned flow fractions (0-1), although other output does not match.

@wenzeslaus wenzeslaus added this to the 8.4.0 milestone Jul 26, 2022
@neteler neteler added the raster Related to raster data processing label Aug 28, 2022
@wenzeslaus wenzeslaus modified the milestones: 8.3.0, 8.4.0 Feb 10, 2023
```python
            + "\n"
        )

        self.assertTrue(all([all(p) for p in passes]), msg=msg)
```
Member comment:

I think this test looks good. The only thing I would change is the number of reported failures: it is currently a single failure from one AssertionError if any checks fail. It would be useful to report the number of individual failed checks (red circles) somewhere.
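One way to surface the individual failure count, as this comment suggests, is to count the failed checks before asserting and put the count into the assertion message. A hedged sketch with illustrative data (the real `passes` structure comes from the test, not from here):

```python
# Illustrative per-output pass/fail lists, one inner list per output raster.
passes = [[True, False, True], [True, True, False]]

# Count individual failed checks (the "red circles") across all outputs.
failed = sum(not p for row in passes for p in row)
total = sum(len(row) for row in passes)
msg = f"{failed} of {total} ram/seg comparisons exceeded their thresholds"

# Inside the TestCase this could become:
#     self.assertEqual(failed, 0, msg=msg)
print(failed, total)  # 2 6
```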

@github-actions github-actions bot added Python Related code is in Python module tests Related to Test Suite labels Mar 21, 2024
@wenzeslaus (Member) left a comment:

I like the test and the output; however, the main issue is that the test currently fails, and we can only run tests automatically if they pass when everything is all right. Maybe there is a known number of expected failures here to test against?
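The expected-failures idea in this comment could be sketched as a baseline set of known mismatches: the test stays green while the known failures persist, flags regressions, and flags baseline entries that start passing (so the baseline gets pruned). The baseline entries below are illustrative, not an authoritative list:

```python
# Hypothetical baseline of known (output, input) mismatches.
EXPECTED_FAILURES = {
    ("accumulation", "required"),
    ("drainage", "depression"),
}


def compare_to_baseline(observed_failures):
    """Return (regressions, fixed): failures not in the baseline, and
    baseline entries that now pass and should be removed from it."""
    regressions = observed_failures - EXPECTED_FAILURES
    fixed = EXPECTED_FAILURES - observed_failures
    return regressions, fixed


observed = {("accumulation", "required"), ("basin", "flow")}
regressions, fixed = compare_to_baseline(observed)
print(sorted(regressions))  # [('basin', 'flow')]
print(sorted(fixed))        # [('drainage', 'depression')]
```

The test would then assert that `regressions` is empty, which keeps CI green until the underlying r.watershed mismatches are actually fixed.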

@wenzeslaus wenzeslaus modified the milestones: 8.4.0, Future Apr 27, 2024
Labels: module, Python (Related code is in Python), raster (Related to raster data processing), tests (Related to Test Suite)
5 participants