Explaining failing examples - by showing which arguments (don't) matter #3411
Labels: `internals` (stuff that only Hypothesis devs should ever see), `legibility` (make errors helpful and Hypothesis grokable), `new-feature` (entirely novel capabilities or strategies)
Hypothesis has many features designed to help users find bugs - but helping users understand bugs is equally important! Our headline feature for that is shrinking, but I think we should treat minimal failing examples as a baseline[^1]. That's why I implemented basic fault-localization in explain mode, and want to take that further by generalizing failing examples.
One key insight here is that the feature should be UX-first, defined by the question "what output would help users understand why their test failed?"[^2]. The approach I've chosen amounts to:

1. for each argument in the minimal failing example, check whether the test still fails when that argument is replaced with other generated values; and
2. if so, print a comment like `# or any other generated value` next to each such argument.

Of these, the difficult part is modifying the conjecture internals for (2): in order to attach the comment to the right argument, we need to know which part of the underlying buffer corresponds to each argument to `@given` (i.e. `ConjectureData` internals). This approach is coarser-grained than the prior art (see #2192), but conversely can be used with data that does not match a context-free grammar. On the whole, I like it much more 🙂
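A toy model of the span-tracking this needs (hypothetical names and structure; the real `ConjectureData` internals are considerably more involved): while drawing values, record which slice of the underlying byte stream produced each `@given` argument, so the report can attach a comment to the right one.

```python
class SpanTrackingData:
    """Toy stand-in for buffer span tracking (hypothetical; not the
    real ConjectureData API)."""

    def __init__(self, buffer):
        self.buffer = buffer
        self.index = 0
        self.spans = {}  # argument name -> (start, end) offsets in the buffer

    def draw_bytes(self, name, n):
        # Record the slice of the byte stream consumed for this argument.
        start = self.index
        self.index += n
        self.spans[name] = (start, self.index)
        return self.buffer[start:self.index]

data = SpanTrackingData(bytes(range(8)))
x = data.draw_bytes("x", 4)
y = data.draw_bytes("y", 4)
assert data.spans == {"x": (0, 4), "y": (4, 8)}
```

With spans like these, mutating "everything except argument `x`" becomes a matter of holding `buffer[0:4]` fixed while regenerating the rest.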
[^1]: not least because the threshold problem can make failures look less important, e.g. https://github.com/HypothesisWorks/hypothesis/issues/2180
[^2]: rather than e.g. "what cool algorithmic or instrumentation trick could I pull?", as is considerably more common.