This repository has been archived by the owner on May 31, 2020. It is now read-only.

Hypothesis testing of voc #580

Open
mistermocha opened this issue May 26, 2017 · 1 comment

Comments

@mistermocha
Contributor

Try using hypothesis.works for testing when appropriate. Documentation at https://hypothesis.readthedocs.io/en/master/index.html

mistermocha pushed a commit to mistermocha/voc that referenced this issue May 26, 2017
@Zac-HD

Zac-HD commented Aug 10, 2017

Hi! I'm a maintainer of Hypothesis, and discussed using it to test VOC and Batavia with Russell (@freakboy3742) at the PyConAU sprints.

I've got more than enough to do working on Hypothesis itself, but would be delighted to consult, mentor, teach, or assist anyone who wants to use it to test beeware things. Just @-mention me, and I'll answer!


The idea is that instead of checking predefined examples in tests/utils.py:SAMPLE_DATA, you would pick the right Hypothesis strategy and get examples from that (using a test decorated with @given, so examples are reproducible and minimize correctly).
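For instance, a test using `@given` might look like this. This is a hedged sketch: the property shown (round-tripping through `repr`/`eval` for integers) is just an illustration of the pattern, not an actual VOC test.

```python
from hypothesis import given
from hypothesis.strategies import integers

# Hypothesis generates the examples; @given makes failures
# reproducible and shrinks them to a minimal counterexample.
@given(integers())
def test_repr_roundtrips(x):
    assert eval(repr(x)) == x

# Calling the decorated function runs the property over many
# generated inputs (normally a test runner does this for you).
test_repr_roundtrips()
```

The key difference from `SAMPLE_DATA` is that the inputs are drawn fresh from a strategy on each run, so edge cases nobody thought to list get exercised automatically.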

As a quick-and-dirty demo, we could also just (temporarily) replace SAMPLE_DATA with a dataset drawn from Hypothesis:

from hypothesis.strategies import from_type

SAMPLE_DATA = ...  # current definition
generated = {
    # Only keys that name a type can be turned into a strategy.
    k: [from_type(eval(k)).example() for _ in range(100)]
    for k in SAMPLE_DATA
    if isinstance(eval(k), type)
}
SAMPLE_DATA = {
    k: sorted(set(repr(x) for x in v), key=lambda x: (len(x), x))
    for k, v in generated.items()
}

This wouldn't show minimal examples, but it would probably turn up a bunch of unicode issues, and it wouldn't require any changes to the existing tests.
