Hi,
I would like to run tests where I generate some (complex) input data with custom Hypothesis strategies, but additionally I need to systematically check all possible values of some other data. (Context: I want to test the behaviour of semi-automatically generated functions, and I want to explicitly test each of these functions.)
It seems I can write such tests by combining Hypothesis' @given() decorator with pytest's @mark.parametrize() decorator: the Hypothesis strategies provide the randomised data, while @mark.parametrize() systematically supplies the other values. However, my tests so far do not quite behave as I want them to. Below is a dummy test using this combined approach, with simpler test data and no actual test logic.
import pytest
import hypothesis as hyp
import hypothesis.strategies as hst

@hyp.given(
    ys=hst.lists(hst.floats()),
    seed=hst.integers(min_value=1, max_value=10000))
@pytest.mark.parametrize("x", range(3))
@hyp.settings(max_examples=10)
def test_dummy(ys, seed, x):
    # No actual test logic, for simplicity
    assert True
When running this test with pytest -vvs, I get the logging output shown at the bottom of this message. As you can see there, pytest systematically tries all the given values for x in order, and the values from Hypothesis are randomised, as intended.
OK, here finally are my questions/problems:
1. Is there a way to get more varied test data from Hypothesis? As you can see from the logging below, the data from Hypothesis contains many repeated value combinations (here, ys=[0.0], seed=1), and I see the same in my actual tests. I assume that no shrinking is going on here, since none of these tests failed. And even if some shrinking were involved, I would still not expect exactly the same test inputs to be tried over and over.
2. Currently my test effectively has an outer loop over the data from mark.parametrize() and an inner loop over the data from Hypothesis. Is it possible to do this the other way round? That would be more efficient for me, because the values I pass to mark.parametrize() already exist, whereas my custom Hypothesis strategies are more complex and comparatively expensive to generate from.
I already tried swapping the order of the decorators and of the test function parameters, but surprisingly that has no effect; the sketch below shows roughly what I tried.
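For reference, the swapped variant looked roughly like this (the exact combination of decorator and parameter order I tested may have differed slightly); pytest still treated x as the outer loop:

import pytest
import hypothesis as hyp
import hypothesis.strategies as hst

# Swapped variant: parametrize outermost, @given closest to the function,
# and x moved to the front of the signature. The observed loop order
# (x outermost, Hypothesis data innermost) did not change.
@pytest.mark.parametrize("x", range(3))
@hyp.settings(max_examples=10)
@hyp.given(
    ys=hst.lists(hst.floats()),
    seed=hst.integers(min_value=1, max_value=10000))
def test_dummy(x, ys, seed):
    assert True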
# Test logging
$ pytest my/test/file.py -vvs
...
my/test/file.py::test_dummy[0] Trying example: test_dummy(
ys=[], seed=1, x=0,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=0,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=0,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=0,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=0,
)
Trying example: test_dummy(
ys=[], seed=3068, x=0,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=0,
)
Trying example: test_dummy(
ys=[-2.00001], seed=6794, x=0,
)
Trying example: test_dummy(
ys=[], seed=1, x=0,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=0,
)
PASSED
my/test/file.py::test_dummy[1] Trying example: test_dummy(
ys=[], seed=1, x=1,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=1,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=1,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=1,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=1,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=1,
)
Trying example: test_dummy(
ys=[-2.2250738585072014e-308, -2.925800531803837e-308, inf, nan, nan],
seed=6971,
x=1,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=1,
)
Trying example: test_dummy(
ys=[0.5, -inf, 2.078615020738899e-106, nan], seed=3829, x=1,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=1,
)
PASSED
my/test/file.py::test_dummy[2] Trying example: test_dummy(
ys=[], seed=1, x=2,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=2,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=2,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=2,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=2,
)
Trying example: test_dummy(
ys=[], seed=240, x=2,
)
Trying example: test_dummy(
ys=[], seed=1, x=2,
)
Trying example: test_dummy(
ys=[], seed=1, x=2,
)
Trying example: test_dummy(
ys=[], seed=1, x=2,
)
Trying example: test_dummy(
ys=[0.0], seed=1, x=2,
)
PASSED
...