I'm having a similar problem, except with hypothesis-jsonschema.
I got slightly more descriptive output than you did, though. I used the command: pytest --hypothesis-show-statistics my_test.py
- during generate phase (385.67 seconds):
- Typical runtimes: 23-516 ms, ~ 100% in data generation
- 2 passing examples, 0 failing examples, 2997 invalid examples
- Events:
* 37.21%, Aborted test because unable to satisfy none().filter(lambda obj: all(v(obj) for v in validators))
* 36.95%, Aborted test because unable to satisfy text(min_size=1).filter(lambda obj: all(v(obj) for v in validators))
* 36.95%, Retried draw from text(min_size=1).filter(lambda obj: all(v(obj) for v in validators)) to satisfy filter
* 23.21%, Retried draw from text().filter(lambda s: s not in out) to satisfy filter
* 20.31%, Retried draw from text().filter(not_yet_in_unique_list) to satisfy filter
* 1.80%, Aborted test because unable to satisfy text().filter(lambda s: s not in out)
* 0.13%, Retried draw from sampled_from([***, ***]).filter(lambda s: s not in out) to satisfy filter
* 0.10%, Aborted test because unable to satisfy sampled_from([***, ***]).filter(lambda s: s not in out)
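For context, the test that produced those statistics is essentially just from_schema() plumbed into @given. A simplified sketch (the schema, property names, and assertion here are placeholders, not my real test):

```python
from hypothesis import given, settings
from hypothesis_jsonschema import from_schema

# Placeholder schema standing in for my real one.
SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "minLength": 1},
        "payload": {"type": "object"},
    },
    "required": ["name"],
}

@settings(max_examples=100)
@given(from_schema(SCHEMA))
def test_schema_instances(instance):
    # Real assertions elided; the sketch just checks we got a dict back.
    assert isinstance(instance, dict)
```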
From the error message, I can see the problem: somehow one of the nodes of my JSON Schema was compiled to the strategy none().filter(lambda obj: all(v(obj) for v in validators)), which fails generation about 37% of the time. I wonder why it isn't 100% of the time, since shouldn't none() always fail that filter?
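One way to poke at this outside pytest is to compile just the suspect part of the schema and ask it for an example directly. SUB_SCHEMA below is a placeholder, not my real sub-schema:

```python
from hypothesis_jsonschema import from_schema

# Stand-in for the part of my schema I suspect is compiling badly.
SUB_SCHEMA = {"type": "object", "additionalProperties": False}

strategy = from_schema(SUB_SCHEMA)
print(repr(strategy))  # the repr often shows the composed strategy tree
strategy.example()     # raises if the strategy genuinely can't produce anything
```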
All that's left to do is either handcraft a strategy, examine hypothesis-jsonschema in a debugger, or fall back to a less capable library like Faker for my data-generation needs.
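For the first option, a rough sketch of what handcrafting might look like: keep from_schema() for the parts that work and hand-write the part that compiles to none().filter(...). The property names and value strategies below are made up for illustration:

```python
from hypothesis import given
from hypothesis import strategies as st
from hypothesis_jsonschema import from_schema

# Handwritten strategy for the sub-object that from_schema() mishandles;
# keys and value strategies are placeholders for my actual schema.
problem_part = st.fixed_dictionaries(
    {"name": st.text(min_size=1)},
    optional={"tags": st.lists(st.text(), unique=True)},
)

# Everything else still comes from the schema.
rest = from_schema({"type": "object", "properties": {"id": {"type": "integer"}}})

# Merge the two pieces into one document strategy.
document = st.builds(lambda base, extra: {**base, **extra}, rest, problem_part)

@given(document)
def test_with_handcrafted_part(instance):
    assert isinstance(instance, dict)
```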