On Jan 06 2017, Jussi Pakkanen <jpak...@gmail.com> wrote:
> On Fri, Jan 6, 2017 at 6:38 PM, Nikolaus Rath <Niko...@rath.org> wrote:
>
>> My first attempt was to simply declare the tests with test(), and have
>> them exit with status 77 if pre-requisites aren't met. However, this
>> means that Ninja will still attempt to run all the tests, and they will
>> all be skipped. So either I will get no error message at all, or I will
>> get an error message for every test.
>
> Why is this a bad thing? Skipped tests are presented in the statistics
> in their own column so they won't get mistaken for failed tests. This
> is also the approach taken by e.g. Wayland.
Mostly because *all* the tests would be skipped, so there'd be no point
in trying to run them. Furthermore, it wouldn't give the user any
indication of *why* everything has been skipped.
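For reference, the status-77 stub I had in mind looks roughly like this
(the prerequisite check is just a placeholder):

    #!/usr/bin/env python3
    # Standalone test stub: exiting with status 77 makes Meson report
    # the test as "skipped" rather than "failed".
    import sys

    def prerequisites_met():
        # Placeholder; the real check would probe for the built binaries.
        return False

    if not prerequisites_met():
        sys.exit(77)  # Meson's skip convention

    # ... actual test logic would go here ...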
That said, I just realized it wouldn't be a good idea to declare the
unit tests individually at all. While they can run on their own, they
are designed to be called from a different test runner
(pytest). Starting them separately will incur a huge performance
penalty, the output will be confusing (when run standalone, each test
considers itself to be a test suite with just one test), and I'd lose
many of the pytest features.
At first I thought I'd just declare a single "run pytest" test for
Meson, but that's not a good idea either: there is no progress reporting
(since Meson or Ninja captures the test output in a file), and the
pytest output isn't colored anymore (because it's not connected to a
terminal).
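(For what it's worth, the color loss is inherent: pytest only emits ANSI
codes when stdout is a terminal, i.e. roughly the equivalent of

    import sys
    use_color = sys.stdout.isatty()  # False when Meson/Ninja pipe the output

so capturing the output into a file turns it off.)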
So I think I'll just stick with the existing test suite and require
users to call 'python -m pytest' instead of 'ninja test'. The only
difficulty is that the tests are stored in the source directory, but the
binaries under test are in the build directory, so I have to pass one of
the two directories to the test runner explicitly...
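A sketch of how that could look (the "--build-dir" option name is
invented):

    # conftest.py
    import pytest

    def pytest_addoption(parser):
        # Hypothetical option; lets the tests locate the Meson build directory.
        parser.addoption("--build-dir", default="build",
                         help="path to the build directory with the binaries under test")

    @pytest.fixture
    def build_dir(request):
        return request.config.getoption("--build-dir")

Tests would then be invoked as 'python -m pytest --build-dir=/path/to/build'
from the source directory.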