[PATCH 1/2] kunit: tool: Do not error on tests without test plans


David Gow

Oct 21, 2021, 2:28:27 AM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
The (K)TAP spec encourages test output to begin with a 'test plan': a
count of the number of tests being run of the form:
1..n

However, some test suites might not know the number of subtests in
advance (for example, KUnit's parameterised tests use a generator
function). In this case, it's not possible to print the test plan in
advance.
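For illustration, plan-line detection can be sketched in Python. The regex here is an approximation of kunit_tool's `TEST_PLAN` pattern, not the exact one in kunit_parser.py:

```python
import re

# Approximation of kunit_parser's TEST_PLAN pattern: a (K)TAP test
# plan line such as "1..4", capturing the expected subtest count.
TEST_PLAN = re.compile(r'^\s*1\.\.([0-9]+)')

def parse_plan(line):
    """Return the expected subtest count, or None if there is no plan."""
    match = TEST_PLAN.match(line)
    if not match:
        # With this change, a missing plan is no longer an error;
        # the subtest count is simply unknown up front.
        return None
    return int(match.group(1))
```

A `None` here just means the number of subtests is discovered as results are parsed, which is exactly the parameterised-test case.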

kunit_tool already parses test output which doesn't contain a plan, but
reports an error. Since we want to use nested subtests with KUnit
parameterised tests, remove this error.

Signed-off-by: David Gow <davi...@google.com>
---
tools/testing/kunit/kunit_parser.py | 5 ++---
tools/testing/kunit/kunit_tool_test.py | 5 ++++-
2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 3355196d0515..50ded55c168c 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -340,8 +340,8 @@ def parse_test_plan(lines: LineStream, test: Test) -> bool:
"""
Parses test plan line and stores the expected number of subtests in
test object. Reports an error if expected count is 0.
- Returns False and reports missing test plan error if fails to parse
- test plan.
+ Returns False and sets expected_count to None if there is no valid test
+ plan.

Accepted format:
- '1..[number of subtests]'
@@ -356,7 +356,6 @@ def parse_test_plan(lines: LineStream, test: Test) -> bool:
match = TEST_PLAN.match(lines.peek())
if not match:
test.expected_count = None
- test.add_error('missing plan line!')
return False
test.log.append(lines.pop())
expected_count = int(match.group(1))
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 9c4126731457..bc8793145713 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -191,7 +191,10 @@ class KUnitParserTest(unittest.TestCase):
result = kunit_parser.parse_run_tests(
kunit_parser.extract_tap_lines(
file.readlines()))
- self.assertEqual(2, result.test.counts.errors)
+ # A missing test plan is not an error.
+ self.assertEqual(0, result.test.counts.errors)
+ # All tests should be accounted for.
+ self.assertEqual(10, result.test.counts.total())
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
--
2.33.0.1079.g6e70778dc9-goog

David Gow

Oct 21, 2021, 2:28:29 AM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
Currently, the results for individual parameters in a parameterised test
are simply output as (K)TAP diagnostic lines. However, the plan was
always[1] to make these (K)TAP subtests when kunit_tool supported them.

With [2], these are now supported. (v5 will print out an error about the
missing plan line, but this can safely be ignored, and will hopefully be
changed). As a result, individual test parameter results are parsed,
displayed in the formatted results, and counted for test statistics.

[1]: https://lore.kernel.org/linux-kselftest/CABVgOSnJAgWvTTABaF082LuY...@mail.gmail.com/
[2]: https://lore.kernel.org/linux-kselftest/20211006001447....@google.com/

Signed-off-by: David Gow <davi...@google.com>
---

Note that this was previously posted as:
https://lore.kernel.org/all/20211006071112.2...@google.com/

No changes have been made, save for a trivial rebase on the current
kselftest/kunit branch.

lib/kunit/test.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 3bd741e50a2d..85265f9a66a1 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -508,6 +508,8 @@ int kunit_run_tests(struct kunit_suite *suite)
/* Get initial param. */
param_desc[0] = '\0';
test.param_value = test_case->generate_params(NULL, param_desc);
+ kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+ "# Subtest: %s", test_case->name);
}

do {
@@ -520,9 +522,8 @@ int kunit_run_tests(struct kunit_suite *suite)
}

kunit_log(KERN_INFO, &test,
- KUNIT_SUBTEST_INDENT
- "# %s: %s %d - %s",
- test_case->name,
+ KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+ "%s %d - %s",
kunit_status_to_ok_not_ok(test.status),
test.param_index + 1, param_desc);

--
2.33.0.1079.g6e70778dc9-goog

Daniel Latypov

Oct 21, 2021, 9:15:44 PM
to David Gow, Brendan Higgins, Rae Moar, Shuah Khan, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
On Wed, Oct 20, 2021 at 11:28 PM David Gow <davi...@google.com> wrote:
>
> Currently, the results for individual parameters in a parameterised test
> are simply output as (K)TAP diagnostic lines. However, the plan was
> always[1] to make these (K)TAP subtests when kunit_tool supported them.
>
> With [2], these are now supported. (v5 will print out an error about the
> missing plan line, but this can safely be ignored, and will hopefully be

Should this commit description be updated?

> changed). As a result, individual test parameter results are parsed,
> displayed in the formatted results, and counted for test statistics.

This works for me.

One concern I have for the future is that showing all the parameters by
default might become too verbose.
Should there eventually be a verbosity/test-level flag that controls
how deep we go?
We could elect to only print FAILED subtests after we hit the depth limit.
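A depth limit along those lines could be sketched as follows; the `Test` shape, the `max_depth` parameter, and the status strings are hypothetical, not kunit_tool's actual types:

```python
class Test:
    """Minimal stand-in for a parsed test node (hypothetical shape)."""
    def __init__(self, name, status, subtests=()):
        self.name, self.status, self.subtests = name, status, list(subtests)

def format_results(test, depth=0, max_depth=1):
    """Render the result tree; below max_depth, only surface failures."""
    lines = []
    if depth <= max_depth or test.status == 'FAILED':
        lines.append('  ' * depth + f'[{test.status}] {test.name}')
    for sub in test.subtests:
        lines.extend(format_results(sub, depth + 1, max_depth))
    return lines

# With max_depth=1, passing parameters are suppressed but a failing
# parameter still shows up.
suite = Test('fat_test', 'PASSED', [
    Test('fat_time_fat2unix_test', 'PASSED', [
        Test('Leap Day / Year (1996-02-29 00:00:00)', 'PASSED'),
        Test('Year 2100 not leap year (2100-03-01 00:00:00)', 'FAILED'),
    ]),
])
print('\n'.join(format_results(suite)))
```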

Testing this out with:
$ ./tools/testing/kunit/kunit.py --kunitconfig=fs/fat

Before:
[17:55:48] Starting KUnit Kernel (1/1)...
[17:55:48] ============================================================
[17:55:51] ================== fat_test (3 subtests) ===================
[17:55:51] [PASSED] fat_checksum_test
[17:55:51] [PASSED] fat_time_fat2unix_test
[17:55:51] [PASSED] fat_time_unix2fat_test
[17:55:51] ==================== [PASSED] fat_test =====================
[17:55:51] ============================================================
[17:55:51] Testing complete. Passed: 3, Failed: 0, Crashed: 0,
Skipped: 0, Errors: 0
[17:55:51] Elapsed time: 7.784s total, 0.001s configuring, 4.790s
building, 2.877s running

[17:56:22] Starting KUnit Kernel (1/1)...
[17:56:22] ============================================================
[17:56:25] ================== fat_test (3 subtests) ===================
[17:56:25] [PASSED] fat_checksum_test
[17:56:25] ================== fat_time_fat2unix_test ==================
[17:56:25] [PASSED] Earliest possible UTC (1980-01-01 00:00:00)
[17:56:25] [PASSED] Latest possible UTC (2107-12-31 23:59:58)
[17:56:25] [PASSED] Earliest possible (UTC-11) (== 1979-12-31 13:00:00 UTC)
[17:56:25] [PASSED] Latest possible (UTC+11) (== 2108-01-01 10:59:58 UTC)
[17:56:25] [PASSED] Leap Day / Year (1996-02-29 00:00:00)
[17:56:25] [PASSED] Year 2000 is leap year (2000-02-29 00:00:00)
[17:56:25] [PASSED] Year 2100 not leap year (2100-03-01 00:00:00)
[17:56:25] [PASSED] Leap year + timezone UTC+1 (== 2004-02-29 00:30:00 UTC)
[17:56:25] [PASSED] Leap year + timezone UTC-1 (== 2004-02-29 23:30:00 UTC)
[17:56:25] [PASSED] VFAT odd-second resolution (1999-12-31 23:59:59)
[17:56:25] [PASSED] VFAT 10ms resolution (1980-01-01 00:00:00:0010)
[17:56:25] ============= [PASSED] fat_time_fat2unix_test ==============
[17:56:25] ================== fat_time_unix2fat_test ==================
[17:56:25] [PASSED] Earliest possible UTC (1980-01-01 00:00:00)
[17:56:25] [PASSED] Latest possible UTC (2107-12-31 23:59:58)
[17:56:25] [PASSED] Earliest possible (UTC-11) (== 1979-12-31 13:00:00 UTC)
[17:56:25] [PASSED] Latest possible (UTC+11) (== 2108-01-01 10:59:58 UTC)
[17:56:25] [PASSED] Leap Day / Year (1996-02-29 00:00:00)
[17:56:25] [PASSED] Year 2000 is leap year (2000-02-29 00:00:00)
[17:56:25] [PASSED] Year 2100 not leap year (2100-03-01 00:00:00)
[17:56:25] [PASSED] Leap year + timezone UTC+1 (== 2004-02-29 00:30:00 UTC)
[17:56:25] [PASSED] Leap year + timezone UTC-1 (== 2004-02-29 23:30:00 UTC)
[17:56:25] [PASSED] VFAT odd-second resolution (1999-12-31 23:59:59)
[17:56:25] [PASSED] VFAT 10ms resolution (1980-01-01 00:00:00:0010)
[17:56:25] ============= [PASSED] fat_time_unix2fat_test ==============
[17:56:25] ==================== [PASSED] fat_test =====================
[17:56:25] ============================================================
[17:56:25] Testing complete. Passed: 23, Failed: 0, Crashed: 0,
Skipped: 0, Errors: 0
[17:56:25] Elapsed time: 7.733s total, 0.001s configuring, 4.740s
building, 2.915s running

Looks similar when run with --kunitconfig=fs/ext4.

This "inverted" nesting of PASSED looks a bit "wrong" at first.

[17:56:25] [PASSED] VFAT 10ms resolution (1980-01-01 00:00:00:0010)
[17:56:25] ============= [PASSED] fat_time_unix2fat_test ==============
[17:56:25] ==================== [PASSED] fat_test =====================

But I know it's so that we can show results as incrementally as
possible, so I'm fine with it.
(I imagine our users won't necessarily make that connection, however.)
Signed-off-by: Daniel Latypov <dlat...@google.com>

Daniel Latypov

Oct 21, 2021, 9:29:55 PM
to David Gow, Brendan Higgins, Rae Moar, Shuah Khan, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
On Wed, Oct 20, 2021 at 11:28 PM David Gow <davi...@google.com> wrote:
>
This works well, but there's an edge case.

This patch means we no longer print an error when there are no test
cases in a subtest.
We relied on a check just a bit lower in this function.

Consider

$ ./tools/testing/kunit/kunit.py parse <<EOF
TAP version 14
1..1
# Subtest: suite
1..1
# Subtest: case
ok 1 - case
ok 1 - suite
EOF

This produces the following output (timestamps removed)

============================================================
==================== suite (1 subtest) =====================
=========================== case ===========================
====================== [PASSED] case =======================
====================== [PASSED] suite ======================
============================================================

Should we surface some sort of error here?

David Gow

Oct 22, 2021, 2:10:34 AM
to Daniel Latypov, Brendan Higgins, Rae Moar, Shuah Khan, KUnit Development, open list:KERNEL SELFTEST FRAMEWORK, Linux Kernel Mailing List
I thought about this a bit (and started prototyping it), and think the
answer is probably "no" (or, perhaps, "optionally"). Largely because I
think it'd be technically valid to have, e.g., a parameterised test
whose generator function can legitimately provide zero subtests. And
while that's probably worth warning about if it's the only test
running, if you're trying to run all tests, and one random subtest of
a test of a suite has no subtests, that seems like it'd be more
annoying to error on than anything else.

That being said, I'm not opposed to implementing it as an option, or
at least having the test status set to NO_TESTS. The implementation
I've experimented with basically moves the check to "parse_test", and
errors if the number of subtests is 0 after parsing, if parent_test is
true (or main, but my rough plan was to make main imply parent_test,
and adjust the various conditions to match). I haven't looked into
exactly how this is bubbled up yet, but I'd be okay with having an
error if there are no tests run at all.
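The approach described above — erroring only when a parent test finishes parsing with zero subtests — could be sketched like this (the function name and argument shapes are hypothetical, not the actual kunit_parser code):

```python
def check_subtest_count(subtests, parent_test):
    """Return parser errors for a parent test that parsed no subtests.

    Only tests that claim to have subtests (parent_test is True) are
    flagged; an ordinary result line with no children is fine.
    """
    if parent_test and not subtests:
        return ['0 tests run!']
    return []
```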

I'll keep playing with this anyway: it's definitely a bit more of a
minefield than I'd originally thought. :-)

-- David

David Gow

Oct 22, 2021, 2:16:27 AM
to Daniel Latypov, Brendan Higgins, Rae Moar, Shuah Khan, KUnit Development, open list:KERNEL SELFTEST FRAMEWORK, Linux Kernel Mailing List
On Fri, Oct 22, 2021 at 9:15 AM Daniel Latypov <dlat...@google.com> wrote:
>
> On Wed, Oct 20, 2021 at 11:28 PM David Gow <davi...@google.com> wrote:
> >
> > Currently, the results for individual parameters in a parameterised test
> > are simply output as (K)TAP diagnostic lines. However, the plan was
> > always[1] to make these (K)TAP subtests when kunit_tool supported them.
> >
> > With [2], these are now supported. (v5 will print out an error about the
> > missing plan line, but this can safely be ignored, and will hopefully be
>
> Should this commit description be updated?
>

Whoops, yup. I didn't want to change anything without making this a
v2, but I'll definitely update it now.

> > changed). As a result, individual test parameter results are parsed,
> > displayed in the formatted results, and counted for test statistics.
>
> This works for me.
>
> One concern I have for the future is if showing all the parameters by
> default might become too verbose?
> Should there eventually be a verbosity/test-level flag that controls
> how deep we go?
> We could elect to only print FAILED subtests after we hit the depth limit.

Totally agree with this. A --depth option is definitely going to
become necessary here, and I think that printing FAILED subtests after
that limit is sensible default behaviour for it.
Yeah, this is definitely something for which there's no "perfect" way
of handling it. The fact that the number of '=' signs is based on the
length of the name means that even that might not look consistent.
I'm sure there are things we could experiment with to make this
clearer, e.g. indenting or swapping out the '=' for '-' on subtests
(though there's definitely a limit to how deep we could go with
something like that).
(Was this supposed to be a Tested-by or Reviewed-by or something?)

Daniel Latypov

Oct 22, 2021, 6:29:34 PM
to David Gow, Brendan Higgins, Rae Moar, Shuah Khan, KUnit Development, open list:KERNEL SELFTEST FRAMEWORK, Linux Kernel Mailing List
To be clear, I don't think we need to do anything about it at this moment.
Just noting that it might cause confusion (and if it causes enough
later on, then maybe we should revisit it).

>
> > >
> > > [1]: https://lore.kernel.org/linux-kselftest/CABVgOSnJAgWvTTABaF082LuY...@mail.gmail.com/
> > > [2]: https://lore.kernel.org/linux-kselftest/20211006001447....@google.com/
> > >
> > > Signed-off-by: David Gow <davi...@google.com>
> >
> > Signed-off-by: Daniel Latypov <dlat...@google.com>
> >
>
> (Was this supposed to be a Tested-by or Reviewed-by or something?)

Oops, muscle memory kicked in since I had just typed a Signed-off-by
recently....

Reviewed-by: Daniel Latypov <dlat...@google.com>

Daniel Latypov

Oct 22, 2021, 6:42:02 PM
to David Gow, Brendan Higgins, Rae Moar, Shuah Khan, KUnit Development, open list:KERNEL SELFTEST FRAMEWORK, Linux Kernel Mailing List
That's the question. Should we report PASSED in that case as we do now?

Let's consider parameterised tests, our only current example in KUnit.

I think in most cases, it's a bug that if we got 0 cases and we should
let the user know somehow.
Should it be an error/warning? Maybe not, but wouldn't it be better to
report SKIPPED?
(This would require a change in KUnit on the kernel side, I'm not
suggesting we do this in the parser)

David Gow

Oct 22, 2021, 8:25:18 PM
to Daniel Latypov, Brendan Higgins, Rae Moar, Shuah Khan, KUnit Development, open list:KERNEL SELFTEST FRAMEWORK, Linux Kernel Mailing List
Yeah: there are two sort-of separable decisions here:
1) What result should a test with no subtests return?
2) Do we want to trigger any other errors/warnings.

I think the answer to 1 is that kunit_tool should report the result
printed in the KTAP output. I agree that, for parameterised tests,
though, that SKIPPED makes more sense than PASSED. (kunit_tool has a
separate NO_TESTS result, which we could maybe try to generate and
handle explicitly. I think we might as well leave that for the "no
tests run at all" case for now.)

For 2, I feel that this definitely should count as a "warning", but
all we have at the moment are "errors", which I feel is probably a bit
too strong a term for this. Given errors don't actually halt parsing,
I'm okay with generating them in kunit_tool in this case, but I'd
probably slightly prefer to leave it with SKIPPED, and maybe add a
warning later.

Daniel Latypov

Oct 25, 2021, 11:21:17 AM
to David Gow, Brendan Higgins, Rae Moar, Shuah Khan, KUnit Development, open list:KERNEL SELFTEST FRAMEWORK, Linux Kernel Mailing List
Actually, when I tried to pass in an empty parameter array, I get a segfault.
So I guess we *do* let the user know somehow :)

The root cause: we call test_case->run_case(test), but the
test->param_value == NULL.
So the test code will segfault whenever it tries to read from param_value.

A hacky fix:

diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 85265f9a66a1..e55f842ae355 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -513,6 +513,8 @@ int kunit_run_tests(struct kunit_suite *suite)
}

do {
+ if (test_case->generate_params && !test.param_value)
+ break; // there were no parameters generated!
kunit_run_case_catch_errors(suite, test_case, &test);

if (test_case->generate_params) {

> Should it be an error/warning? Maybe not, but wouldn't it be better to
> report SKIPPED?
> (This would require a change in KUnit on the kernel side, I'm not
> suggesting we do this in the parser)

Being a bit more concrete, I was originally thinking of the following:

diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 85265f9a66a1..3f7141a72308 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -537,6 +537,9 @@ int kunit_run_tests(struct kunit_suite *suite)

} while (test.param_value);

+ if (param_stats.total == 0)
+ test_case->status = KUNIT_SKIPPED;
+
kunit_print_test_stats(&test, param_stats);

kunit_print_ok_not_ok(&test, true, test_case->status,

But tacking onto the hacky fix above, it could look like

diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 85265f9a66a1..a2d93b44ef88 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -513,6 +513,13 @@ int kunit_run_tests(struct kunit_suite *suite)
}

do {
+ if (test_case->generate_params && !test.param_value) {
+ strncpy(test.status_comment, "No test parameters generated",
+ sizeof(test.status_comment));
+ test_case->status = KUNIT_SKIPPED;
+ break;
+ }
+
kunit_run_case_catch_errors(suite, test_case, &test);

if (test_case->generate_params) {

Brendan Higgins

Oct 25, 2021, 5:14:35 PM
to David Gow, Daniel Latypov, Rae Moar, Shuah Khan, KUnit Development, open list:KERNEL SELFTEST FRAMEWORK, Linux Kernel Mailing List
I am OK marking it as SKIPPED, but I like the idea of promoting it to
a warning in a future change.

Completely ignoring an empty test suite seems wrong, especially when
we *do* complain when there *is* a test plan, and not all test cases
are accounted for.

My 2c.

David Gow

Oct 26, 2021, 9:37:11 PM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org

David Gow

unread,
Oct 26, 2021, 9:37:13 PM10/26/21
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
It's possible for a test to have a subtest header, but zero valid
subtests. We used to error on this if the test plan had no subtests
listed, but it's possible to have subtests without a test plan (indeed,
this is how parameterised tests work).

Tests with 0 subtests now have the result NO_TESTS, and will report an
error (which does not halt test execution, but is printed in a scary red
colour and is noted in the results summary).

Signed-off-by: David Gow <davi...@google.com>
---
tools/testing/kunit/kunit_parser.py | 14 +++++++++-----
tools/testing/kunit/kunit_tool_test.py | 9 +++++++++
.../test_is_test_passed-no_tests_no_plan.log | 7 +++++++
3 files changed, 25 insertions(+), 5 deletions(-)
create mode 100644 tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log

diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 50ded55c168c..3a838423c381 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -360,9 +360,6 @@ def parse_test_plan(lines: LineStream, test: Test) -> bool:
test.log.append(lines.pop())
expected_count = int(match.group(1))
test.expected_count = expected_count
- if expected_count == 0:
- test.status = TestStatus.NO_TESTS
- test.add_error('0 tests run!')
return True

TEST_RESULT = re.compile(r'^(ok|not ok) ([0-9]+) (- )?([^#]*)( # .*)?$')
@@ -731,6 +728,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
# test plan
test.name = "main"
parse_test_plan(lines, test)
+ parent_test = True
else:
# If KTAP/TAP header is not found, test must be subtest
# header or test result line so parse attempt to parser
@@ -744,7 +742,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
expected_count = test.expected_count
subtests = []
test_num = 1
- while expected_count is None or test_num <= expected_count:
+ while parent_test and (expected_count is None or test_num <= expected_count):
# Loop to parse any subtests.
# Break after parsing expected number of tests or
# if expected number of tests is unknown break when test
@@ -779,9 +777,15 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
parse_test_result(lines, test, expected_num)
else:
test.add_error('missing subtest result line!')
+
+ # Check for there being no tests
+ if parent_test and len(subtests) == 0:
+ test.status = TestStatus.NO_TESTS
+ test.add_error('0 tests run!')
+
# Add statuses to TestCounts attribute in Test object
bubble_up_test_results(test)
- if parent_test:
+ if parent_test and not main:
# If test has subtests and is not the main test object, print
# footer.
print_test_footer(test)
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index bc8793145713..c59fe0777387 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -208,6 +208,15 @@ class KUnitParserTest(unittest.TestCase):
self.assertEqual(
kunit_parser.TestStatus.NO_TESTS,
result.status)
+ no_plan_log = test_data_path('test_is_test_passed-no_tests_no_plan.log')
+ with open(no_plan_log) as file:
+ result = kunit_parser.parse_run_tests(
+ kunit_parser.extract_tap_lines(file.readlines()))
+ self.assertEqual(0, len(result.test.subtests[0].subtests[0].subtests))
+ self.assertEqual(
+ kunit_parser.TestStatus.NO_TESTS,
+ result.test.subtests[0].subtests[0].status)
+

def test_no_kunit_output(self):
crash_log = test_data_path('test_insufficient_memory.log')
diff --git a/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log b/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log
new file mode 100644
index 000000000000..dd873c981108
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log
@@ -0,0 +1,7 @@
+TAP version 14
+1..1
+ # Subtest: suite
+ 1..1
+ # Subtest: case
+ ok 1 - case # SKIP
+ok 1 - suite
--
2.33.0.1079.g6e70778dc9-goog

David Gow

Oct 26, 2021, 9:37:15 PM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
It's possible that a parameterised test could end up with zero
parameters. At the moment, the test function will nevertheless be called
with NULL as the parameter. Instead, don't try to run the test code, and
just mark the test as SKIPped.

Reported-by: Daniel Latypov <dlat...@google.com>
Signed-off-by: David Gow <davi...@google.com>
---
lib/kunit/test.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 3bd741e50a2d..e028d98e4f5b 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -500,7 +500,10 @@ int kunit_run_tests(struct kunit_suite *suite)
kunit_print_subtest_start(suite);

kunit_suite_for_each_test_case(suite, test_case) {
- struct kunit test = { .param_value = NULL, .param_index = 0 };
+ /* The initial param value is nonzero, as we want
+ * non-parametrised tests to run once.
+ */
+ struct kunit test = { .param_value = (void *)-1, .param_index = 0 };
struct kunit_result_stats param_stats = { 0 };
test_case->status = KUNIT_SKIPPED;

@@ -510,7 +513,7 @@ int kunit_run_tests(struct kunit_suite *suite)
test.param_value = test_case->generate_params(NULL, param_desc);
}

- do {
+ while (test.param_value) {
kunit_run_case_catch_errors(suite, test_case, &test);

if (test_case->generate_params) {
@@ -530,11 +533,12 @@ int kunit_run_tests(struct kunit_suite *suite)
param_desc[0] = '\0';
test.param_value = test_case->generate_params(test.param_value, param_desc);
test.param_index++;
- }
+ } else
+ test.param_value = NULL;

kunit_update_stats(&param_stats, test.status);

- } while (test.param_value);
+ }

kunit_print_test_stats(&test, param_stats);

--
2.33.0.1079.g6e70778dc9-goog

David Gow

Oct 26, 2021, 9:37:17 PM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
Currently, the results for individual parameters in a parameterised test
are simply output as (K)TAP diagnostic lines.

As kunit_tool now supports nested subtests, report each parameter as its
own subtest.

For example, here's what the output now looks like:
# Subtest: inode_test_xtimestamp_decoding
ok 1 - 1901-12-13 Lower bound of 32bit < 0 timestamp, no extra bits
ok 2 - 1969-12-31 Upper bound of 32bit < 0 timestamp, no extra bits
ok 3 - 1970-01-01 Lower bound of 32bit >=0 timestamp, no extra bits
ok 4 - 2038-01-19 Upper bound of 32bit >=0 timestamp, no extra bits
ok 5 - 2038-01-19 Lower bound of 32bit <0 timestamp, lo extra sec bit on
ok 6 - 2106-02-07 Upper bound of 32bit <0 timestamp, lo extra sec bit on
ok 7 - 2106-02-07 Lower bound of 32bit >=0 timestamp, lo extra sec bit on
ok 8 - 2174-02-25 Upper bound of 32bit >=0 timestamp, lo extra sec bit on
ok 9 - 2174-02-25 Lower bound of 32bit <0 timestamp, hi extra sec bit on
ok 10 - 2242-03-16 Upper bound of 32bit <0 timestamp, hi extra sec bit on
ok 11 - 2242-03-16 Lower bound of 32bit >=0 timestamp, hi extra sec bit on
ok 12 - 2310-04-04 Upper bound of 32bit >=0 timestamp, hi extra sec bit on
ok 13 - 2310-04-04 Upper bound of 32bit>=0 timestamp, hi extra sec bit 1. 1 ns
ok 14 - 2378-04-22 Lower bound of 32bit>= timestamp. Extra sec bits 1. Max ns
ok 15 - 2378-04-22 Lower bound of 32bit >=0 timestamp. All extra sec bits on
ok 16 - 2446-05-10 Upper bound of 32bit >=0 timestamp. All extra sec bits on
# inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16
ok 1 - inode_test_xtimestamp_decoding

Signed-off-by: David Gow <davi...@google.com>
Reviewed-by: Daniel Latypov <dlat...@google.com>
---
lib/kunit/test.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index e028d98e4f5b..fe2ab31b5949 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -511,6 +511,8 @@ int kunit_run_tests(struct kunit_suite *suite)
/* Get initial param. */
param_desc[0] = '\0';
test.param_value = test_case->generate_params(NULL, param_desc);
+ kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+ "# Subtest: %s", test_case->name);
}

while (test.param_value) {
@@ -523,9 +525,8 @@ int kunit_run_tests(struct kunit_suite *suite)

Daniel Latypov

Oct 27, 2021, 3:00:08 PM
to David Gow, Brendan Higgins, Rae Moar, Shuah Khan, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
On Tue, Oct 26, 2021 at 6:37 PM David Gow <davi...@google.com> wrote:
>
> It's possible that a parameterised test could end up with zero
> parameters. At the moment, the test function will nevertheless be called
> with NULL as the parameter. Instead, don't try to run the test code, and
> just mark the test as SKIPped.
>
> Reported-by: Daniel Latypov <dlat...@google.com>
> Signed-off-by: David Gow <davi...@google.com>
> ---
> lib/kunit/test.c | 12 ++++++++----
> 1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/lib/kunit/test.c b/lib/kunit/test.c
> index 3bd741e50a2d..e028d98e4f5b 100644
> --- a/lib/kunit/test.c
> +++ b/lib/kunit/test.c
> @@ -500,7 +500,10 @@ int kunit_run_tests(struct kunit_suite *suite)
> kunit_print_subtest_start(suite);
>
> kunit_suite_for_each_test_case(suite, test_case) {
> - struct kunit test = { .param_value = NULL, .param_index = 0 };
> + /* The initial param value is nonzero, as we want
> + * non-parametrised tests to run once.
> + */
> + struct kunit test = { .param_value = (void *)-1, .param_index = 0 };

(Not a strong preference)

Hmm, I'd slightly prefer we don't set a dummy value of -1 for this.
I personally think something like this is a bit less subtle:

/* Run non-parameterised tests once */
while (!test_case->generate_params || test.param_value) {

if (!test_case->generate_params) break;
}

Alternatively, we don't need to share the loop

if (!test_case->generate_params) {
kunit_run_case_catch_errors(suite, test_case, &test);
kunit_update_stats(&param_stats, test.status);
} else while (test.param_value) {
kunit_run_case_catch_errors(suite, test_case, &test);
kunit_update_stats(&param_stats, test.status);
/* print subtest result and advance to the next param */
}

Daniel Latypov

Oct 27, 2021, 6:13:05 PM
to David Gow, Brendan Higgins, Rae Moar, Shuah Khan, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
On Tue, Oct 26, 2021 at 6:37 PM David Gow <davi...@google.com> wrote:
>
> The (K)TAP spec encourages test output to begin with a 'test plan': a
> count of the number of tests being run of the form:
> 1..n
>
> However, some test suites might not know the number of subtests in
> advance (for example, KUnit's parameterised tests use a generator
> function). In this case, it's not possible to print the test plan in
> advance.
>
> kunit_tool already parses test output which doesn't contain a plan, but
> reports an error. Since we want to use nested subtests with KUnit
> parameterised tests, remove this error.
>
> Signed-off-by: David Gow <davi...@google.com>

Reviewed-by: Daniel Latypov <dlat...@google.com>

This looks to be unchanged from v1.
Looks good to me given kunit itself will report SKIPPED for parameterised tests.

David Gow

Oct 27, 2021, 7:02:58 PM
to Daniel Latypov, Brendan Higgins, Rae Moar, Shuah Khan, KUnit Development, open list:KERNEL SELFTEST FRAMEWORK, Linux Kernel Mailing List
On Thu, Oct 28, 2021 at 6:13 AM Daniel Latypov <dlat...@google.com> wrote:
>
> On Tue, Oct 26, 2021 at 6:37 PM David Gow <davi...@google.com> wrote:
> >
> > The (K)TAP spec encourages test output to begin with a 'test plan': a
> > count of the number of tests being run of the form:
> > 1..n
> >
> > However, some test suites might not know the number of subtests in
> > advance (for example, KUnit's parameterised tests use a generator
> > function). In this case, it's not possible to print the test plan in
> > advance.
> >
> > kunit_tool already parses test output which doesn't contain a plan, but
> > reports an error. Since we want to use nested subtests with KUnit
> > parameterised tests, remove this error.
> >
> > Signed-off-by: David Gow <davi...@google.com>
>
> Reviewed-by: Daniel Latypov <dlat...@google.com>
>
> This looks to be unchanged from v1.

Yeah, the changes in v2 were all in the new patches.

-- David

David Gow

Oct 28, 2021, 2:42:07 AM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
The (K)TAP spec encourages test output to begin with a 'test plan': a
count of the number of tests being run of the form:
1..n

However, some test suites might not know the number of subtests in
advance (for example, KUnit's parameterised tests use a generator
function). In this case, it's not possible to print the test plan in
advance.

kunit_tool already parses test output which doesn't contain a plan, but
reports an error. Since we want to use nested subtests with KUnit
parameterised tests, remove this error.

Signed-off-by: David Gow <davi...@google.com>
Reviewed-by: Daniel Latypov <dlat...@google.com>
---

Changes since v2:
https://lore.kernel.org/linux-kselftest/20211027013702.2...@google.com/
- No code changes.
- Added Daniel's Reviewed-by.
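
For illustration, the plan-line handling described in this patch can be sketched in a few lines; this is a simplified approximation of parse_test_plan() in kunit_parser.py, not the actual tool code:

```python
import re
from typing import Optional

# Matches a (K)TAP test plan line such as "1..4", capturing the count.
TEST_PLAN = re.compile(r'^1\.\.([0-9]+)$')

def parse_test_plan(line: str) -> Optional[int]:
    """Return the expected subtest count, or None if no plan is present.

    With this patch, a missing plan is no longer an error: suites such
    as parameterised tests cannot know their subtest count in advance.
    """
    match = TEST_PLAN.match(line.strip())
    if not match:
        return None
    return int(match.group(1))
```

Note that a count of 0 still parses successfully here; how a zero-subtest result is reported is handled separately in this series.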

David Gow

Oct 28, 2021, 2:42:09 AM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
It's possible for a test to have a subtest header, but zero valid
subtests. We used to error on this if the test plan had no subtests
listed, but it's possible to have subtests without a test plan (indeed,
this is how parameterised tests work).

Tests with 0 subtests now have the result NO_TESTS, and will report an
error (which does not halt test execution, but is printed in a scary red
colour and is noted in the results summary).

Signed-off-by: David Gow <davi...@google.com>
- Report NO_TESTS as '[NO TESTS RUN]' in yellow, instead of '[FAILED]'
in red, particularly since it doesn't get counted as a failure.

tools/testing/kunit/kunit_parser.py | 16 +++++++++++-----
tools/testing/kunit/kunit_tool_test.py | 9 +++++++++
.../test_is_test_passed-no_tests_no_plan.log | 7 +++++++
3 files changed, 27 insertions(+), 5 deletions(-)
create mode 100644 tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log

diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 50ded55c168c..68c847e8ca58 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -360,9 +360,6 @@ def parse_test_plan(lines: LineStream, test: Test) -> bool:
test.log.append(lines.pop())
expected_count = int(match.group(1))
test.expected_count = expected_count
- if expected_count == 0:
- test.status = TestStatus.NO_TESTS
- test.add_error('0 tests run!')
return True

TEST_RESULT = re.compile(r'^(ok|not ok) ([0-9]+) (- )?([^#]*)( # .*)?$')
@@ -589,6 +586,8 @@ def format_test_result(test: Test) -> str:
return (green('[PASSED] ') + test.name)
elif test.status == TestStatus.SKIPPED:
return (yellow('[SKIPPED] ') + test.name)
+ elif test.status == TestStatus.NO_TESTS:
+ return (yellow('[NO TESTS RUN] ') + test.name)
elif test.status == TestStatus.TEST_CRASHED:
print_log(test.log)
return (red('[CRASHED] ') + test.name)
@@ -731,6 +730,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
# test plan
test.name = "main"
parse_test_plan(lines, test)
+ parent_test = True
else:
# If KTAP/TAP header is not found, test must be subtest
# header or test result line so parse attempt to parser
@@ -744,7 +744,7 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
expected_count = test.expected_count
subtests = []
test_num = 1
- while expected_count is None or test_num <= expected_count:
+ while parent_test and (expected_count is None or test_num <= expected_count):
# Loop to parse any subtests.
# Break after parsing expected number of tests or
# if expected number of tests is unknown break when test
@@ -779,9 +779,15 @@ def parse_test(lines: LineStream, expected_num: int, log: List[str]) -> Test:
parse_test_result(lines, test, expected_num)
else:
test.add_error('missing subtest result line!')
+
+ # Check for there being no tests
+ if parent_test and len(subtests) == 0:
+ test.status = TestStatus.NO_TESTS
+ test.add_error('0 tests run!')
+
# Add statuses to TestCounts attribute in Test object
bubble_up_test_results(test)
- if parent_test:
+ if parent_test and not main:
# If test has subtests and is not the main test object, print
# footer.
print_test_footer(test)
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index bc8793145713..c59fe0777387 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py

David Gow

Oct 28, 2021, 2:42:11 AM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
It's possible that a parameterised test could end up with zero
parameters. At the moment, the test function will nevertheless be called
with NULL as the parameter. Instead, don't try to run the test code, and
just mark the test as SKIPped.

Reported-by: Daniel Latypov <dlat...@google.com>
Signed-off-by: David Gow <davi...@google.com>
- Rework to not share the loop between the parameterised and
non-parameterised test cases.
- Suggested by Daniel Latypov.
- Avoids using a magic non-zero pointer value.

lib/kunit/test.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 3bd741e50a2d..dfe1127aacfd 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -508,12 +508,12 @@ int kunit_run_tests(struct kunit_suite *suite)
/* Get initial param. */
param_desc[0] = '\0';
test.param_value = test_case->generate_params(NULL, param_desc);
- }
+ kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+ "# Subtest: %s", test_case->name);

- do {
- kunit_run_case_catch_errors(suite, test_case, &test);
+ while (test.param_value) {
+ kunit_run_case_catch_errors(suite, test_case, &test);

- if (test_case->generate_params) {
if (param_desc[0] == '\0') {
snprintf(param_desc, sizeof(param_desc),
"param-%d", test.param_index);
@@ -530,11 +530,15 @@ int kunit_run_tests(struct kunit_suite *suite)
param_desc[0] = '\0';
test.param_value = test_case->generate_params(test.param_value, param_desc);
test.param_index++;
- }

+ kunit_update_stats(&param_stats, test.status);
+ }
+ } else {
+ /* Non-parameterised test. */
+ kunit_run_case_catch_errors(suite, test_case, &test);
kunit_update_stats(&param_stats, test.status);
+ }

- } while (test.param_value);

kunit_print_test_stats(&test, param_stats);

--
2.33.0.1079.g6e70778dc9-goog
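
The control flow after this rework can be modelled in a few lines of Python; this is a rough model of the restructured loop, not the kernel C code, and generate_params/run_one are hypothetical stand-ins for the real callbacks:

```python
def run_case(generate_params, run_one):
    """Model of the restructured kunit_run_tests() per-case loop.

    generate_params(prev) returns the next parameter or None when done;
    run_one(param) runs the test body. Both are hypothetical stand-ins.
    Returns the list of results; an empty list models the zero-parameter
    case, which is now skipped rather than run once with a NULL param.
    """
    if generate_params is None:
        return [run_one(None)]          # non-parameterised test
    results = []
    param = generate_params(None)       # get initial param
    while param is not None:            # zero params => loop never runs
        results.append(run_one(param))
        param = generate_params(param)
    return results
```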

David Gow

Oct 28, 2021, 2:42:14 AM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
Currently, the results for individual parameters in a parameterised test
are simply output as (K)TAP diagnostic lines.

As kunit_tool now supports nested subtests, report each parameter as its
own subtest.

For example, here's what the output now looks like:
# Subtest: inode_test_xtimestamp_decoding
ok 1 - 1901-12-13 Lower bound of 32bit < 0 timestamp, no extra bits
ok 2 - 1969-12-31 Upper bound of 32bit < 0 timestamp, no extra bits
ok 3 - 1970-01-01 Lower bound of 32bit >=0 timestamp, no extra bits
ok 4 - 2038-01-19 Upper bound of 32bit >=0 timestamp, no extra bits
ok 5 - 2038-01-19 Lower bound of 32bit <0 timestamp, lo extra sec bit on
ok 6 - 2106-02-07 Upper bound of 32bit <0 timestamp, lo extra sec bit on
ok 7 - 2106-02-07 Lower bound of 32bit >=0 timestamp, lo extra sec bit on
ok 8 - 2174-02-25 Upper bound of 32bit >=0 timestamp, lo extra sec bit on
ok 9 - 2174-02-25 Lower bound of 32bit <0 timestamp, hi extra sec bit on
ok 10 - 2242-03-16 Upper bound of 32bit <0 timestamp, hi extra sec bit on
ok 11 - 2242-03-16 Lower bound of 32bit >=0 timestamp, hi extra sec bit on
ok 12 - 2310-04-04 Upper bound of 32bit >=0 timestamp, hi extra sec bit on
ok 13 - 2310-04-04 Upper bound of 32bit>=0 timestamp, hi extra sec bit 1. 1 ns
ok 14 - 2378-04-22 Lower bound of 32bit>= timestamp. Extra sec bits 1. Max ns
ok 15 - 2378-04-22 Lower bound of 32bit >=0 timestamp. All extra sec bits on
ok 16 - 2446-05-10 Upper bound of 32bit >=0 timestamp. All extra sec bits on
# inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16
ok 1 - inode_test_xtimestamp_decoding

Signed-off-by: David Gow <davi...@google.com>
Reviewed-by: Daniel Latypov <dlat...@google.com>
---

Changes since v2:
https://lore.kernel.org/linux-kselftest/20211027013702.2...@google.com/
- No changes to this patch.


lib/kunit/test.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index dfe1127aacfd..c66fe1735054 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -520,9 +520,8 @@ int kunit_run_tests(struct kunit_suite *suite)

Daniel Latypov

Oct 28, 2021, 6:21:23 PM
to David Gow, Brendan Higgins, Rae Moar, Shuah Khan, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
On Wed, Oct 27, 2021 at 11:42 PM David Gow <davi...@google.com> wrote:
>
> It's possible that a parameterised test could end up with zero
> parameters. At the moment, the test function will nevertheless be called
> with NULL as the parameter. Instead, don't try to run the test code, and
> just mark the test as SKIPped.
>
> Reported-by: Daniel Latypov <dlat...@google.com>
> Signed-off-by: David Gow <davi...@google.com>

Reviewed-by: Daniel Latypov <dlat...@google.com>

There's a small bug in this patch noted below, we just need to move
the "# Subtest" change into the child patch.
If/when we make that change, I have an optional suggestion about
flipping the if/else branch.

But other than that, this looks good to me.

> ---
>
> Changes since v2:
> https://lore.kernel.org/linux-kselftest/20211027013702.2...@google.com/
> - Rework to not share the loop between the parameterised and
> non-parameterised test cases.
> - Suggested by Daniel Latypov.
> - Avoids using a magic non-zero pointer value.
>
> lib/kunit/test.c | 16 ++++++++++------
> 1 file changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/lib/kunit/test.c b/lib/kunit/test.c
> index 3bd741e50a2d..dfe1127aacfd 100644
> --- a/lib/kunit/test.c
> +++ b/lib/kunit/test.c
> @@ -508,12 +508,12 @@ int kunit_run_tests(struct kunit_suite *suite)
> /* Get initial param. */
> param_desc[0] = '\0';
> test.param_value = test_case->generate_params(NULL, param_desc);
> - }
> + kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
> + "# Subtest: %s", test_case->name);

It looks like this change accidentally made its way into this patch as
opposed to the child.

This commit itself gives me a traffic light (red/yellow/green statuses):

[ERROR] Test inode_test_xtimestamp_decoding: 0 tests run!
====== [NO TESTS RUN] inode_test_xtimestamp_decoding =======
================= [PASSED] ext4_inode_test =================
============================================================

The problem is the output becomes this:
# Subtest: ext4_inode_test
1..1
# Subtest: inode_test_xtimestamp_decoding
# inode_test_xtimestamp_decoding: ok 1 - 1901-12-13 Lower bound of
32bit < 0 timestamp, no extra bits
...

>
> - do {
> - kunit_run_case_catch_errors(suite, test_case, &test);
> + while (test.param_value) {
> + kunit_run_case_catch_errors(suite, test_case, &test);
>
> - if (test_case->generate_params) {
> if (param_desc[0] == '\0') {
> snprintf(param_desc, sizeof(param_desc),
> "param-%d", test.param_index);
> @@ -530,11 +530,15 @@ int kunit_run_tests(struct kunit_suite *suite)
> param_desc[0] = '\0';
> test.param_value = test_case->generate_params(test.param_value, param_desc);
> test.param_index++;
> - }
>
> + kunit_update_stats(&param_stats, test.status);
> + }
> + } else {

I have a very slight preference for having the order of these branches swapped.
i.e.

if (!test_case->generate_params) {
/* Non-parameterised test */
} else { ... }

I prefer this because I think it's more readable for a few reasons:
* I like having the "normal" branch come first. This is likely the
code path a reader would care more about.
* I prefer having the shorter branch come first. It makes it easier to
read it through and see "oh, so this branch is just that one but with
XYZ"

Daniel Latypov

Oct 28, 2021, 6:36:33 PM
to David Gow, Brendan Higgins, Rae Moar, Shuah Khan, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
On Wed, Oct 27, 2021 at 11:42 PM David Gow <davi...@google.com> wrote:
>
> It's possible for a test to have a subtest header, but zero valid
> subtests. We used to error on this if the test plan had no subtests
> listed, but it's possible to have subtests without a test plan (indeed,
> this is how parameterised tests work).
>
> Tests with 0 subtests now have the result NO_TESTS, and will report an
> error (which does not halt test execution, but is printed in a scary red
> colour and is noted in the results summary).

Tested by tweaking ext4 tests (and running with patch 3)

[15:04:33] =============== ext4_inode_test (1 subtest) ================
[15:04:33] ============== inode_test_xtimestamp_decoding ==============
[15:04:33] [ERROR] Test inode_test_xtimestamp_decoding: 0 tests run!
[15:04:33] ====== [NO TESTS RUN] inode_test_xtimestamp_decoding =======
[15:04:33] ================ [SKIPPED] ext4_inode_test =================
[15:04:33] ============================================================
[15:04:33] Testing complete. Passed: 0, Failed: 0, Crashed: 0,
Skipped: 1, Errors: 1
[15:04:33] Elapsed time: 48.581s total, 0.000s configuring, 45.486s
building, 2.992s running

It's maybe a bit confusing to have ERROR, NO TESTS RUN, and SKIPPED
all printed for the same thing.

An alternative would be to drop the error, giving
[15:04:33] =============== ext4_inode_test (1 subtest) ================
[15:04:33] ============== inode_test_xtimestamp_decoding ==============
[15:04:33] ====== [NO TESTS RUN] inode_test_xtimestamp_decoding =======
[15:04:33] ================ [SKIPPED] ext4_inode_test =================
[15:04:33] ============================================================

But looking at it, I think I prefer the more explicit ERROR being there.

>
> Signed-off-by: David Gow <davi...@google.com>

Reviewed-by: Daniel Latypov <dlat...@google.com>

A few optional nits below.
I'd prefer we split these test cases out.
Perhaps:

def test_no_tests_empty_plan(self):
...

def test_no_tests_no_plan(self):
... # this new test

> + no_plan_log = test_data_path('test_is_test_passed-no_tests_no_plan.log')
> + with open(no_plan_log) as file:
> + result = kunit_parser.parse_run_tests(
> + kunit_parser.extract_tap_lines(file.readlines()))
> + self.assertEqual(0, len(result.test.subtests[0].subtests[0].subtests))
> + self.assertEqual(
> + kunit_parser.TestStatus.NO_TESTS,
> + result.test.subtests[0].subtests[0].status)

optional:
self.assertEqual(1, result.test.counts.errors)

David Gow

Nov 2, 2021, 3:30:21 AM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
The (K)TAP spec encourages test output to begin with a 'test plan': a
count of the number of tests being run of the form:
1..n

However, some test suites might not know the number of subtests in
advance (for example, KUnit's parameterised tests use a generator
function). In this case, it's not possible to print the test plan in
advance.

kunit_tool already parses test output which doesn't contain a plan, but
reports an error. Since we want to use nested subtests with KUnit
parameterised tests, remove this error.

Signed-off-by: David Gow <davi...@google.com>
Reviewed-by: Daniel Latypov <dlat...@google.com>
---

Changes since v3:
https://lore.kernel.org/linux-kselftest/20211028064154.2...@google.com/
- No code changes.
- Added Daniel's Reviewed-by.


tools/testing/kunit/kunit_parser.py | 5 ++---
tools/testing/kunit/kunit_tool_test.py | 5 ++++-
2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 3355196d0515..50ded55c168c 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
@@ -340,8 +340,8 @@ def parse_test_plan(lines: LineStream, test: Test) -> bool:
"""
Parses test plan line and stores the expected number of subtests in
test object. Reports an error if expected count is 0.
- Returns False and reports missing test plan error if fails to parse
- test plan.
+ Returns False and sets expected_count to None if there is no valid test
+ plan.

Accepted format:
- '1..[number of subtests]'
@@ -356,7 +356,6 @@ def parse_test_plan(lines: LineStream, test: Test) -> bool:
match = TEST_PLAN.match(lines.peek())
if not match:
test.expected_count = None
- test.add_error('missing plan line!')
return False
test.log.append(lines.pop())
expected_count = int(match.group(1))
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index 9c4126731457..bc8793145713 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -191,7 +191,10 @@ class KUnitParserTest(unittest.TestCase):
result = kunit_parser.parse_run_tests(
kunit_parser.extract_tap_lines(
file.readlines()))
- self.assertEqual(2, result.test.counts.errors)
+ # A missing test plan is not an error.
+ self.assertEqual(0, result.test.counts.errors)
+ # All tests should be accounted for.
+ self.assertEqual(10, result.test.counts.total())
self.assertEqual(
kunit_parser.TestStatus.SUCCESS,
result.status)
--
2.33.1.1089.g2158813163f-goog

David Gow

Nov 2, 2021, 3:30:23 AM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
It's possible for a test to have a subtest header, but zero valid
subtests. We used to error on this if the test plan had no subtests
listed, but it's possible to have subtests without a test plan (indeed,
this is how parameterised tests work).

Tests with 0 subtests now have the result NO_TESTS, and will report an
error (which does not halt test execution, but is printed in a scary red
colour and is noted in the results summary).

Signed-off-by: David Gow <davi...@google.com>
Reviewed-by: Daniel Latypov <dlat...@google.com>
---

Changes since v3:
https://lore.kernel.org/linux-kselftest/20211028064154.2...@google.com/
- Split up and expanded the tests, as suggested by Daniel.
- Report NO_TESTS as '[NO TESTS RUN]' in yellow, instead of '[FAILED]'
in red, particularly since it doesn't get counted as a failure.


tools/testing/kunit/kunit_parser.py | 16 +++++++++++-----
tools/testing/kunit/kunit_tool_test.py | 12 ++++++++++++
.../test_is_test_passed-no_tests_no_plan.log | 7 +++++++
3 files changed, 30 insertions(+), 5 deletions(-)
create mode 100644 tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log

diff --git a/tools/testing/kunit/kunit_parser.py b/tools/testing/kunit/kunit_parser.py
index 50ded55c168c..68c847e8ca58 100644
--- a/tools/testing/kunit/kunit_parser.py
+++ b/tools/testing/kunit/kunit_parser.py
diff --git a/tools/testing/kunit/kunit_tool_test.py b/tools/testing/kunit/kunit_tool_test.py
index bc8793145713..9de2072089e6 100755
--- a/tools/testing/kunit/kunit_tool_test.py
+++ b/tools/testing/kunit/kunit_tool_test.py
@@ -209,6 +209,18 @@ class KUnitParserTest(unittest.TestCase):
kunit_parser.TestStatus.NO_TESTS,
result.status)

+ def test_no_tests_no_plan(self):
+ no_plan_log = test_data_path('test_is_test_passed-no_tests_no_plan.log')
+ with open(no_plan_log) as file:
+ result = kunit_parser.parse_run_tests(
+ kunit_parser.extract_tap_lines(file.readlines()))
+ self.assertEqual(0, len(result.test.subtests[0].subtests[0].subtests))
+ self.assertEqual(
+ kunit_parser.TestStatus.NO_TESTS,
+ result.test.subtests[0].subtests[0].status)
+ self.assertEqual(1, result.test.counts.errors)
+
+
def test_no_kunit_output(self):
crash_log = test_data_path('test_insufficient_memory.log')
print_mock = mock.patch('builtins.print').start()
diff --git a/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log b/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log
new file mode 100644
index 000000000000..dd873c981108
--- /dev/null
+++ b/tools/testing/kunit/test_data/test_is_test_passed-no_tests_no_plan.log
@@ -0,0 +1,7 @@
+TAP version 14
+1..1
+ # Subtest: suite
+ 1..1
+ # Subtest: case
+ ok 1 - case # SKIP
+ok 1 - suite
--
2.33.1.1089.g2158813163f-goog
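
The rule this patch adds boils down to: a test with a subtest header but zero collected subtests gets NO_TESTS, whether or not a plan line was seen. A minimal sketch, simplified from the new check in parse_test() rather than the actual kunit_parser.py code:

```python
from enum import Enum, auto

class TestStatus(Enum):
    SUCCESS = auto()
    SKIPPED = auto()
    NO_TESTS = auto()

def check_no_tests(is_parent_test: bool, subtests: list):
    """Return (status, errors) for the zero-subtest check.

    A parent test (one with a subtest header) that collected no
    subtests is flagged NO_TESTS and records a non-fatal
    '0 tests run!' error; otherwise the status is decided elsewhere.
    """
    if is_parent_test and len(subtests) == 0:
        return TestStatus.NO_TESTS, ['0 tests run!']
    return None, []
```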

David Gow

Nov 2, 2021, 3:30:25 AM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
It's possible that a parameterised test could end up with zero
parameters. At the moment, the test function will nevertheless be called
with NULL as the parameter. Instead, don't try to run the test code, and
just mark the test as SKIPped.

Reported-by: Daniel Latypov <dlat...@google.com>
Signed-off-by: David Gow <davi...@google.com>
Reviewed-by: Daniel Latypov <dlat...@google.com>
---

- Fix a change which should've been in patch 4.
- Reverse the order of the if conditional: handle non-parameterised case
first.
- Rework to not share the loop between the parameterised and
non-parameterised test cases.
- Suggested by Daniel Latypov.
- Avoids using a magic non-zero pointer value.

lib/kunit/test.c | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index 3bd741e50a2d..f96498ede2cc 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -504,16 +504,18 @@ int kunit_run_tests(struct kunit_suite *suite)
struct kunit_result_stats param_stats = { 0 };
test_case->status = KUNIT_SKIPPED;

- if (test_case->generate_params) {
+ if (!test_case->generate_params) {
+ /* Non-parameterised test. */
+ kunit_run_case_catch_errors(suite, test_case, &test);
+ kunit_update_stats(&param_stats, test.status);
+ } else {
/* Get initial param. */
param_desc[0] = '\0';
test.param_value = test_case->generate_params(NULL, param_desc);
- }

- do {
- kunit_run_case_catch_errors(suite, test_case, &test);
+ while (test.param_value) {
+ kunit_run_case_catch_errors(suite, test_case, &test);

- if (test_case->generate_params) {
if (param_desc[0] == '\0') {
snprintf(param_desc, sizeof(param_desc),
"param-%d", test.param_index);
@@ -530,11 +532,11 @@ int kunit_run_tests(struct kunit_suite *suite)
param_desc[0] = '\0';
test.param_value = test_case->generate_params(test.param_value, param_desc);
test.param_index++;
- }

- kunit_update_stats(&param_stats, test.status);
+ kunit_update_stats(&param_stats, test.status);
+ }
+ }

- } while (test.param_value);

kunit_print_test_stats(&test, param_stats);

--
2.33.1.1089.g2158813163f-goog

David Gow

Nov 2, 2021, 3:30:28 AM
to Brendan Higgins, Rae Moar, Daniel Latypov, Shuah Khan, David Gow, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
Currently, the results for individual parameters in a parameterised test
are simply output as (K)TAP diagnostic lines.

As kunit_tool now supports nested subtests, report each parameter as its
own subtest.

For example, here's what the output now looks like:
# Subtest: inode_test_xtimestamp_decoding
ok 1 - 1901-12-13 Lower bound of 32bit < 0 timestamp, no extra bits
ok 2 - 1969-12-31 Upper bound of 32bit < 0 timestamp, no extra bits
ok 3 - 1970-01-01 Lower bound of 32bit >=0 timestamp, no extra bits
ok 4 - 2038-01-19 Upper bound of 32bit >=0 timestamp, no extra bits
ok 5 - 2038-01-19 Lower bound of 32bit <0 timestamp, lo extra sec bit on
ok 6 - 2106-02-07 Upper bound of 32bit <0 timestamp, lo extra sec bit on
ok 7 - 2106-02-07 Lower bound of 32bit >=0 timestamp, lo extra sec bit on
ok 8 - 2174-02-25 Upper bound of 32bit >=0 timestamp, lo extra sec bit on
ok 9 - 2174-02-25 Lower bound of 32bit <0 timestamp, hi extra sec bit on
ok 10 - 2242-03-16 Upper bound of 32bit <0 timestamp, hi extra sec bit on
ok 11 - 2242-03-16 Lower bound of 32bit >=0 timestamp, hi extra sec bit on
ok 12 - 2310-04-04 Upper bound of 32bit >=0 timestamp, hi extra sec bit on
ok 13 - 2310-04-04 Upper bound of 32bit>=0 timestamp, hi extra sec bit 1. 1 ns
ok 14 - 2378-04-22 Lower bound of 32bit>= timestamp. Extra sec bits 1. Max ns
ok 15 - 2378-04-22 Lower bound of 32bit >=0 timestamp. All extra sec bits on
ok 16 - 2446-05-10 Upper bound of 32bit >=0 timestamp. All extra sec bits on
# inode_test_xtimestamp_decoding: pass:16 fail:0 skip:0 total:16
ok 1 - inode_test_xtimestamp_decoding

Signed-off-by: David Gow <davi...@google.com>
Reviewed-by: Daniel Latypov <dlat...@google.com>
---

- Fix the missing log line which ended up in patch 3 by mistake.
- No changes to this patch.


lib/kunit/test.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/lib/kunit/test.c b/lib/kunit/test.c
index f96498ede2cc..c7ed4aabec04 100644
--- a/lib/kunit/test.c
+++ b/lib/kunit/test.c
@@ -512,6 +512,8 @@ int kunit_run_tests(struct kunit_suite *suite)
/* Get initial param. */
param_desc[0] = '\0';
test.param_value = test_case->generate_params(NULL, param_desc);
+ kunit_log(KERN_INFO, &test, KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+ "# Subtest: %s", test_case->name);

while (test.param_value) {
kunit_run_case_catch_errors(suite, test_case, &test);
@@ -522,9 +524,8 @@ int kunit_run_tests(struct kunit_suite *suite)
}

kunit_log(KERN_INFO, &test,
- KUNIT_SUBTEST_INDENT
- "# %s: %s %d - %s",
- test_case->name,
+ KUNIT_SUBTEST_INDENT KUNIT_SUBTEST_INDENT
+ "%s %d - %s",
kunit_status_to_ok_not_ok(test.status),
test.param_index + 1, param_desc);

--
2.33.1.1089.g2158813163f-goog
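
The output shape this patch produces can be sketched as follows; this is a simplified model of the kunit_log() calls in lib/kunit/test.c, not the kernel code itself, and the four-space indent width is an assumption mirroring KUNIT_SUBTEST_INDENT:

```python
KUNIT_SUBTEST_INDENT = "    "  # assumed four spaces, as in the kernel macro

def format_param_results(case_name, results):
    """Render parameterised-test results as nested KTAP subtests.

    results is a list of (ok, description) tuples. Each parameter is
    now its own numbered subtest line under a '# Subtest:' header,
    rather than a '# case: ok N - ...' diagnostic line.
    """
    indent = KUNIT_SUBTEST_INDENT * 2
    lines = [f"{indent}# Subtest: {case_name}"]
    for i, (ok, desc) in enumerate(results, start=1):
        lines.append(f"{indent}{'ok' if ok else 'not ok'} {i} - {desc}")
    return lines
```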

Brendan Higgins

Dec 7, 2021, 3:18:45 PM
to David Gow, Rae Moar, Daniel Latypov, Shuah Khan, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
On Tue, Nov 2, 2021 at 3:30 AM David Gow <davi...@google.com> wrote:
>
> The (K)TAP spec encourages test output to begin with a 'test plan': a
> count of the number of tests being run of the form:
> 1..n
>
> However, some test suites might not know the number of subtests in
> advance (for example, KUnit's parameterised tests use a generator
> function). In this case, it's not possible to print the test plan in
> advance.
>
> kunit_tool already parses test output which doesn't contain a plan, but
> reports an error. Since we want to use nested subtests with KUnit
> parameterised tests, remove this error.
>
> Signed-off-by: David Gow <davi...@google.com>
> Reviewed-by: Daniel Latypov <dlat...@google.com>

Reviewed-by: Brendan Higgins <brendan...@google.com>

Brendan Higgins

Dec 7, 2021, 3:22:37 PM
to David Gow, Rae Moar, Daniel Latypov, Shuah Khan, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
On Tue, Nov 2, 2021 at 3:30 AM David Gow <davi...@google.com> wrote:
>
> It's possible for a test to have a subtest header, but zero valid
> subtests. We used to error on this if the test plan had no subtests
> listed, but it's possible to have subtests without a test plan (indeed,
> this is how parameterised tests work).
>
> Tests with 0 subtests now have the result NO_TESTS, and will report an
> error (which does not halt test execution, but is printed in a scary red
> colour and is noted in the results summary).
>
> Signed-off-by: David Gow <davi...@google.com>
> Reviewed-by: Daniel Latypov <dlat...@google.com>

Reviewed-by: Brendan Higgins <brendan...@google.com>

Brendan Higgins

Dec 7, 2021, 3:26:33 PM
to David Gow, Rae Moar, Daniel Latypov, Shuah Khan, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
On Tue, Nov 2, 2021 at 3:30 AM David Gow <davi...@google.com> wrote:
>
> It's possible that a parameterised test could end up with zero
> parameters. At the moment, the test function will nevertheless be called
> with NULL as the parameter. Instead, don't try to run the test code, and
> just mark the test as SKIPped.
>
> Reported-by: Daniel Latypov <dlat...@google.com>
> Signed-off-by: David Gow <davi...@google.com>
> Reviewed-by: Daniel Latypov <dlat...@google.com>

Reviewed-by: Brendan Higgins <brendan...@google.com>

Brendan Higgins

Dec 7, 2021, 3:30:57 PM
to David Gow, Rae Moar, Daniel Latypov, Shuah Khan, kuni...@googlegroups.com, linux-k...@vger.kernel.org, linux-...@vger.kernel.org
On Tue, Nov 2, 2021 at 3:30 AM David Gow <davi...@google.com> wrote:
>
Reviewed-by: Brendan Higgins <brendan...@google.com>