How to TODO a test

Will Coleda

Oct 6, 2014, 10:34:58 AM
to mxu...@googlegroups.com
With the docs unavailable, I can't look this up at the moment…

Is there a way to mark a test as TODO'd, so it's still executed, but
is not expected to pass?

--
Will "Coke" Coleda

Marc Esher

Oct 10, 2014, 4:34:21 PM
to mxu...@googlegroups.com
Hey Will,

Great question. Currently, the framework doesn't support tests that are executed but whose failures aren't counted as failures.

What is the use case for this?

Marc




Adam Tuttle

Oct 10, 2014, 5:21:18 PM
to mxu...@googlegroups.com
If you're just stubbing out tests, I usually do fail("test not implemented yet");
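
For example, in a stubbed-out test CFC (a rough sketch; the component
and test names below are made up):

component extends="mxunit.framework.TestCase" {

    // stub: fails loudly until the real test is written
    public void function testWidgetTotals() {
        fail("test not implemented yet");
    }

}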

Adam

Will Coleda

Oct 13, 2014, 8:13:29 AM
to mxu...@googlegroups.com
FYI, this is a paradigm I use frequently when writing tests in
Perl 5/6 using TAP.

There are two cases where I find TODO appropriate:

1) When something is not implemented yet: a TODO'd test still runs,
but doesn't report anything exceptional unless it passes; at that
point, you get a notice that the feature is working, and you un-TODO
the test. This is as opposed to declaring a fail; you keep the code
that is expected to run, and you're running it with each invocation.

2) This just came up at $DAYJOB - we have a bitrotted test suite that
we're trying to resurrect; there are a dozen tests that are currently
failing. We want to run the whole test suite and get a pass/fail
result, except for those tests that we've said "this is too
complicated to fix right now". We'd add a ticket number for each
failing test to the test suite to track the issue (see the sketch
after this list). Once the suite has no failures, we can think about
putting the test run into an automated system that will warn us if
anything else starts failing.
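
In Test::More terms, something like this (a rough sketch; the ticket
number, the stand-in sub, and the assertion are made up):

use Test::More;

# stand-in for the legacy code under test
sub legacy_report_total { return 40 }

TODO: {
    local $TODO = "ticket #1234 - too complicated to fix right now";
    is( legacy_report_total(), 42, "legacy report totals match" );
}

done_testing;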

From perl's Test::More docs:

> The nice part about todo tests, as opposed to simply commenting out a block of tests,
> is it's like having a programmatic todo list. You know how much work is left to be done,
> you're aware of what bugs there are, and you'll know immediately when they're fixed.

Regards.

--
Will "Coke" Coleda

Chris Blackwell

Oct 13, 2014, 9:10:35 AM
to mxu...@googlegroups.com
I haven't tried it, but what about using the expectedException annotation?
If I understand it correctly, you should be able to do something like this:

private function todo() {
  throw(type="todo");
}

function todoTest() expectedException="todo" {
  todo();
}

Does that help at all?

Chris


Will Coleda

Oct 13, 2014, 9:12:05 AM
to mxu...@googlegroups.com
Sorry, no - I still have no insight as to when the test starts passing
in this case.

Chris Blackwell

Oct 13, 2014, 9:22:07 AM
to mxu...@googlegroups.com
I'm not familiar with how todo tests are reported in Perl, so do I have this right:

a test can either pass or fail - it can have no other state.
but, if a test is marked as todo and fails, it's not counted as a failure, but just as a 'todo' item.
if a test that's marked as todo passes, is that then a failure?

I'm unclear on how you would like to be notified when a todo starts passing.

Will Coleda

Oct 13, 2014, 9:45:31 AM
to mxu...@googlegroups.com
On Mon, Oct 13, 2014 at 9:21 AM, Chris Blackwell <ch...@team193.com> wrote:
> I'm not familiar with how todo tests are reported in Perl, so do I have this
> right:
>
> a test can either pass or fail - it can have no other state.

Basically. (not counting tests that do the equivalent of abort() or
have a syntax error)

> but, if a test is marked as todo and fails, it's not counted as a failure, but
> just as a 'todo' item.

Yes.

> if a test that's marked as todo passes, is that then a failure?

The difference is in how the harness running the tests reports it;
here are some simple examples:

$ cat foo.t
use Test::More;
is(1,1,"simple equiv");
done_testing;

$ perl foo.t
ok 1 - simple equiv
1..1

$ echo $?
0

When running a test using just perl, you get only the output for each
test and a test count.

When running it with a testing harness, you get more info:

$ prove -v foo.t
foo.t ..
ok 1 - simple equiv
1..1
ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.01 cusr
0.00 csys = 0.03 CPU)
Result: PASS

$ echo $?
0

Here's what this looks like with a TODO'd test - the test reports as
"not ok", but the overall result is still a pass:

$ cat bar.t
use Test::More;
TODO: {
    local $TODO = "NYI";
    is(1,2,"simple equiv");
}
done_testing;

$ perl bar.t
not ok 1 - simple equiv # TODO NYI
# Failed (TODO) test 'simple equiv'
# at bar.t line 4.
# got: '1'
# expected: '2'
1..1

$ echo $?
0

$ prove -v bar.t
bar.t ..
not ok 1 - simple equiv # TODO NYI

# Failed (TODO) test 'simple equiv'
# at bar.t line 4.
# got: '1'
# expected: '2'
1..1
ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.01 cusr
0.00 csys = 0.03 CPU)
Result: PASS

$ echo $?
0

And finally, with a passing TODO'd test:
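
(baz.t isn't shown above; assuming it's bar.t with the assertion
changed so it passes, it would look something like this:)

use Test::More;
TODO: {
    local $TODO = "NYI";
    is(1,1,"simple equiv");
}
done_testing;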

$ perl baz.t
ok 1 - simple equiv # TODO NYI
1..1

$ prove -v baz.t
baz.t ..
ok 1 - simple equiv # TODO NYI
1..1
ok
All tests successful.

Test Summary Report
-------------------
baz.t (Wstat: 0 Tests: 1 Failed: 0)
TODO passed: 1
Files=1, Tests=1, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.01 cusr
0.00 csys = 0.02 CPU)
Result: PASS

$ echo $?
0



So, from an automated testing standpoint, a failing TODO test still
fails as an individual test, but the harness doesn't report that the
file fails; a passing TODO test isn't a failure either; both the test
and the file pass.

By default, I get additional diagnostic information on a passing TODO
- typically I would see this when running the tests as a developer,
rather than via an automated testing process (though I've done this
both ways)

In the meantime, until the test starts passing, I ensure that the code
at least compiles, and I'll have some way of telling when the test
starts working.

Regards.