Skip Test Decorators

Devin

Jul 24, 2008, 10:30:46 PM
to Django developers
I've added a patch to #4788 that defines decorators for conditionally
skipping tests and a usage of it per Jason and Russell's discussion in
IRC about #7611.

When tests are skipped, I'm raising a SkippedTest exception. Ideally
these would be handled differently from other errors, but that'd
require practically rewriting unittest, as mtredinnick mentioned.
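
For the sake of discussion, here's a minimal sketch of the sort of
decorator I mean -- the actual patch on #4788 may differ, and the names
are only illustrative:

# Minimal sketch -- the actual patch on #4788 may differ.
class SkippedTest(Exception):
    """Raised by a test that should be skipped rather than run."""
    pass

def skip_unless(condition, reason):
    """Skip the decorated test unless `condition` is true."""
    def decorator(test_func):
        def wrapper(*args, **kwargs):
            if not condition:
                raise SkippedTest(reason)
            return test_func(*args, **kwargs)
        # Preserve the test's name so unittest reports it correctly.
        wrapper.__name__ = test_func.__name__
        wrapper.__doc__ = test_func.__doc__
        return wrapper
    return decorator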

What we could do is filter this out in the output layer, per Russell's
idea: check errors against SkippedTest and count those as a separate
category in the output. We'd then have to roll our own TestRunner
instead of using unittest.TextTestRunner, which would overlap a bit
with #7884.
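
Roughly, I'm picturing something like the following -- only a sketch,
with placeholder names rather than actual code:

import unittest

class SkippedTest(Exception):
    """Stand-in for the exception raised by the skip decorators."""
    pass

class SkipAwareTestResult(unittest._TextTestResult):
    """Counts SkippedTest separately instead of lumping it in with errors."""

    def __init__(self, *args, **kwargs):
        unittest._TextTestResult.__init__(self, *args, **kwargs)
        self.skipped = []

    def addError(self, test, err):
        # err is the usual (exc_type, exc_value, traceback) tuple.
        if issubclass(err[0], SkippedTest):
            self.skipped.append((test, err[1]))
            if self.showAll:
                self.stream.writeln("SKIPPED")
            elif self.dots:
                self.stream.write("S")
        else:
            unittest._TextTestResult.addError(self, test, err)

class SkipAwareTestRunner(unittest.TextTestRunner):
    def _makeResult(self):
        return SkipAwareTestResult(self.stream, self.descriptions,
                                   self.verbosity)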

That's the direction I'm leaning, but I thought I'd bring the topic up
now to get a consensus.

Thanks,
Devin Naquin

Russell Keith-Magee

Jul 25, 2008, 7:45:14 AM
to django-d...@googlegroups.com
On Fri, Jul 25, 2008 at 10:30 AM, Devin <dna...@gmail.com> wrote:
>
> I've added a patch to #4788 that defines decorators for conditionally
> skipping tests and a usage of it per Jason and Russell's discussion in
> IRC about #7611.
>
> When tests are skipped, I'm raising a SkippedTest exception. Ideally
> these would be handled differently from other errors, but that'd
> require practically rewriting unittest, as mtredinnick mentioned.

Broadly, the idea seems ok.

However, there will be a need for all sorts of conditions for skipping
tests. Non-deployment of a view is one condition, but it's not the only
one. YAML tests need to be skipped if PyYAML isn't installed. Tests
that require transactions need to be skipped if the database backend is
MySQL with MyISAM tables. I don't think we have any OS- or
version-specific tests yet, but conceivably we could have tests that
only run under Windows, or fail under Python 2.3, or PyPy, or Jython.

I would be interested to see how you intend to fold all these
'features' into the decorator without making the decorator a beast in
itself.
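
To be concrete about the range of cases: if the decorator just takes an
arbitrary condition and a reason, the usage might look roughly like the
following. This is purely illustrative -- skip_unless is the
hypothetical decorator sketched earlier in the thread, not something
from the actual patch:

# Purely illustrative usage; skip_unless is the hypothetical decorator
# sketched earlier, not something from the actual patch.
import sys
from django.conf import settings
from django.test import TestCase

try:
    import yaml
    HAS_YAML = True
except ImportError:
    HAS_YAML = False

class ExampleTests(TestCase):

    @skip_unless(HAS_YAML, "PyYAML is not installed")
    def test_yaml_serializer(self):
        pass  # exercise the YAML serializer

    # A real check would also have to look at the MySQL storage engine,
    # not just the backend name.
    @skip_unless(settings.DATABASE_ENGINE != 'mysql',
                 "MyISAM tables don't support transactions")
    def test_transaction_rollback(self):
        pass  # needs real transaction support

    @skip_unless(sys.platform == 'win32',
                 "only meaningful on Windows")
    def test_windows_specific_paths(self):
        pass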

> What we could do is filter this out in the output layer, per Russell's
> idea: check errors against SkippedTest and count those as a separate
> category in the output. We'd then have to roll our own TestRunner
> instead of using unittest.TextTestRunner, which would overlap a bit
> with #7884.

> That's the direction I'm leaning, but I thought I'd bring the topic up
> now to get a consensus.

Filtering on output sounds like a good approach if it can be done
elegantly - it should certainly be more elegant than rebuilding
unittest.TestCase. #7884 is a reasonable idea in itself; the crossover
with this suggestion is a nice bit of gravy.

Yours,
Russ Magee %-)
