Broadly, the idea seems ok.
However, we will need all sorts of conditions for skipping tests.
Non-deployment of a view is one condition, but it's not the only one.
YAML tests need to be skipped if PyYAML isn't installed. Tests that
require transactions need to be skipped if the database backend is
MySQL with MyISAM tables. I don't think we have any OS- or
version-specific tests yet, but conceivably we could have tests that
only run under Windows, or that fail under Python 2.3, PyPy, or Jython.
I would be interested to see how you intend to encompass all these
'features' in the decorator without the decorator becoming a beast in
itself.
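One way to keep the decorator from becoming a beast is to make it take
the condition as an argument rather than know about every feature
itself. A rough sketch of what I mean (skip_unless and SkipTest are
hypothetical names, not anything that exists today):

```python
import functools

class SkipTest(Exception):
    """Hypothetical exception raised to signal a test should be skipped."""
    pass

def skip_unless(condition, reason):
    """Skip the decorated test unless `condition` holds.

    `condition` may be a bool or a zero-argument callable, so each
    feature check (PyYAML installed, transactional backend, right OS)
    stays a one-liner at the call site and the decorator stays small.
    """
    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            holds = condition() if callable(condition) else condition
            if not holds:
                raise SkipTest(reason)
            return test_func(*args, **kwargs)
        return wrapper
    return decorator
```

The call sites would then read like
`@skip_unless(lambda: has_yaml, "PyYAML is not installed")`, with the
feature checks living next to the tests that need them.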
> What we could do is filter this out in the output layer per Russell's
> idea. Check errors vs. SkippedTest and count those as a separate
> category in the output. Then we'd have to roll our own TestRunner
> instead of using unittest.TextTestRunner. Which would overlap a bit
> with #7884.
> That's the direction I'm leaning, but I thought I'd bring the topic up
> now to get a consensus.
Filtering on output sounds like a good approach if it can be done
elegantly - it should certainly be more elegant than rebuilding
unittest.TestCase. #7884 is a reasonable idea in itself; the crossover
with this suggestion is a nice bit of gravy.
Yours,
Russ Magee %-)