
unittest: Calling tests in line number order


Antoon Pardon

May 23, 2008, 5:36:40 AM
Some time ago I asked whether it would be possible for unittest to
perform the tests in order of appearance in the file.

The answers seemed to be negative. Since I really would like this
behaviour, I took the trouble of looking through the source, and
I think I found a minor way to get this behaviour.

Now my questions are:

Are other people interested in this behaviour?
Does the following patch have a chance of being introduced in the
standard Python distribution?

*** /usr/lib/python2.5/unittest.py 2008-04-17 16:26:37.000000000 +0200
--- unittest.py 2008-05-23 11:19:57.000000000 +0200
***************
*** 570,575 ****
--- 570,577 ----
          """
          def isTestMethod(attrname, testCaseClass=testCaseClass, prefix=self.testMethodPrefix):
              return attrname.startswith(prefix) and callable(getattr(testCaseClass, attrname))
+         def getlinenr(name):
+             return getattr(testCaseClass, name).im_func.func_code.co_firstlineno
          testFnNames = filter(isTestMethod, dir(testCaseClass))
          for baseclass in testCaseClass.__bases__:
              for testFnName in self.getTestCaseNames(baseclass):
***************
*** 577,582 ****
--- 579,586 ----
                  testFnNames.append(testFnName)
          if self.sortTestMethodsUsing:
              testFnNames.sort(self.sortTestMethodsUsing)
+         else:
+             testFnNames.sort(key=getlinenr)
          return testFnNames

John Roth

May 23, 2008, 11:01:52 AM
On May 23, 3:36 am, Antoon Pardon <apar...@forel.vub.ac.be> wrote:
> Some time ago I asked whether it would be possible for unittest to
> perform the tests in order of appearance in the file.
>
> The answers seemed to be negative. Since I really would like this
> behaviour, I took the trouble of looking through the source, and
> I think I found a minor way to get this behaviour.
>
> Now my questions are:
>
> Are other people interested in this behaviour?

I'm not. If there are any changes in the current behavior, I'd much
prefer that unittest create a random permutation as a way of flushing
out inter-test dependencies.
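
A rough sketch of what I mean, as a loader subclass (untested, and the
class name is my own invention):

import random
import unittest

class RandomOrderTestLoader(unittest.TestLoader):
    # Shuffle the test method names instead of sorting them, to help
    # flush out hidden dependencies between tests.
    def getTestCaseNames(self, testCaseClass):
        names = list(unittest.TestLoader.getTestCaseNames(self, testCaseClass))
        random.shuffle(names)
        return names

Something like unittest.main(testLoader=RandomOrderTestLoader()) would
then run each class's tests in a fresh random order.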

> Does the following patch have a chance of being introduced in the
> standard Python distribution?

I certainly hope not!

John Roth
Python FIT

Ben Finney

May 23, 2008, 10:23:13 PM
Antoon Pardon <apa...@forel.vub.ac.be> writes:

> The answers seemed to be negative. Since I really would like this
> behaviour

I didn't see you explain *why* you think this behaviour is desirable.
You've already had responses that explained why it leaves you open to
more bugs when code only works if tests run in a specific order.

> Are other people interested in this behaviour?
> Does the following patch have a chance of being introduced in the
> standard Python distribution?

No and no. If anything, unit test cases should be run in a completely
*non*-deterministic sequence.

--
\ "In prayer, it is better to have a heart without words than |
`\ words without heart." -- Mahatma Gandhi |
_o__) |
Ben Finney

Roy Smith

May 24, 2008, 9:12:23 AM
In article
<84ff733a-e364-437d...@m73g2000hsh.googlegroups.com>,
John Roth <john...@gmail.com> wrote:

> > Does the following patch has a chance of being introduced in the
> > standard python distribution?
>
> I certainly hope not!

I think you're being overly negative here. Antoon went to the trouble to
read the sources and post a diff. At the very least, he deserves a more
polite response.

But, more than that, I think Antoon's idea has some merit. I understand
that the mantra in unit testing is that the tests should be able to be run
in any order. Still, it's the job of a library package to make it easy to
do things, not to enforce policy. If somebody (for whatever reason) has a
need to run their tests in a certain order, why is it our job to make it
hard for them to do that?

In fact, unittest.defaultTestLoader *already* sorts the tests into
alphabetical order by their name. So, if somebody wanted to get the tests
run in order, they could just name their tests "test0001", "test0002", etc.
In fact, I've done that in the past when I had some (long forgotten) reason
why I wanted to run a bunch of things in order. Allowing the tests to be
sorted by line number order instead of by name just makes it a little
easier to do the same thing.
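
For instance (a throwaway sketch, with made-up test names):

import unittest

class MarshalTests(unittest.TestCase):
    # The default loader sorts names alphabetically, so zero-padded
    # numbers force the intended execution order.
    def test_0001_char(self):
        self.assertEqual(ord('A'), 65)
    def test_0002_int(self):
        self.assertEqual(1 + 1, 2)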

If somebody wants that functionality, and is willing to put in the effort
to write the code to do it, and contribute that back to the community, I
don't see any reason why it shouldn't be considered. It would have to be
done in a way that doesn't change the current behavior (perhaps by shipping
a subclass of TestLoader which users could use instead of the default).
I'm not saying that we *need* to include it, just that it's not such a bad
idea that it deserves responses like "I certainly hope not!"

André

May 24, 2008, 9:22:27 AM
On May 24, 10:12 am, Roy Smith <r...@panix.com> wrote:

I totally agree.

I can't relate to anyone who wants to oppose a change that would give
more freedom to a programmer.

André

Fuzzyman

May 24, 2008, 9:24:27 AM


I second Roy's appreciation of your going to the trouble to post a
patch. There is another problem with your code, though: it depends on
the CPython implementation. Currently unittest works *great* with
IronPython; your code wouldn't.

Also, like others, I have had wonderful experiences of trying to track
down test failures that depend on the order that tests run in. Having
interdependencies between tests is a recipe for madness...

Michael Foord
http://www.ironpythoninaction.com/

Roy Smith

May 24, 2008, 9:44:22 AM
In article
<af66b0d4-f77d-4d54...@l64g2000hse.googlegroups.com>,
Fuzzyman <fuzz...@gmail.com> wrote:

> Also, like others, I have had wonderful experiences of trying to track
> down test failures that depend on the order that tests run in. Having
> interdependencies between tests is a recipe for madness...

I agree that tests should not depend on each other, but sometimes it's
still useful to have the tests run in a certain order for reporting
purposes.

If you're doing requirements tracking, it's nice to have the tests execute
in the same order as the requirements are listed. It makes interpreting
the output easier. Sure, you could give the test cases names like
"test_fr17.3a" and write your own getTestCaseNames(), but just putting them
into the file in the order you want them to run is easier. And making
things easy is what this is all about.
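
For what it's worth, a throwaway sketch of that kind of loader (the
requirement list and test names are invented):

import unittest

REQUIREMENT_ORDER = ['test_fr17_1', 'test_fr17_2', 'test_fr17_3a']

class RequirementOrderLoader(unittest.TestLoader):
    # Run tests in the order their requirements are listed; anything
    # not in the list runs last, in the default order.
    def getTestCaseNames(self, testCaseClass):
        names = unittest.TestLoader.getTestCaseNames(self, testCaseClass)
        def rank(name):
            if name in REQUIREMENT_ORDER:
                return REQUIREMENT_ORDER.index(name)
            return len(REQUIREMENT_ORDER)
        return sorted(names, key=rank)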

Fuzzyman

May 24, 2008, 10:10:15 AM
On May 24, 2:44 pm, Roy Smith <r...@panix.com> wrote:

Whilst I understand your point, I think the danger is that you end up
with hidden dependencies on the test order - which you're not aware of
and which the tests never expose. Certainly lay out your tests in a
logical order within the file, but I think you risk potential problems
by controlling the order they are run in. Other frameworks
specifically provide test-order randomizers for this very reason.

A worthwhile question for the OP - your patch seems fairly simple. Is
it easy for you to extend unittest for your own testing needs by
subclassing?

Unittest should definitely be easy for people who *want* this to add
it to their own testing environment.


All the best,


Michael Foord
http://www.ironpythoninaction.com/

Eric Wertman

May 24, 2008, 10:40:43 AM
to pytho...@python.org
>I can't relate to anyone who wants to oppose a change that would give
>more freedom to a programmer.

While in general I agree with this... I think that in the case of
Python, part of its base philosophy seems to be a tendency to encourage
a single way of doing things, and to create a path of least resistance
to the "best" way - "best" in this case being one or a handful of
people's opinion. I'm really OK with this, personally. I think it's
working well.

So... I'll post this because I'm sure at least one person hasn't seen it:

>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!

Roy Smith

May 24, 2008, 10:57:29 AM
In article
<a2c4d209-e203-4b43...@a70g2000hsh.googlegroups.com>,
Fuzzyman <fuzz...@gmail.com> wrote:

> Whilst I understand your point, I think the danger is that you end up
> with hidden dependencies on the test order - which you're not aware of
> and that the tests never expose.

Well, yes. But, this is no worse than the current situation, where the
tests are run in alphabetical order by default. You could still have
hidden dependencies and not realize it.

Fuzzyman

May 24, 2008, 1:10:23 PM
On May 24, 3:57 pm, Roy Smith <r...@panix.com> wrote:
> In article
> <a2c4d209-e203-4b43-aa57-f9f0248b2...@a70g2000hsh.googlegroups.com>,

>
>  Fuzzyman <fuzzy...@gmail.com> wrote:
> > Whilst I understand your point, I think the danger is that you end up
> > with hidden dependencies on the test order - which you're not aware of
> > and that the tests never expose.
>
> Well, yes.  But, this is no worse than the current situation, where the
> tests are run in alphabetical order by default.  You could still have
> hidden dependencies and not realize it.

Fair point. I'd still be -1 on the addition of this patch, but +1 on
the addition of a randomizer.

Michael Foord
http://www.ironpythoninaction.com/

Diez B. Roggisch

May 25, 2008, 7:28:19 AM
Roy Smith wrote:

> In article
> <af66b0d4-f77d-4d54...@l64g2000hse.googlegroups.com>,
> Fuzzyman <fuzz...@gmail.com> wrote:
>
>> Also, like others, I have had wonderful experiences of trying to track
>> down test failures that depend on the order that tests run in. Having
>> interdependencies between tests is a recipe for madness...
>
> I agree that tests should not depend on each other, but sometimes it's
> still useful to have the tests run in a certain order for reporting
> purposes.

Then sort your report. Seriously. Test output should delimit
individual test results in a way that makes them easily extractable.
Then, if you want them ordered for e.g. diff'ing, sort them.

Diez

John Roth

May 25, 2008, 9:17:58 AM
On May 24, 7:22 am, André <andre.robe...@gmail.com> wrote:

>
> I can't relate to anyone who wants to oppose a change that would give
> more freedom to a programmer.
>
> André

Well, you can already do that. Or anything else you want. It's not all
that difficult to change the algorithms in the unittest package
_without_ patching the code. How to do it could be better documented,
granted.
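
For instance, here is a minimal sketch (untested, and the class name is
mine) that gets the OP's line-number ordering purely by subclassing:

import unittest

class LineOrderTestLoader(unittest.TestLoader):
    # Sort test method names by where they appear in the source file
    # instead of alphabetically. CPython-specific, like the patch.
    def getTestCaseNames(self, testCaseClass):
        names = unittest.TestLoader.getTestCaseNames(self, testCaseClass)
        names.sort(key=lambda name: getattr(testCaseClass, name)
                                    .im_func.func_code.co_firstlineno)
        return names

if __name__ == '__main__':
    unittest.main(testLoader=LineOrderTestLoader())

No patch to the standard library required.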

For my major project (Python FIT) I've got my own test case extraction
mechanism that doesn't depend on patching the code. It lets me use any
name I want - that is, it doesn't depend on any kind of pattern match.
(It also avoids looking in subclasses for tests.) I find the naming
freedom to be quite useful in thinking about the test case.

I really don't care what the OP does in his own projects. My objection
is that putting this into the standard library sends a signal that it's
good practice to allow dependencies between tests. It most definitely
is _not_ good practice.

I like the technique of looking at the line numbers to get the
declaration order; it ought to be documented somewhere.

The proper place for this is either a recipe (
http://aspn.activestate.com/ASPN/Cookbook/Python/ ) or a note in the
documentation with a caveat that it's not good practice, but it may be
useful in some circumstances.

John Roth

Roy Smith

May 25, 2008, 10:13:18 AM
In article
<f838bc64-a4ef-4770...@t54g2000hsg.googlegroups.com>,
John Roth <john...@gmail.com> wrote:

> I really don't care what the OP does in his own projects. My objection
> is that, if it goes into the standard library, is that it passes a
> signal that it's good practice to allow dependencies between tests. It
> most definitely is _not_ good practice.

The OP stated that he wants the tests run in a given order. People are
assuming that's because he has dependencies between his tests. Maybe he
just wants them run in order because it's easier to understand the output
in a certain order.

For example, I'm currently working on some low-level data marshalling code
(in another language). This is the sort of stuff which the Python struct
module handles -- converting between internal form and network
representation.

There is a logical hierarchy of complexity. Handling characters is easier
than handling ints, which is easier than floats, which is easier than
doubles, and so on (arrays, sets and other composite types). If I was
porting this to a new platform, I would want the unit tests to run in a
logical order of simple to complex. If some assortment of tests failed,
you want to start debugging the problem on the least complex data types.
If the tests run in a specific order, it's easier to see where to start.

It's not that any test depends on any other test to run, in the sense that
there's some shared state between them. It's just that from a human
factors point of view, there's a logical order to run them in.

In fact, from a protocol point of view, some of the types really do depend
on each other. We send counted strings, for example, so we can't send a
string until we know how to send an int (for the string length). If the
first test that fails is the string test, I know right off that the problem
is not in how we send ints, because that test ran already and it passed.

Earlier, I gave another example of wanting tests to be run in the same
order as some externally controlled set of functional requirements. Again,
not because the tests have inter-dependencies, but because it just makes it
easier to interpret the results.

Don't assume inter-test dependencies (which I agree are a Bad Thing) is the
only reason you want to run tests in a specific order.

Roy Smith

May 25, 2008, 10:39:06 AM
"Diez B. Roggisch" <de...@nospam.web.de> wrote:

> > I agree that tests should not depend on each other, but sometimes it's
> > still useful to have the tests run in a certain order for reporting
> > purposes.
>
> Then sort your report. Seriously. Test output should delimit
> individual test results in a way that makes them easily extractable.
> Then, if you want them ordered for e.g. diff'ing, sort them.
>
> Diez

Here's an example of why *running* tests in order can make sense.

You could have a bunch of tests of increasing complexity. The first bunch
of tests all run in a few seconds and test some basic functionality. From
experience, you also know that these are the tests that are most likely to
fail as you port to a new environment.

There's also some tests which take a long time to run. If the basic stuff
that's being tested by the earlier tests doesn't work, there's no way these
tests could pass, but they still take a long time to fail.

It's really handy to have the simple tests RUN first. If you see they
fail, you can cancel the rest of the test run and get on with fixing your
code faster.

It's a good thing to make it easy to do things the right way, and difficult
to do things the wrong way. The danger is when you let your pre-conceived
notions of right and wrong trick you into making it difficult to do things
any way but YOUR way.

So far, the strongest argument I've seen against the OP's idea is that
it's not portable to IronPython. That's a legitimate argument. All the
rest of the "You're not supposed to do it that way" arguments are just
religion.

Fuzzyman

May 25, 2008, 10:40:24 AM
On May 25, 3:13 pm, Roy Smith <r...@panix.com> wrote:
> In article
> <f838bc64-a4ef-4770-b139-01284e0f8...@t54g2000hsg.googlegroups.com>,

>  John Roth <johnro...@gmail.com> wrote:
>
> > I really don't care what the OP does in his own projects. My objection
> > is that, if it goes into the standard library, is that it passes a
> > signal that it's good practice to allow dependencies between tests. It
> > most definitely is _not_ good practice.
>
> The OP stated that he wants the tests run in a given order.  People are
> assuming that's because he has dependencies between his tests.  Maybe he
> just wants them run in order because it's easier to understand the output
> in a certain order.

No, we're pointing out that running tests in a specific order can
introduce hidden dependencies without you being aware of it. Whilst
this is already the case with unittest, further enshrining it in the
standard library is a bad idea.

As mentioned elsewhere, providing a better reporting mechanism is
probably the way to get better understandable output.

Michael Foord
http://www.ironpythoninaction.com/

Diez B. Roggisch

May 25, 2008, 10:50:57 AM

> Here's an example of why *running* tests in order can make sense.
>
> You could have a bunch of tests of increasing complexity. The first bunch
> of tests all run in a few seconds and test some basic functionality. From
> experience, you also know that these are the tests that are most likely to
> fail as you port to a new environment.
>
> There's also some tests which take a long time to run. If the basic stuff
> that's being tested by the earlier tests doesn't work, there's no way these
> tests could pass, but they still take a long time to fail.
>
> It's really handy to have the simple tests RUN first. If you see they
> fail, you can cancel the rest of the test run and get on with fixing your
> code faster.

I don't see this as something that can be solved by ordering tests -
*especially* not on a per-method-level as the OP suggested, because I
tend to have test suites that span several files.

Instead, when I've been in the situation you describe, I have resorted
to an approach where I annotated tests as long- or short-running (or by
any other criterion), and then ran the tests that were appropriate.

For example, post-commit tests needed to be short, as otherwise the
feedback came too slowly.

Annotation could be done explicitly, or implicitly by grouping tests
together that have the desired property. However, as I prefer tests
that share e.g. the same module to test the same functionality, I'd
rather have an annotation mechanism.

So *if* anything should be changed IMHO it would be the introduction of
a tagging system or something equivalent, together with a selection
mechanism based on that.
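
A rough sketch of such a tagging mechanism (invented names, untested):

import unittest

def tags(*labels):
    # Attach tag labels to a test method.
    def decorate(func):
        func.tags = set(labels)
        return func
    return decorate

class TagTestLoader(unittest.TestLoader):
    # Load only the test methods carrying at least one wanted tag.
    def __init__(self, wanted):
        self.wanted = set(wanted)
    def getTestCaseNames(self, testCaseClass):
        names = unittest.TestLoader.getTestCaseNames(self, testCaseClass)
        return [name for name in names
                if self.wanted & getattr(getattr(testCaseClass, name),
                                         'tags', set())]

Decorate a test with @tags('short') and pass TagTestLoader(['short'])
as the testLoader to run only the quick ones.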

> It's a good thing to make it easy to do things the right way, and difficult
> to do things the wrong way. The danger is when you let your pre-conceived
> notions of right and wrong trick you into making it difficult to do things
> any way but YOUR way.
>
> So far, the strongest argument I've seen against the OP's idea is that it's
> not portable to Iron Python. That's a legitimate argument. All the rest
> of the "You're not supposed to do it that way" arguments are just religion.

So far the reasons for introducing them haven't been compelling either.
It neither works across several test suites, nor is it the only way to
order *results*, which you (rightly) pointed out is useful.

Diez

Diez B. Roggisch

May 25, 2008, 10:53:40 AM
> In fact, from a protocol point of view, some of the types really do depend
> on each other. We send counted strings, for example, so we can't send a
> string until we know how to send an int (for the string length). If the
> first test that fails is the string test, I know right off that the problem
> is not in how we send ints, because that test ran already and it passed.
>
> Earlier, I gave another example of wanting tests to be run in the same
> order as some externally controlled set of functional requirements. Again,
> not because the tests have inter-dependencies, but because it just makes it
> easier to interpret the results.
>
> Don't assume inter-test dependencies (which I agree are a Bad Thing) is the
> only reason you want to run tests in a specific order.

Both these points can be solved by ordering the output, or better, by
using groups of tests inside individual modules that test e.g. basic
functionality. Selecting those groups to run first, before trying the
more complicated tests, makes much more sense than just letting single
tests run in a fixed order.

Diez

Ryan Ginstrom

May 25, 2008, 11:39:29 AM
to pytho...@python.org
> On Behalf Of Roy Smith

> You could have a bunch of tests of increasing complexity.
> The first bunch of tests all run in a few seconds and test
> some basic functionality. From experience, you also know
> that these are the tests that are most likely to fail as you
> port to a new environment.
>
> There's also some tests which take a long time to run. If
> the basic stuff that's being tested by the earlier tests
> doesn't work, there's no way these tests could pass, but they
> still take a long time to fail.

How about something like this:

import unittest

def run_quickies():
    # Run the quick, i.e. actual unit tests,
    # located in folder ./unit_tests/
    # (the module names below are placeholders)
    suite = unittest.defaultTestLoader.loadTestsFromNames(
        ['unit_tests.test_basics'])
    unittest.TextTestRunner().run(suite)

def run_long_ones():
    # Run functional tests, integration tests, what have you,
    # located in folder ./integration_tests/
    suite = unittest.defaultTestLoader.loadTestsFromNames(
        ['integration_tests.test_full'])
    unittest.TextTestRunner().run(suite)

def whole_shebang():
    run_quickies()
    run_long_ones()

Now you do something like run the unit tests every time a file is saved, and
run the whole shebang nightly and every time a build is performed.

Regards,
Ryan Ginstrom

Matthew Woodcraft

May 25, 2008, 11:36:10 AM
Diez B. Roggisch <de...@nospam.web.de> wrote:
>I don't see this as something that can be solved by ordering tests -
>*especially* not on a per-method-level as the OP suggested, because I
>tend to have test suites that span several files.

unittest already runs multiple test suites in the order you specify
(which is another clue that running tests in order is not evil).

I suspect unittest's choice of alphabetical order for the tests within
a suite is more an artefact of its original Java implementation than
anything else.

-M-

John Roth

May 26, 2008, 2:57:13 PM
On May 25, 8:13 am, Roy Smith <r...@panix.com> wrote:
> Don't assume inter-test dependencies (which I agree are a Bad Thing) is the
> only reason you want to run tests in a specific order.

I'm not. As Michael Foord and Diez Roggisch say in other responses,
there are other solutions to those problems. It would be better to
pursue them if you want a change to the standard library.

John Roth

Antoon Pardon

May 29, 2008, 4:14:49 AM
On 2008-05-24, Fuzzyman <fuzz...@gmail.com> wrote:
>
> A worthwhile question for the OP - your patch seems fairly simple. Is
> it easy for you to extend unittest for your own testing needs by
> subclassing?

I've been ill these last few days, but I will look into this possibility.

--
Antoon Pardon
