On Tue, Jul 20, 2010 at 17:40, scastro <dri...@gmail.com> wrote:
> I've read somewhere that nose doesn't run the test fixtures in the
> order they appear in the class. Is this going to be fixed in the next
> release?
It's good unit testing practice to keep all tests independent of each
other. If that's the case, the order in which the tests run is
irrelevant.
If your tests' results depend on the order they're run in, you have an
Erratic Test[1] code smell.
I hope nose never supports specifying the order of unit tests :)
Hope that helps,
Mike
[1] Of particular interest are the Interacting Tests, Interacting Test
Suites and Lonely Test sections:
http://xunitpatterns.com/Erratic%20Test.html
On Tue, Jul 20, 2010 at 19:59, scastro <dri...@gmail.com> wrote:
> Thanks for your reply. I understand that. My tests are all independent
> of each other. But in the following example:
>
> nosetests -d -s --verbosity=2 test_report.py test_lsof.py
> test_flagdata.py test_lsof.py
>
> I want test_lsof.py to run right after each of the other two tests. In
> this particular case, test_lsof.py contains a system call to "lsof" to
> monitor how many open files are left by the unit tests. That's why the
> order is important here.
If there's a need to run lsof after a test, why not call a function
which performs the desired task at the end of the test?
It feels like you're overloading your tests with more than one purpose.
There's probably a better way to get what you need. That's usually
what code smells indicate :)
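If it helps, here's a rough sketch of what I mean -- count_open_files()
and test_report() are made-up names, not your actual code:

import os
import subprocess

def count_open_files():
    # Ask lsof which files the current process holds; drop the header line.
    output = subprocess.Popen(["lsof", "-p", str(os.getpid())],
                              stdout=subprocess.PIPE).communicate()[0]
    return max(len(output.splitlines()) - 1, 0)

def test_report():
    before = count_open_files()
    # ... exercise the report code here ...
    assert count_open_files() <= before, "test leaked open files"

That keeps the leak check inside each test, so the order nose picks no
longer matters.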
Mike
Determinism is paramount when tracking down faults in code (otherwise,
what's the purpose of testing without traceability?). So I'd argue that
this non-determinism will cause you more grief than any efficiency it
buys you.
-Garrett
The way I deal with this (at least when using nose as a test runner)
is to split tests across multiple directories. The fast tests used
for quick feedback go in the unit test directory, and anything that
performs out-of-process I/O (files, network requests, etc.) or is known
to take more than a certain amount of time goes in the integration
tests directory. I've toyed with breaking them down further, but that
hasn't been necessary at this point. Then there is a script that runs
them all, in order of their complexity, and stops on the first failure.
That way you get the immediate feedback.
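A rough sketch of that kind of runner script (the directory names here
are just placeholders for whatever layout you use):

import subprocess
import sys

# Run the cheap suite first, then the slower integration suite, and stop
# at the first failing stage so feedback stays immediate.
for directory in ["tests/unit", "tests/integration"]:
    status = subprocess.call(["nosetests", directory])
    if status != 0:
        sys.exit(status)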
-Mark
That mirrors my experience of why test ordering can be helpful.
If I want to ensure that tests are isolated, I'd rather explicitly
use some randomization technique.
best,
holger
Well, yeah... if you group things together appropriately, each group
can run as one atomic block that doesn't affect the other set of tests,
and you can parse the results separately, so I suppose it's not an issue.
-Garrett
In addition to the suggestion of keeping your "fast" tests and "slow"
tests in their own directories, you could also use attributes:
http://somethingaboutorange.com/mrl/projects/nose/0.11.2/plugins/attrib.html
Your first run of tests could be:
$ nosetests -a fast
then if that passes,
$ nosetests -a slow
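For reference, tagging tests for the attrib plugin looks roughly like
this (the test names are made up for illustration):

from nose.plugins.attrib import attr

@attr('fast')
def test_parse_config():
    assert True

@attr('slow')
def test_full_pipeline():
    assert True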
>
> Thanks!
As mentioned earlier in the thread, it sounds like you instead need to
perform this as setup, either in the top-level package or in a custom
plugin. That way you can guarantee that the code your tests depend on
will run at the right time. Depending on the order of test execution is
a slippery slope and will start to bite you hard once you scale up your
test suite.
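As a minimal sketch of the package-level option: nose picks up fixtures
defined in the test package's __init__.py, so the preparation runs once
before any test in the package (what you actually do inside them is up
to you):

# tests/__init__.py
def setup_package():
    pass  # e.g. start whatever shared resource the tests rely on

def teardown_package():
    pass  # e.g. shut it down again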
>
> $ nosetests -v -a slow -a fast
> tests.test_slow1 ... ok
> tests.test_fast1 ... ok
> tests.test_fast2 ... ok
> tests.test_fast3 ... ok
> tests.test_slow2 ... ok
> tests.test_slow3 ... ok
>
> .:. jeremy
>
I was responding to this quote in your last message:
"...However, the problem for our build is we
need the slow tests and the fast tests to run at the same time (via --
processes=NN) with the slow tests started first..."
If you need the slow tests to start first then your tests are not
independent. If I've misunderstood, then maybe you can explain it in a
different way.
> Our tests are completely independent. We want to start slow tests
> before fast tests because it significantly lowers the time it takes to
> run our test suite since we use --processes.
If it's just for a speedup then, again, I'd suggest something like a
plugin that defines begin() so the plugin can run the preparation code
that gives you the speedup (rough sketch below).
If instead you still want to control ordering, there's probably not an
easy way to do that in Nose, especially when using multiple processes.
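Here's the sketch of the begin() idea -- the class name and the body of
begin() are placeholders, and you'd still need to register the plugin
with nose (e.g. via a setuptools entry point):

from nose.plugins import Plugin

class PrepareSuite(Plugin):
    name = 'prepare-suite'

    def begin(self):
        # Called once before any tests run, so the expensive preparation
        # happens exactly once regardless of ordering.
        pass  # e.g. warm caches or start services the slow tests need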
-Kumar
>
> Also, in the example I gave before about wanting to see simpler tests
> fail before complex ones, all the tests are independent.
>
> For some reason, there seems to be a belief that if you want a
> particular order of execution it's because of dependency problems. In
> both my examples, this is not the case. There are valid reasons for
> wanting independent tests to happen in a particular order.
>
> .:. jeremy
>
Do you have ideas or examples in mind of how you'd like to concretely
define ordering or test dependencies?
best,
holger