Sequencing tests


Samantha

Nov 23, 2011, 10:30:32 AM11/23/11
to NUnit-Discuss
Hi,
I'm doing some C# work writing Selenium tests that use NUnit...
still only scratching the surface with NUnit.

In NUnit, is there a way to force the order in which tests are executed?

Our product has several spots where we need to wait for an
indefinite time (say between 30 seconds and 10 minutes) for something
to be processed. Say I kick off something that takes 10 minutes to be
processed... it may only take 10 seconds to invoke the task, then I
need to wait 9:50 before validating. If I have a dozen of these
tests I'll be spending way too much time waiting.

If I can control the sequence of tests I can do other things while
I'm waiting.

Interested in hearing how others do this sort of thing.

Thanks,
Samantha

Charlie Poole

Nov 23, 2011, 11:33:02 AM11/23/11
to nunit-...@googlegroups.com
Sorry, but there isn't a way to do that yet in NUnit. At some point after
3.0, we'll run tests on multiple threads, which would solve your problem.

Some folks use the alphabetic ordering of tests to force an order, but
this is ugly and won't always work in the future since it's an accidental
artifact rather than a specification of how NUnit works.

But even doing that, you will only be moving the delays around, since
NUnit won't launch the next test until the prior one completes.

Charlie


John Brett

Nov 24, 2011, 6:50:33 AM11/24/11
to NUnit-Discuss
>   Our product has several spots where we need to wait for an
> indefinite time (say between 30 seconds and 10 minutes) for something
> to be processed.  Say I kick off something that takes 10 minutes to be
> processed... It may only take 10 seconds to invoke the task, then I
> need to wait for 9:50 before validating If I have a dozen of these
> tests I'll be spending way too much time waiting.


I've faced this issue in the past (mostly whilst trying to test
multi-threaded code). I've usually managed to identify or create an
indicator that tells me when the processing has occurred. In the worst
case, you then simply poll the indicator every 10 seconds, or in an
ideal case you simply wait on an event triggered by job completion.

My experience of doing this, btw, is that if you think a job could
take up to 10 minutes (!), then you're best off:
a) making the test time out after 3x as long (30 minutes). There's
little more frustrating than finding your test machine isn't quite as
quick as a developer machine, and that you have an intermittently
failing test that would have completed in 11 minutes if only it had
been given long enough;
b) logging the actual execution time of the job, so that you can go
back later and review the appropriate timeouts.
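A minimal sketch of that polling approach (the `Polling.WaitFor` helper is hypothetical, not an NUnit API; the timeout and interval are just the figures above):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

static class Polling
{
    // Wait until `condition` returns true, checking every `pollInterval`
    // and giving up after `timeout`. Returns the elapsed time so it can
    // be logged and used later to tune the timeout.
    public static TimeSpan WaitFor(Func<bool> condition,
                                   TimeSpan timeout,
                                   TimeSpan pollInterval)
    {
        var watch = Stopwatch.StartNew();
        while (!condition())
        {
            if (watch.Elapsed > timeout)
                throw new TimeoutException(
                    "Condition not met after " + watch.Elapsed);
            Thread.Sleep(pollInterval);
        }
        return watch.Elapsed;
    }
}
```

In a test you'd then write something like `var elapsed = Polling.WaitFor(() => JobIsComplete(jobId), TimeSpan.FromMinutes(30), TimeSpan.FromSeconds(10));` (where `JobIsComplete` stands in for whatever indicator your system exposes), and log `elapsed` to get the data for (b).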

BTW - what you're describing sounds more like a desire to parallelize
long-running tests, rather than control the exact ordering thereof.

John

goo...@my2cents.co.uk

Nov 24, 2011, 7:59:26 PM11/24/11
to nunit-...@googlegroups.com
This is something that we've had to do a lot of. As John Brett has said, the best way is to find the trigger that will tell you that the processing has occurred, and either hook into it or poll it.

From a tester's point of view, we liaised quite closely with the development team to speed up these timeframes by supplying tools or configurable parameters to change the delays. For instance, I'm assuming there is some kind of scheduled task that forces the order through; you could ask the developers what the scheduled task is and get them to allow you to fire it off remotely as needed (just had to do this exact thing with a Windows service today, in fact!).

Be careful when scoping your testing though. Ask yourself what exactly it is that you're testing, and make sure you know where your test ends and another test starts. What I mean by this is: are you testing that the scheduler works ok, or just that the script it uses works ok? (If the latter, then just fire the script off from within the test itself.) It's something that I've seen our testers get tangled up in a few times, and it causes a lot of headaches.

Failing that, you could perform all the actions for the tests as tests, and add the Asserts into the TearDownFixture, which would delay until a timespan after the test start time (look at the documentation, but it's run once per namespace for the test run, iirc). This is an awful workaround, as you wouldn't receive any kind of decent feedback in the UI or in the tests, and the first one that fails would cause the rest not to be tested.

Something of a feature request I can see here, though: something that would allow for "delayed assertions", where you can define a list of assertions to occur at a defined delayed time interval. Seems like a fairly large feature for a small use case though.

That's my 2 pennies anyway...

Thanks,
Martin



Charlie Poole

Nov 24, 2011, 11:52:15 PM11/24/11
to nunit-...@googlegroups.com
Hi,

On Thu, Nov 24, 2011 at 4:59 PM, <goo...@my2cents.co.uk> wrote:
> This is something that we've had to do a lot of, as John Brett has said, the
> best way is to find the trigger that will tell you that the processing has
> occurred and either hook into it or poll it.
>
> From a tester point of view, we liaised with the development team quite
> closely to speed up this timeframes by supplying tools or configurable
> parameters to change the delays.  For instance, I'm assuming there is some
> kind of scheduled task happening that forces the order through, you could
> look at getting the developers to tell you want the scheduled task is and
> get them to allowi you to remotely fire it off as needed (Just had to do
> this exact thing with a windows service today in fact!).
>
> Be careful when scoping your testing though.  Ask yourself what exactly it
> is that you're testing and make sure you know where your test ends and
> another test starts.  What I mean by this is, are you testing that the
> scheduler works ok, or just that the script that it uses works ok (if the
> latter, then just fire the script off from within the test itself). It's
> something that I've seen our testers get messed up in a few times and it
> causes a lot of headaches.

Very true.

> Failing that, you could perform all the actions for the tests as tests, and
> add the Asserts into the TearDownFixture which would delay until a timespan
> after the test start time (look at the documentation, but it's run once for
> the test run per namespace iirc).  This is an awful workaround as you
> wouldn't receive any kind of decent feedback in the UI or in the tests, and
> the first one that fails would cause the rest to not be tested.

Yes, and also it seems like you would be creating most of the infrastructure
that either exists or belongs in (and could be added to) NUnit.

> Something of a feature request I can see here though.  Something that would
> allow for "Delayed assertations" where you can define a list of assertations
> to occur at a defined delayed time interval.  Seems like a fairly large
> feature for a small use case though.

Plus, you can already do it! If the list is

Assert1 at time T1
Assert2 at time T2
...
AssertN at time Tn

just write asserts like...

Assert.That(.... .After(T1));
Assert.That(.... .After(T2-T1));
...
Assert.That(.... .After(Tn-Tn-1));
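Concretely, with NUnit 2.5's delayed constraints that looks something like the sketch below (`FireAsyncProcess` and `JobStatus` are hypothetical stand-ins for your system; the delegate form matters, so the actual value is re-read while the constraint polls):

```csharp
using NUnit.Framework;

[TestFixture]
public class DelayedAssertExample
{
    [Test]
    public void JobCompletes()
    {
        FireAsyncProcess();  // hypothetical: kicks off the long-running job

        // Retry the constraint for up to 10 minutes, polling every 10 seconds.
        Assert.That(() => JobStatus(),
            Is.EqualTo("done").After(600000, 10000));
    }

    // Stubs standing in for whatever your product provides.
    private void FireAsyncProcess() { }
    private string JobStatus() { return "done"; }
}
```

The assert still blocks the current test until it resolves, which is Charlie's point: the delays get consumed sequentially rather than overlapped.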

Charlie

goo...@my2cents.co.uk

Nov 26, 2011, 10:04:14 AM11/26/11
to nunit-...@googlegroups.com
Hi Charlie,

I think we might be talking about different things with the delayed asserts.

I'm talking about something that would push the asserts onto their own thread so the rest of the tests could continue executing as normal. Are you saying I can do something like...

[Test]
public void Test1()
{
   int x = 1;
   int y = 1;
   FireAsyncProcess(x, y);
   Assert.That(x == y); // => do at T1 (could be a timespan or a datetime)
}

[Test]
public void Test2()
{
   int x = 1;
   int z = 1;
   FireAsyncProcess(x, z);
   Assert.That(x == z); // => do at T2 (could be a timespan or a datetime)
}

What I would want to happen is that the Asserts are performed at T1 and T2 respectively, but the actions for both tests are run immediately (well, consecutively, but you get the picture). The UI would then update via some kind of callback to show that it was waiting to do asserts.

Charlie Poole

Nov 26, 2011, 11:28:26 AM11/26/11
to nunit-...@googlegroups.com
Hi Martin,

Essentially, it's the same thing. Your test T1, with its delayed assert,
would be continuing internally, because that assert would not return
until it had a result to report. Currently, NUnit can only deal with one
test at a time.

Let's say that we reported the test as provisionally successful, with its
assert code scheduled to run after some time. If, at a later time, the assert
failed - or even succeeded with some special message - NUnit would
have to do something with the new information. That info has most likely
been reflected in the GUI and possibly written to a file. Replacing the
old info with the new could become arbitrarily complex, depending on
what the test had done.

Further, the test may have gone on to issue another, subsequent assert.
That assert could have succeeded, failed or thrown an error. So we would
need to remove the pending assert from our schedule. Race conditions
could easily arise.

So, considering that, it seems that allowing tests to run on parallel
threads, while keeping the Asserts sequential, is actually a bit simpler
(although not simple) than what you are suggesting. That's the way we
are going in 3.0.

Charlie

goo...@my2cents.co.uk

Nov 26, 2011, 1:15:48 PM11/26/11
to nunit-...@googlegroups.com
Sounds very interesting; however, that means actions can and would be run concurrently, which may cause problems in the tests that we run. The ability to control asserts and actions separately would have been more useful (in our use case; not sure about others).

Parallelisation of tests is something I've been looking at a lot recently, as our test run is around 9 hours in duration at the moment (2500-ish tests in total), but the actual running of a lot of them needs to be sequential, since a collision of configuration mid-test would cause a lot to fail instantly. The asserts, on the other hand, are pure DB lookups only, so being able to run all the actions first, then all the asserts for them, would have been beneficial.
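That two-phase idea (all actions up front, checks evaluated later) could be sketched outside the framework; everything here is hypothetical helper code, not an NUnit feature:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical two-phase runner: fire every action up front, record one
// check per action together with the earliest time it should pass, then
// evaluate the checks in due-time order.
class DeferredChecks
{
    private readonly List<(DateTime Due, Action Check)> _pending =
        new List<(DateTime, Action)>();

    // Schedule `check` to run no earlier than `delay` from now.
    public void After(TimeSpan delay, Action check)
    {
        _pending.Add((DateTime.UtcNow + delay, check));
    }

    // Sleep only until each check's due time, so total wall-clock cost is
    // roughly the longest single delay, not the sum of all of them.
    public void RunAll()
    {
        _pending.Sort((a, b) => a.Due.CompareTo(b.Due));
        foreach (var (due, check) in _pending)
        {
            TimeSpan wait = due - DateTime.UtcNow;
            if (wait > TimeSpan.Zero)
                Thread.Sleep(wait);
            check();  // throws on failure, like an Assert would
        }
    }
}
```

Each test would fire its action and register its DB-lookup check via `After(...)` instead of asserting inline; one `RunAll()` at the end evaluates everything, so a dozen ten-minute jobs cost about ten minutes of waiting rather than two hours.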

Anyway, can't wait to try out v3, but I'll wait until it's finished.

Thanks,
Martin