On Tue, 28 Jan 2003, Mauricio Fernández wrote:
> I'm new to Unit Testing and wanted to learn about it by writing the tests for a
> cheap file-transfer protocol. There's some hand-shaking involved at the
> beginning which _needs_ to be done before anything else. The problem is I'd
> like to test that too, but I don't know if I can rely on any order in the
> execution of the tests. OTOH, I don't know if it's OK to do everything in
> #setup and do the assertions right there.
i had a similar problem. naming your tests in a manner which sorts lexically
seems to work, e.g.:
  def test_a_whatever
    ...
  end

  def test_b_whatever
    ...
  end
-a
--
====================================
| Ara Howard
| NOAA Forecast Systems Laboratory
| Information and Technology Services
| Data Systems Group
| R/FST 325 Broadway
| Boulder, CO 80305-3328
| Email: aho...@fsl.noaa.gov
| Phone: 303-497-7238
| Fax: 303-497-7259
====================================
ahoward wrote:
> On Tue, 28 Jan 2003, Mauricio Fernández wrote:
> > I'm new to Unit Testing and wanted to learn about it by writing the
> > tests for a cheap file-transfer protocol. [...]

IMHO, it would be nice if they were run in the order they were defined.

-mike

Michael Garriss wrote:
> IMHO, it would be nice if they were run in the order they were defined.

agreed.
--
Not arguing that it would be nice, since it is possible to scan the
"dot-line" and know where to go looking, but please do _not_ rely on
test run order for anything serious. It is a bad idea and I think
hardcore unittesters would run over you with a TestRunner if you do.
(Been there, done that, red CVS tree all my fault, but I could at least
blame it on C++ :-P)
Anything that needs to be initialized or cleaned up should be in setup or
teardown. If not, you're not doing "proper" unittesting, but something
more akin to acceptance or integration testing, IMHO.
To the OP, Mauricio Fernández, who wrote:
> There's some hand-shaking involved
> at the beginning which _needs_ to be done before anything else. The
> problem is I'd like to test that too,
Test the handshaking separately in its own test. Whatever you do, do not
use the test that does the handshaking to set anything up that the other
tests would need.
> but I don't know if I can rely on
> any order in the execution of the tests. OTOH, I don't know if it's OK
> to do everything in #setup and do the assertions right there.
Bad idea. Setup shouldn't contain assertions meant to unittest
something. Anything failing in setup is an "Error", not a "Failure", if
I remember my unittesting parlance correctly.
I would suggest you split it into two test cases: one that tests the
handshaking exclusively, and another test case where setup does the
handshaking (but does not test it) for the convenience of the other parts
you want to test, and simply assumes that handshaking works (you have
another unit test that should catch it if it doesn't).
</unittesting_preaching_mode>
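To illustrate the split concretely: one test case for the handshake
itself, another whose setup performs it. A minimal self-contained sketch
(FakeClient is hypothetical and stands in for the OP's protocol code):

  require 'test/unit'

  # Hypothetical stand-in for the OP's file-transfer client.
  class FakeClient
    def handshake
      @ready = true
    end

    def ready?
      @ready == true
    end

    def send_file(name)
      raise "handshake required" unless @ready
      true
    end
  end

  # Test case 1: exercises the handshake itself.
  class TC_Handshake < Test::Unit::TestCase
    def test_handshake
      client = FakeClient.new
      client.handshake
      assert(client.ready?, "client should be ready after handshake")
    end
  end

  # Test case 2: setup performs (but does not assert on) the handshake,
  # so every test here can simply assume a ready connection.
  class TC_Transfer < Test::Unit::TestCase
    def setup
      @client = FakeClient.new
      @client.handshake
    end

    def test_send_file
      assert(@client.send_file('hello.txt'))
    end
  end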
--
(\[ Kent Dahl ]/)_ _~_ __[ http://www.stud.ntnu.no/~kentda/ ]___/~
))\_student_/(( \__d L b__/ NTNU - graduate engineering - 5. year )
( \__\_õ|õ_/__/ ) _)Industrial economics and technological management(
\____/_ö_\____/ (____engineering.discipline_=_Computer::Technology___)
On Tue, 28 Jan 2003, Chad Fowler wrote:
> I humbly disagree with this. If the order of the tests is guaranteed, it
> would promote temporal coupling between the test cases. It's better to let
> each "test..." method stand alone without dependencies on the other methods.
this may be true _most_ of the time, but there are cases where it's too
inconvenient to decouple tests from their order. for example, i have a test
suite for a singleton class which manages a bitemporal database stored in
postgresql. the api performs certain operations and expects the database to
be in a certain state after each one. the only way to decouple this
particular test from time would be to tear down, and re-import, the database
in the 'setup' method and then perform tests 0 through (n - 1) for each test n.
currently my suite is 2000 lines long and takes 4 minutes to run; if i
followed your approach this suite would balloon to around 10000 lines and take
a _very_ long time to run. this simply is not practical in all cases.
-a
--
I disagree with this. I have a test that checks for a specific bug in
our software. The bug caused the software to segfault. I prefer to run
this test after all my other tests (since otherwise some of the tests
may not get run at all in the nightly build).
Paul
require 'dbi'

module Tiamat  # namespaced so the tests' Tiamat::DB_User reference resolves
  class DB_User
    def initialize( dsn, user, pass )
      @dbh = DBI::connect( dsn, user, pass )
    end

    def create_user_table
      @dbh.execute "CREATE TABLE my_user (
                      id       BIGINT      NOT NULL PRIMARY KEY,
                      username VARCHAR(24) NOT NULL,
                      password VARCHAR(24) NOT NULL,
                      email    VARCHAR(64) )"
    end

    def drop_user_table
      @dbh.execute( "DROP TABLE my_user" )
    end
  end
end
Let's say I want to test create_user_table and drop_user_table separately.
A possible solution:
require 'test/unit'
require 'db_user'

class TC_DB_User < Test::Unit::TestCase
  DSN  = 'dbi:Pg:tiamat'
  USER = 'postgres'
  PASS = nil

  def test_connection
    begin
      @tdb = Tiamat::DB_User.new( DSN, USER, PASS )
    rescue DBI::Error
      assert( false, 'Error getting connection' )
    end
  end

  def test_create_table
    test_connection # !!!!!!
    begin
      @tdb.create_user_table
    rescue DBI::ProgrammingError
      assert( false, 'Error creating user table' )
    end
  end

  def test_drop_table
    test_connection   # !!!!!!
    test_create_table # !!!!!!
    begin
      @tdb.drop_user_table
    rescue DBI::ProgrammingError
      assert( false, 'Error dropping user table' )
    end
  end
end
The tests that depend on other tests are just run from the tests that
need them. Is this bad practice??
-Mike
The way I had written it, I couldn't easily separate the handshake from
the rest. I guess this reflects a number of things:
* that I should have written the test before coding (instead of at the
same time)
* that Unit Testing really helps me think about the interface better than
some simple UML and stuff (which I actually took the time to make :)
* that I still have a lot to learn about Unit Testing
> > but I don't know if I can rely on
> > any order in the execution of the tests. OTOH, I don't know if it's OK
> > to do everything in #setup and do the assertions right there.
>
> Bad idea. Setup shouldn't contain assertions meant to unittest
> something. Anything failing in setup is an "Error", not a "Failure", if
> I remember my unittesting parlance correct.
>
> I would suggest you split in two testcases, one that tests the
> handshaking exclusively and another testcase where setup does the
> handshaking (but does not test it) for the conveniece of the other parts
> you want to test, and just plain assumes that handshaking works (you
> have another unittest that should catch it if it doesn't).
What about tests that rely on some state of the (in this case) protocol?
Can I chain them as suggested in another post like
  def test_dosomeopt
    ...
  end

  def test_dosomething
    test_dosomeopt
    ...
  end
?
Is there a good reference on this for newcomers to Unit Testing? If
possible not too focused on XP, as those tend to get quite abstract, and
anyway there are a couple of things about XP I just cannot do, like pair
programming...
--
_ _
| |__ __ _| |_ ___ _ __ ___ __ _ _ __
| '_ \ / _` | __/ __| '_ ` _ \ / _` | '_ \
| |_) | (_| | |_\__ \ | | | | | (_| | | | |
|_.__/ \__,_|\__|___/_| |_| |_|\__,_|_| |_|
Running Debian GNU/Linux Sid (unstable)
batsman dot geo at yahoo dot com
'Ooohh.. "FreeBSD is faster over loopback, when compared to Linux
over the wire". Film at 11.'
-- Linus Torvalds
> On Tue, 28 Jan 2003, Chad Fowler wrote:
>
>> I humbly disagree with this. If the order of the tests is
>> guaranteed, it would promote temporal coupling between the test
>> cases. It's better to let each "test..." method stand alone
>> without dependencies on the other methods.
>
> this may be true _most_ of the time, but there are cases where it's
> too inconvenient to decouple tests from their order, for example, i
> have a test suite for [description of interrelated tests snipped]
[...]
The point of the TestCase class is to make it easy to set up a series
of different tests that all need the same initial test environment.
So in this case, what you're describing is a test case with a single
test that starts with a blank database.
To implement this, I would suggest renaming all your test_XXX methods
to subtest_XXX and have a single test_all method that calls the
subtest_XXX methods in the proper order. Have your setup method
create the database and your teardown method delete it.
This is better for many reasons:
- you can no longer run your tests out of order, since Test::Unit
only allows you to run individual test_ methods, not subtest_
methods.
- the order of your sub-tests is explicit and understandable (not
relying on non-obvious magical properties like the sort order of
the method names or the declaration order within the class).
- fits better into the intent of the TestCase class -- one that
sets up a consistent environment in which every test_ method is
run.
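In code, that suggestion might look like this minimal, self-contained
sketch (an Array stands in for the database; all names are illustrative):

  require 'test/unit'

  class TC_OrderedSubtests < Test::Unit::TestCase
    def setup
      @db = []        # stand-in for "create the database"
    end

    def teardown
      @db = nil       # stand-in for "delete the database"
    end

    # The single test_ method; it runs the sub-tests in an explicit order.
    def test_all
      subtest_insert
      subtest_update
      subtest_delete
    end

    def subtest_insert
      @db << { 'id' => 1, 'name' => 'a' }
      assert_equal(1, @db.size)
    end

    def subtest_update
      @db.first['name'] = 'b'
      assert_equal('b', @db.first['name'])
    end

    def subtest_delete
      @db.clear
      assert(@db.empty?, "table should be empty after delete")
    end
  end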
Then you have coupling between tests, which is generally considered "bad".
=====
--
Yahoo IM: michael_s_campbell
I'm no UT expert, but there *must* be another way to achieve this
without relying on order of tests.
If you rely on it, it's better style to make that explicit rather
than implicit, IMO.
Gavin
This is a *great* solution for test cases that must be run in a certain
order.
The problem that I run into is when I have some test cases that test
simple building-block methods, and another big test case that tests a method
that calls those building-block methods. If a bug pops up in one of the
building-block methods, it would be nice to have the building-block's test
case catch it rather than the big test case, since the big test case may
fail in an obscure way if the building-blocks it depends on are
untrustworthy. The tests *can* be run in any order; it's just more helpful
to have them run in the given order.
My suggestion is to have the test cases execute in the order they are
encountered, but add a 'randomize' capability that can be specified from the
command line to run the test cases in random order. I can see no benefit in
having the test cases execute in alphabetical order, but if there is a
benefit, that too could be a command line option.
- Warren Brown
> My suggestion is to have the test cases execute in the order they are
> encountered, but add a 'randomize' capability that can be specified from the
> command line to run the test cases in random order. I can see no benefit in
> having the test cases execute in alphabetical order, but if there is a
> benefit, that too could be a command line option.
Just a thought. JUnit allows specification of test order. Does
Test::Unit allow this? Not a bad idea...
Gavin
> Just a thought. JUnit allows specification of test order.
> Does Test::Unit allow this? Not a bad idea...
From the TODO:
* Allow the selection of multiple test orderings.
To address the issue more broadly, my feeling is that Test::Unit should
support certain test orderings without letting anybody use them. Of
course, that's a contradiction, so let me explain...
I hate the fact that test ordering is ever important. I hope that people
only ever use different test orderings in order to identify test
interdependency as a step to removing it. But I recognize that there are
various reasons (a few of them even pragmatic ;-) that will cause
ordering to matter for some tests. My bow to this need at present is to
run the tests in alphabetical order, which makes running tests in a
specific order possible, though it feels hacky. I think I like that it
feels hacky, and will probably leave it feeling hacky no matter what I
do regarding test ordering in the future. While I believe in letting
programmers shoot themselves in the foot if that's really what they
want to do, I don't feel the need to aim the gun for them ;-)
BTW, I think the number of ideas shared in this thread was great. The
level of testing interest and expertise in the Ruby
community is very, very encouraging. Keep it up!
Nathaniel
<:((><
+ - -
| RoleModel Software, Inc.
| EQUIP VI
> > To implement this, I would suggest renaming all your test_XXX methods
> > to subtest_XXX and have a single test_all method that calls the
> > subtest_XXX methods in the proper order. Have your setup method
> > create the database and your teardown method delete it.
>
> This is a *great* solution for test cases that must be
> run in a certain order.
Amen!
> The problem that I run into is when I have some test
> cases that test simple building-block methods, and another
> big test case that tests a method that calls those
> building-block methods. If a bug pops up in one of the
> building-block methods, it would be nice to have the
> building-block's test case catch it rather than the big test
> case, since the big test case may fail in an obscure way if
> the building-blocks it depends on are untrustworthy. The
> tests *can* be run in any order, it's just more helpful to
> have them run in the given order.
>
> My suggestion is to have the test cases execute in the
> order they are encountered, but add a 'randomize' capability
> that can be specified from the command line to run the test
> cases in random order. I can see no benefit in having the
> test cases execute in alphabetical order, but if there is a
> benefit, that too could be a command line option.
The problem is that there is no good way to run the tests in the order
defined. IIRC, Ruby doesn't make any guarantees about the ordering of
methods returned from calls like Module#public_instance_methods, so for
consistency's sake it's much better to run methods in alphabetical
order. While it's not good for the code (and tests) to depend on test
ordering, it's still good to have tests run in a consistent, predictable
order so that we programmers know what to expect. I hope the distinction
between those two things is at least as clear as muddy water :-/.
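To illustrate (a toy, not Test::Unit's actual code): reflection order may
vary, but sorting the gathered names makes the run order stable:

  class MiniCase
    def test_b_second; puts 'second'; end
    def test_a_first;  puts 'first';  end
  end

  # public_instance_methods makes no ordering promise, so sort the names
  # to get the same, predictable run order on every Ruby implementation.
  names = MiniCase.public_instance_methods(false).
            map { |m| m.to_s }.grep(/^test_/).sort

  runner = MiniCase.new
  names.each { |name| runner.send(name) }
  # prints: first
  #         second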
Anyhow, on running tests in (pseudo-)random order, that's one of the
test orderings I'd like to implement in the future. The only problem
with it is, if things fail when you're running them randomly, you'll
want to know what order they were run in that particular time so you can
fix the problem. Otherwise it might be more frustrating than helpful.
> Anyhow, on running tests in (pseudo-)random order, that's one of the
> test orderings I'd like to implement in the future. The only problem
> with it is, if things fail when you're running them randomly, you'll
> want to know what order they were run in that particular time so you can
> fix the problem. Otherwise it might be more frustrating than helpful.
Emit the random number seed so that it is repeatable, and give a
verbose option (may be there already?) that emits the method names as
they are tested. That oughta do it.
Gavin
> The problem is that there is no good way to run the tests in the
> order defined.
class TestCase
  class << self
    def method_added(symbol)
      # how would matz do this Perlish test?
      if symbol.id2name =~ /^test_/
        @test_methods ||= []
        @test_methods << symbol
      end
    end
    attr_reader :test_methods
  end
end

class MyTestCase < TestCase
  def test_zzz; end
  def test_mmm; end
  def test_aaa; end
end

class MyTestCase2 < TestCase
  def test_zzz2; end
  def test_mmm2; end
  def test_aaa2; end
end

p MyTestCase.test_methods
#=> [:test_zzz, :test_mmm, :test_aaa]

p MyTestCase2.test_methods
#=> [:test_zzz2, :test_mmm2, :test_aaa2]
> Anyhow, on running tests in (pseudo-)random order, that's one of the
> test orderings I'd like to implement in the future. The only problem
> with it is, if things fail when you're running them randomly, you'll
> want to know what order they were run in that particular time so you
> can fix the problem. Otherwise it might be more frustrating than
> helpful.
I can think of 2 solutions here:
- Test::Unit supports saving the order of the tests it runs in a
text file, and can parse that file and run the tests in the same
order.
- Test::Unit includes its own random number generator, spits
out the random seed it is using, and can support seeding its RNG
to a particular number. (you can't use Ruby's RNG since tests
themselves might use it)
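A sketch of the second option (illustrative, not Test::Unit code): a tiny
self-contained generator that emits its seed and leaves Kernel#rand alone.
The constants are borrowed from the classic C rand() example; any fixed
constants would do for shuffling:

  class SeededShuffler
    attr_reader :seed

    def initialize(seed = nil)
      @seed  = seed || Time.now.to_i
      @state = @seed
    end

    # Simple linear congruential generator, independent of Ruby's RNG.
    def next_value
      @state = (@state * 1_103_515_245 + 12_345) % 2_147_483_648
    end

    def shuffle(tests)
      tests.sort_by { next_value }
    end
  end

  shuffler = SeededShuffler.new
  puts "Random test order, seed = #{shuffler.seed}"  # emit for reproducibility
  p shuffler.shuffle(%w(test_a test_b test_c))

  # To reproduce a failing order later:
  #   SeededShuffler.new(reported_seed).shuffle(same_tests)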
> "Nathaniel Talbott" <nath...@talbott.ws> writes:
>
>> The problem is that there is no good way to run the tests in the
>> order defined.
>
> class TestCase
>   class << self
>     def method_added(symbol)
>       # how would matz do this Perlish test?
>       if symbol.id2name =~ /^test_/
Matz called "perlish" this
if /^test_/
There's nothing "perlish" in your test. As far as I understand, Matz is
only against operations on implied operands. Sorry for an off-topic
remark.
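For reference, the implicit form being alluded to: a bare regexp in a
condition matches $_, while the posted code matched an explicit operand:

  $_ = 'test_foo'
  puts 'matched' if /^test_/                 # implicit: matches against $_
  puts 'matched' if 'test_foo' =~ /^test_/   # explicit operand, as in the post

(Ruby may even warn about the bare regexp in a condition, which is part of
why the implicit style is discouraged.)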
On Tue, Jan 28, 2003 at 10:56:02AM +0900, Nathaniel Talbott wrote:
> I hate the fact that test ordering is ever important. I hope that people
> only ever use different test orderings in order to identify test
> interdependency as a step to removing it. But I recognize that there are
> various reasons (a few of them even pragmatic ;-) that will cause
> ordering to matter for some tests.
>...snip

It occurs to me that you might want to support unordered tests by
providing a "random" test sequence method.
--
Alan Chen
Digikata Computing
http://digikata.com
> > Anyhow, on running tests in (pseudo-)random order, that's one of the
> > test orderings I'd like to implement in the future. The only problem
> > with it is, if things fail when you're running them randomly, you'll
> > want to know what order they were run in that particular time so you
> > can fix the problem. Otherwise it might be more frustrating than
> > helpful.
>
> Emit the random number seed so that it is repeatable,
Excellent idea. As Matt says, I'd have to insulate myself from the
standard random number generator, but that doesn't seem too hard to do.
> and
> give a verbose option (may be there already?) that emits the
> method names as they are tested. That oughta do it.
The Console::TestRunner in CVS supports several new output levels,
including one that outputs every test run, so we're covered there.
That's pretty slick! I had a feeling as soon as I said it wouldn't work
that someone would propose a solution. There might be an issue with
doing this, though - after a little IRB'ing here, it appears that
including a module will not trigger #method_added for each method of the
module. But perhaps there's a way around that, too :-)
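A quick self-contained demonstration of that behavior (toy names, not
Test::Unit code):

  module Extras
    def test_from_module; end
  end

  class Watcher
    def self.method_added(name)
      puts "added: #{name}"
    end

    include Extras        # prints nothing: including a module does not
                          # trigger method_added for the module's methods
    def test_direct; end  # prints "added: test_direct"
  end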
Also, and it could just be me being over-cautious, but adding test
methods by trapping their definition seems to be more vulnerable to
problems than waiting until it's time to run them and _then_ gathering
them from where I expect them to be. I don't have to worry about missing
anything, as Ruby is collecting it all for me. But I'm a worrywart.
> > Anyhow, on running tests in (pseudo-)random order, that's one of the
> > test orderings I'd like to implement in the future. The only problem
> > with it is, if things fail when you're running them randomly, you'll
> > want to know what order they were run in that particular time so you
> > can fix the problem. Otherwise it might be more frustrating than
> > helpful.
>
> I can think of 2 solutions here:
>
> - Test::Unit supports saving the order of the tests it runs in a
> text file, and can parse that file and run the tests in the same
> order.
> - Test::Unit includes its own random number generator, spits
> out the random seed it is using, and can support seeding its RNG
> to a particular number. (you can't use Ruby's RNG since tests
> themselves might use it)
I think I like option #2 best, as it appears at first blush to be
simplest to provide.
Finally, as there seems to be so much interest in this, I'll mentally
bump it up a few levels in priority.
Thanks to everyone for their excellent input,
[... code elided ...]
I might try something like this ...
-- START CODE ----------------------------------------------------------
class TC_DB_User < Test::Unit::TestCase
  DSN  = 'dbi:Pg:tiamat'
  USER = 'postgres'
  PASS = nil

  def setup
    @tdb = Tiamat::DB_User.new(DSN, USER, PASS)
  end

  def test_create
    assert ! @tdb.nil?, "Should not be null"
  end

  def test_create_drop
    check_create_table
    check_drop_table
  end

  def check_create_table
    @tdb.create_user_table
    assert table_is_created, "table should exist"
  end

  def check_drop_table
    @tdb.drop_user_table
    assert ! table_is_created, "table should not exist"
  end

  def table_is_created
    # True/False test that actually checks if the table is created.
  end
end
-- END CODE ----------------------------------------------------------
A couple of points...
* It bothers me to see assertions in conditionally executed code (rescue
clauses in this case). I like to see the assertions executed every
time. Of course, that means "assert false" is not a good idea. :-)
* test_create is usually the first test I write, just to get the object
created. Initial condition tests can be added here. If the
test_create stays this simple, it will usually get removed as the test
matures.
* The create and drop are tested in separate methods, but run under the
same test method. This assures that they are run in order. Since
any method named "test_*" will be executed as a separate test, the
original version of this would execute the following sequence of
events ...
connect Tiamat
connect Tiamat
create user table
connect Tiamat
connect Tiamat
create user table
drop user table
I'm guessing the two create operations are not what was desired.
* When using assert (rather than assert_equal), it really helps to add
the string since it gets printed out if the assertion fails. I used
to write asserts like this ...
assert x.nil?, "X is nil"
which read great in the code. It reads like a comment that says "I am
asserting that X is nil". But when the assertion fails, the error
message displayed would be something like ...
ERROR: X is nil: ... yada, yada, yada
which is misleading, because the assertion failed precisely because X was NOT nil.
Changing the message to read the other way ...
assert x.nil?, "X is not nil"
... made the code look funny. I finally discovered that including the
word "should" make it read fine from both perspectives ...
assert x.nil?, "X should be nil"
ERROR: X should be nil: ... yada, yada, yada
Ok, that last point has way too much verbiage for such a simple idea. I
think I'll stop while I'm ahead.
--
-- Jim Weirich jwei...@one.net http://w3.one.net/~jweirich
---------------------------------------------------------------------
"Beware of bugs in the above code; I have only proved it correct,
not tried it." -- Donald Knuth (in a memo to Peter van Emde Boas)
> Matt Armstrong [mailto:ma...@lickey.com] wrote:
>
>> > The problem is that there is no good way to run the tests in the
>> > order defined.
>>
>> class TestCase
>>   class << self
>>     def method_added(symbol)
[...]
>>     end
>>     attr_reader :test_methods
>>   end
>> end
[...]
> That's pretty slick! I had a feeling as soon as I said it wouldn't
> work that someone would propose a solution. There might be an issue
> with doing this, though - after a little IRB'ing here, it appears
> that including a module will not trigger #method_added for each
> method of the module. But perhaps there's a way around that, too :-)
Ooh, yeah, yuck. The included module won't have a list of methods in
declaration order anyway. You'd have to add a method_added hook on Object
or Module and track all classes just in case they are included in a
TestCase -- not pretty.
> Also, and it could just be me being over-cautious, but adding test
> methods by trapping their definition seems to be more vulnerable to
> problems than waiting until it's time to run them and _then_
> gathering them from where I expect them to be. I don't have to worry
> about missing anything, as Ruby is collecting it all for me. But I'm
> a worrywart.
Yes, it requires Test::Unit to become more involved with Ruby's
internal guts (method_added, etc.) than makes me comfortable too.
Alphabetical order is simple to define, easily verifiable, and fits in
most people's heads.
On Tue, Jan 28, 2003 at 02:38:58PM +0900, Alan Chen wrote:
> It occurs to me that you might want to support unordered tests by providing a
> "random" test sequence method.
--
I'll mention right off the bat that I'm a Ruby newbie, but based on an
example I saw in the Pickaxe book, it seems like you could simply put a hook
in Module that would attach an (autoincremented) 'testcasenumber' attribute
to any method whose name =~ /^test_/ when that method is defined. Then
collect all of the test cases when you are ready to run them (like you
currently do), but sort on the 'testcasenumber' attribute before running
them. Any methods not involved in testing would (theoretically) not be
hindered by the additional attribute.
- Warren Brown
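A rough sketch of what Warren describes might look like this (illustrative
names only; the per-method 'testcasenumber' attribute is approximated by a
counter recorded from a method_added hook):

  class OrderedTestCase
    def self.inherited(subclass)
      subclass.instance_variable_set(:@test_order, {})
    end

    def self.method_added(name)
      # Record an autoincremented number for each test_ method as defined.
      if name.to_s =~ /^test_/
        @test_order[name.to_s] = @test_order.size
      end
    end

    def self.tests_in_definition_order
      @test_order.keys.sort { |a, b| @test_order[a] <=> @test_order[b] }
    end
  end

  class MyOrderedTests < OrderedTestCase
    def test_zzz; end
    def test_aaa; end
  end

  p MyOrderedTests.tests_in_definition_order
  #=> ["test_zzz", "test_aaa"]

Non-test methods carry no extra state at all here; the hook simply ignores
them, which matches Warren's "not hindered" requirement.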