Thank you all for your time,
Jesse F. Weinstein
At the end of the day, I have never had any trouble with the standard
approach taught in many software engineering courses: determine the
requirements (which may require some rapid prototyping - not all problems
can be well defined up front! :-)), lay down a proper decomposition of the
problem (i.e. design), and continue the design step until code modules are
easily identified. Then, and only then, write the code. If you follow these
simple steps, you can determine (some of) the unit test criteria before you
write your code, because you have already designed which modules are going
to be required! Of course, you won't determine all the unit test criteria
(only those testable at the routine's interface) until you write the code -
since unit testing, by definition, involves testing all of the paths through
the code :-)
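For illustration, here is a tiny (and purely hypothetical - parse_config is
an invented name, not from any real library) sketch of a test derived from
the design before the code exists; it fails until the module body is
written, which is exactly the point:

import unittest

def parse_config(text):
    # Stub for a module the design identified: parses "key=value" text
    # into a dictionary. The body gets written after the design step.
    raise NotImplementedError

class ParseConfigTest(unittest.TestCase):
    def testSimplePair(self):
        self.assertEqual(parse_config("key=value"), {"key": "value"})

if __name__ == '__main__':
    unittest.main()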
I have followed this regime for 20+ years of programming now and have never
delivered a flop to the customer. The only time I get into a "bind" is when
I am writing something at a personal level and decide I know enough to go
straight to code - then I end up in the same binds myself. But stepping back
and doing it properly has always solved the issue (and you "relearn" the
value of taking the upfront time to do it properly! :-)).
As for your existing "binds": step back from the code and attempt to
diagram/document the structure of your implementation attempt. Hopefully,
this will show you where your problems lie (usually the pieces just don't
fit together because you didn't decompose them from the top - they grew from
the bottom as individual entities with little or no chance of meeting up
into a harmonious whole). Producing unit tests for pieces of code that
don't fit together will not solve the problem. If you still can't see the
"woods for the trees", then at least you will have something you can show
a friend (who hopefully has enough experience to help you out).
Good luck,
Peter
"Jesse W" <je...@loop.com> wrote in message
news:mailman.998013446...@python.org...
Which is why, ideally, you have the customer as part of the
development team -- like all XP practices, this is mutually
reinforcing with the others. The customer's role in the team
is first of all to define the "stories" that the program will have
to meet -- the acceptance tests. This can't be done ahead of
time -- which is why the customer must ideally be _part_ of
the team, refining and changing the stories as he or she gains
a better understanding of what, exactly, are the business needs.
(Also, as the developers make and refine estimates of how much
work it will be to meet the requirements of each story, the
customer must make the business decision of _prioritizing_,
iteration by iteration -- only the customer can weigh the real
business relevance of various objectives versus the developers'
updated estimates of their costs).
Part of the motivation for XP and other lightweight, adaptive,
iterative approaches is exactly that the classic "waterfall"
model, where it's assumed that all the requirements are to
be determined before design, and all design accomplished
before coding, etc., is such a bad match for the reality of most
software development projects. But if developers are in a
vacuum, with no real feeling for what business value the
system is supposed to deliver, and no rich continuing feedback
from the customer, on what basis would they adjust the
requirements iteratively? There HAS to be customer involvement,
one way or another. "Release early, release often" may be one
way, but you need highly responsive customers for that, too.
If you as the developer are going to be your own customer,
you can do it, as long as you manage to "switch hats" to the
different roles.
> and will this method make programming into a
> tiresome, irritating task, even if I can write these bigger programs?
On the contrary, it seems to me. Methodologies geared towards producing
lots of paperwork seem much more tiresome and irritating than ones where
the key deliverables are running code (including tests), particularly if
the heavyweight ones don't work:-).
> My question to you all is this: Do you have any suggestions on
> how to deal with learning to use unit testing in this way, or examples
> of setting up games with this kind of unit testing, or GUI programs?
Unit tests that are not automated have modest value. Fortunately,
the *units* (components) CAN be tested with test harnesses that
don't actually draw to the screen nor wait for joystick input (etc)
but rather simulate those activities to check the logic of the various
components, producing textual output that can be checked by the
usual means. The *acceptance* test (the customer's stories) may
be a different issue -- we've had some modest success with a
"capture and playback" approach, but that requires a "mock system"
be running to do your "capturing" in the first place, and there are
lots of uncertainties on how to keep the C&P test suites useful when
the GUI's layout and logic change.
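A minimal sketch of what such a harness can look like (all the names here -
FakeScreen, Player - are invented for illustration, not from any real
library): the fake screen records draw calls as text, so the game logic can
be checked without a real display or joystick.

import unittest

class FakeScreen:
    "Records draw calls as text instead of rendering to a display."
    def __init__(self):
        self.calls = []
    def draw_sprite(self, name, x, y):
        self.calls.append("%s at (%d, %d)" % (name, x, y))

class Player:
    "A trivial bit of game logic, decoupled from any real GUI."
    def __init__(self, x, y):
        self.x, self.y = x, y
    def move(self, dx, dy):
        self.x = self.x + dx
        self.y = self.y + dy
    def render(self, screen):
        screen.draw_sprite("player", self.x, self.y)

class PlayerTest(unittest.TestCase):
    def testMoveAndRender(self):
        screen = FakeScreen()
        p = Player(0, 0)
        p.move(3, 4)              # stands in for joystick input
        p.render(screen)
        self.assertEqual(screen.calls, ["player at (3, 4)"])

if __name__ == '__main__':
    unittest.main()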
Alex
> I hope, as I said above, that this method will allow me to finish
> various projects I have started and had to abandon because I got
> stuck in binds I could not specifically understand or find out how to
> resolve. My doubts lie with the thought that it is very difficult to
> figure out what the requirements of a program will be before you
> write the program, and will this method make programming into a
> tiresome, irritating task, even if I can write these bigger programs?
> My question to you all is this: Do you have any suggestions on
> how to deal with learning to use unit testing in this way, or examples
> of setting up games with this kind of unit testing, or GUI programs?
> Really, any thoughts or comments you might have about this form of
> unit testing, or how to learn to use it, would be very appreciated.
>
I've found little value in writing unittests for GUI apps (and that
includes games), but lots of use for unittests in the underlying libraries.
I use unittests even when writing examples for my PyQt book, and I've
included a chapter on unit testing, which Stephen Figgins rather liked,
even in the draft form
(http://www.onlamp.com/pub/a/python/2001/07/05/pythonnews.html).
--
Boudewijn | http://www.valdyas.org
>I've found little value in writing unittests for GUI apps (and that
>includes games), but lots of use for unittests in the underlying libraries.
One thing that I've not seen pointed out anywhere (it doesn't even seem to be
in the manual!) is that writing simple tests is pretty much trivial:
import unittest

class Tests(unittest.TestCase):
    def test1(self):
        self.assertEquals(1, 1)
    def test2(self):
        # This one fails, which also shows what failure output looks like.
        self.assertEquals(1, 2)

if __name__ == '__main__':
    unittest.main()
Basically, it looks like unittest.main() scans the code for classes derived
from unittest.TestCase, and runs all methods whose names start with 'test' as
test cases.
This makes writing a series of simple, repetitive tests very straightforward.
Obviously, for more complex cases, you need the more complex infrastructure,
but I have to say that *not* emphasising how simple it is to write basic
tests does put people off. (It certainly put me off!)
One point with this form of test: If I add a setUp() or a tearDown() method,
they are called before and after *each* test, rather than once at the start
and end of the suite of tests. But without this stuff being documented, I'm
not sure if this should be considered "correct"...
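A quick, throwaway sketch to see this behaviour for yourself:

import unittest

class Demo(unittest.TestCase):
    def setUp(self):
        print "setUp"
    def tearDown(self):
        print "tearDown"
    def test1(self):
        print "test1"
    def test2(self):
        print "test2"

if __name__ == '__main__':
    unittest.main()

Running it prints setUp/test1/tearDown, then setUp/test2/tearDown,
interleaved with the runner's own output - i.e. the pair brackets *each*
test, not the suite.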
Paul.
There is a great example, linked from pyunit.sourceforge.net:
http://diveintopython.org/roman_divein.html
I think that unit tests have their greatest value in refactoring,
and later when the code needs revision for new functionality.
Thanks,
Jeff Sandys
>You are quite correct (according to my version of the documentation -- and
>in my experience with using the tool); setUp and tearDown are called for
>each test so you always start and end from a known state. <wink>
I'd argue that if you want to do setup and teardown actions for each
individual test, you can code the actions in the test. But there's no way to
code a setup for a series of tests. For example, in a database application,
you may want to create a connection in setUp(), run a series of queries in
tests, and then close the connection in tearDown(). Creating and closing a
connection per test could well be an unacceptable overhead. To my way of
thinking, this is the common case (of setup/teardown actions), and should be
easy to do. Setup and teardown actions which have such a low overhead as to be
OK to do for each test would seem to be less common. (Indeed, that's how the
docs describe setUp and tearDown, as high-overhead actions, IIRC.)
Paul.
Well, the alternative argument could be that, if you don't need to setup and
teardown in between various tests, then they could be coded as subtests of a
single, larger test... OTOH, if you *do* have high setup/teardown overhead, which
*does* require being redone for each test, then coding it each time would be a
pain. Lumping subtests together is easier than multiple copies of
setup/teardown...
Jeff Shannon
Technician/Programmer
Credit International
>Well, the alternative argument could be that, if you don't need to setup and
>teardown in between various tests, then they could be coded as subtests of a
>single, larger test... OTOH, if you *do* have high setup/teardown overhead, which
>*does* require being redone for each test, then coding it each time would be a
>pain. Lumping subtests together is easier than multiple copies of
>setup/teardown...
I'm not sure I agree. The long and short of it is that both cases can happen.
But look at my example "for real" (sort of):
class testSimpleQueries(unittest.TestCase):
    def setUp(self):
        # May take seconds to execute...
        self.connection = DB.Connect(connect_str)

    def tearDown(self):
        self.connection.Disconnect()

    def exec_query(self, q):
        "Trivial helper to run a query"
        return self.connection.Execute(q)

    def testQ1(self):
        "Simple query"
        q = "select 1 from dual"
        self.assertEqual(1, self.exec_query(q))

    def testQ2(self):
        "Exception when no rows returned"
        q = "select 1 from dual where 1=0"
        self.assertRaises(DB.Error, self.exec_query, q)

    # And 100 more trivial queries...
You get the idea. The setup costs a *lot* in relative terms of time (each test
query takes, say, 0.01 second). You really want to only do that once. But you
don't really want to code a single huge test - it destroys the reporting of the
individual test docstrings by unittest.main(). What do you do? I don't say it's
not possible to code this - just that the documentation doesn't make it clear
*how*.
Remember, I'm arguing that the common cases should be simple to code - not that
it's not possible to do these things. That makes people more likely to code unit
tests.
So really, I'm suggesting two things - (1) beef up the documentation of
unittest.main(), and give it a section to itself ("How to code a simple series
of unit tests"), and (2) if necessary, beef up unittest.main() to cater for such
common cases (I don't necessarily think this is required).
Paul.
The setUp/tearDown mechanism can ensure that each test is independent of any
state left over from any other test. If you know your tests won't leave any
state behind, you could put your DB.Connect() call in
testSimpleQueries.__init__. (Don't forget to call unittest.TestCase.__init__
though.) Again, this assumes that none of the individual tests will change
the state of the connection. To close the connection, you could put the
self.connection.Disconnect() call in testSimpleQueries.__del__; but there's
no guarantee when (or even if?) this will be called.
It might be useful for unittest.py to support a
"globalSetUp"/"globalTearDown" mechanism. globalSetUp would be called once
at the initialization of the TestCase, before any tests are run. Unlike
overriding __init__, there would be no need for globalSetUp to call the
overridden method in the superclass. globalTearDown would be called after
all the tests have been run. The timing of the call to globalTearDown would
be well known, unlike __del__.
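In the meantime, one way to approximate a globalSetUp with the current API
(a sketch only, reusing the invented DB/connect_str names from the example
above) is to cache the expensive resource in a class attribute, which all
the per-test instances share:

import unittest

class testSimpleQueries(unittest.TestCase):
    shared_connection = None   # class attribute: one per class, not per instance

    def setUp(self):
        # Create the expensive connection only on the first test;
        # every later test reuses it via the class attribute.
        if testSimpleQueries.shared_connection is None:
            testSimpleQueries.shared_connection = DB.Connect(connect_str)
        self.connection = testSimpleQueries.shared_connection

Closing the connection cleanly at the end is still awkward, though -
exactly the gap a globalTearDown would fill.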
> I don't say it's not possible to code this - just that the documentation
> doesn't make it clear *how*.
>
> Remember, I'm arguing that the common cases should be simple to code - not
> that it's not possible to do these things. That makes people more likely to
> code unit tests.
>
> So really, I'm suggesting two things - (1) beef up the documentation of
> unittest.main(), and give it a section to itself ("How to code a simple series
> of unit tests"), and (2) if necessary, beef up unittest.main() to cater for
> such common cases (I don't necessarily think this is required).
Good suggestions. Unfortunately, active developers can't always read every
newsgroup posting, even though they may be relevant. (I've copied this post
to Steve Purcell, who's the head of the PyUnit project.) Might I suggest
that, if you'd like your ideas to survive longer than the lifetime of a
Usenet posting (roughly equivalent to the lifespan of a fruit fly), you (a)
submit them as feature requests to the PyUnit project at
http://sourceforge.net/projects/pyunit/ or to Python at
http://sourceforge.net/projects/python/, and perhaps even (b) work on the
documentation and code and submit some patches? Anybody can do #a, even if
they can't do #b.
--
David Goodger dgoo...@users.sourceforge.net Open-source projects:
- Python Docstring Processing System: http://docstring.sourceforge.net
- reStructuredText: http://structuredtext.sourceforge.net
- The Go Tools Project: http://gotools.sourceforge.net
No, it won't work.
The __init__ is called for each test method: PyUnit creates as many
instances of the TestCase subclass as there are test methods, in this case
those whose names start with "test".
One of the golden rules of unit testing is "make it fast." Most of the
time, a full run shouldn't take more than a few minutes -- C3 had over 1300
unit tests, performing over 13,000 individual checks altogether, which ran
in about 10 minutes in VisualWorks. That speed is necessary for
TestDrivenDesign and MercilessRefactoring.
To make it fast, you should do high-cost things, such as db connections and
inserts/deletes, as infrequently as possible. Yet you want to test as much
as possible.
One alternative to using live dbs is using small test dbs, but that can
still lead to a real mess if multiple developers and test cases try to do
the db tests at once. (You could in-line the db in the test code instead.)
Another alternative is the Shunt pattern.
When I look at your code, it seems like it's testing the DB itself rather
than the Python code. PyUnit is not intended for testing DB engines per se.
"Some tests make sure that the lowest layer of database access works as
planned. Then I write a suite based on the assumptions demonstrated by the
first suite. It assumes that I can get things into and out of the real
database, so I don't have to have the real database there. That way I can
create an in-memory impostor for the database and exercise the higher level
objects." [PPR]
Putting lots of SQL statements at a higher layer would, as one of many
disadvantages, reduce code testability enormously. (refer to
http://www.c2.com/cgi/wiki?BadFormedPersistenceLayer)
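A tiny sketch of that "impostor" idea (every name here is invented for
illustration): the higher-level object only assumes its database has a
get() method, so the test can hand it an in-memory stand-in instead of a
live connection.

import unittest

class FakeDatabase:
    "In-memory impostor: answers get() from a plain dictionary."
    def __init__(self, rows):
        self.rows = rows
    def get(self, key):
        return self.rows[key]

class AccountLookup:
    "Higher-level object; only assumes its db has a get() method."
    def __init__(self, db):
        self.db = db
    def balance(self, account_id):
        return self.db.get(account_id)

class AccountLookupTest(unittest.TestCase):
    def testBalance(self):
        lookup = AccountLookup(FakeDatabase({"acct-1": 250}))
        self.assertEqual(lookup.balance("acct-1"), 250)

if __name__ == '__main__':
    unittest.main()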
>When I look at your code, it seems like it's testing DB itself rather than
>the python code. PyUnit is not intended for testing DB engines per se.
Agreed. It was only an example; I couldn't think of a small, realistic one.
Sorry.
How about a GUI test example, like the one in the docs: setUp creates a
widget, tearDown destroys it. In between, tests do hundreds of things like
setSize(10,10), assert getSize() == (10, 10). You get the idea. OK, so
creating a widget *might* not be too expensive - but it might be (you'll
force me to suggest a widget which maintains a database connection, if
you're not careful :-))
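Something like this, say (Widget here is an invented stand-in, just enough
to make the sketch run, not a real toolkit class):

import unittest

class Widget:
    "Invented stand-in widget; imagine expensive setup in __init__."
    def __init__(self, name):
        self.name = name
        self.size = (0, 0)
    def setSize(self, w, h):
        self.size = (w, h)
    def getSize(self):
        return self.size
    def dispose(self):
        pass

class WidgetTest(unittest.TestCase):
    def setUp(self):
        self.widget = Widget("The widget")   # possibly expensive
    def tearDown(self):
        self.widget.dispose()
    def testSetSize(self):
        self.widget.setSize(10, 10)
        self.assertEqual(self.widget.getSize(), (10, 10))
    # ...and hundreds more small tests like it...

if __name__ == '__main__':
    unittest.main()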
Paul.
One other suggestion nobody else has mentioned: use doctest.py instead.
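For instance (a minimal sketch, with an invented function): the expected
interpreter output lives in the docstring, and doctest checks it.

def double(n):
    """Return n doubled.

    >>> double(2)
    4
    >>> double(21)
    42
    """
    return n * 2

if __name__ == '__main__':
    import doctest, sys
    doctest.testmod(sys.modules[__name__])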
--
--- Aahz <*> (Copyright 2001 by aa...@pobox.com)
Hugs and backrubs -- I break Rule 6 http://www.rahul.net/aahz/
Androgynous poly kinky vanilla queer het Pythonista
"I now regard a fact as a hypothesis that people don't bother to argue
about anymore." --John Burn, quoted in Lawrence Wright's _Twins_