
Unit Testing


Antonius Pius

Dec 2, 1998
Since we have been discussing the build process I thought I would bring up one
area that seems to be universally neglected:

Unit Testing

What I consider "Unit Test" is that those in charge of testing create a series
of tests for the components of the system. Such tests would often be written
in a programming language like C.

Suppose someone had written functions for dialing a modem. Those in charge of
testing would write stub programs that call these functions with all kinds of
parameters to make sure that the "Unit" performed as it was specified.
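By way of a sketch, a test driver along those lines might look like this in C.
(dial_modem() and hangup_modem(), and their return codes, are hypothetical names
invented for illustration; they are not from any real product.)

/* test_modem.c - hypothetical unit test driver for a modem dialing unit.
   The unit under test is assumed to expose dial_modem() and hangup_modem();
   both names and return codes are invented for this example. */
#include <stdio.h>

#define DIAL_OK      0     /* codes the (hypothetical) spec promises */
#define DIAL_BAD_NUM 1

extern int dial_modem(const char *number);   /* unit under test */
extern int hangup_modem(void);

static int failures = 0;

static void expect(const char *desc, int got, int want)
{
    if (got != want) {
        printf("FAIL: %s (got %d, wanted %d)\n", desc, got, want);
        failures++;
    }
}

int main(void)
{
    /* Feed the unit good, bad, and boundary inputs per its specification. */
    expect("valid local number", dial_modem("555-0100"), DIAL_OK);
    expect("empty string",       dial_modem(""),         DIAL_BAD_NUM);
    expect("letters in number",  dial_modem("CALL-ME"),  DIAL_BAD_NUM);
    expect("hangup after dial",  hangup_modem(),         DIAL_OK);

    printf("%d failure(s)\n", failures);
    return failures ? 1 : 0;
}

Link the driver against the unit and run it as part of the build, and the
"Unit Test" phase produces something you can actually point to.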

(Of course all of this assumes that some sort of system "design" exists,
something that is even more rare than a functional specification.)

What I have found is that "Unit Testing" rarely occurs. What I see "Unit
Testing" as in practice is a time period on the schedule that occurs after
"Coding". Project managers use this phase as breathing space for extra coding.
This allows them to show in the status reports that the project is advancing
according to schedule because the project team is now "Unit Testing" rather
than "Coding". When, in fact, nothing other than coding is taking place. Any
"testing" is being performed by those who wrote it.

When all the crap (And that is what it usually is) gets thrown together during
the "Integration Testing" naturally it does not work. The project then goes
into the infinite bug hunt mode. The system gets shipped to testers who come
up with a list of 500 bugs. The programmers get out the bubble gum and twist
ties and patch over the bugs for the next build. The process repeats itself
while the number of bugs grows faster than the project team can fix them.

It should be obvious to all in such situations that the problem is a result
of:
A. Problems occurring at the unit level.
B. Poor Design
C. All of the above.

Due to political considerations the project will continue in the "integration"
phase.

John - N8086N
------------------------------------ JPEG -----
Are you interested in a professional society or
guild for programmers?

See www.programmersguild.org/american.htm


EMail Address:
_m_i_a_n_o_@
_c_o_l_o_s_s_e_u_m_b_u_i_l_d_e_r_s.
_c_o_m_

Biju Thomas

Dec 2, 1998
In article <743qf9$63s$1...@holly.prod.itd.earthlink.net>,

n...@nl.nl (Antonius Pius) wrote:
>
> Unit Testing
>
> What I consider "Unit Test" is that those in charge of testing create a
series
> of tests for the components of the system.

AFAIK, it is the developers who are supposed to do unit testing for the code
that they write, not people in charge of testing. (Is it so different in the
places you worked?) And, of course, there will be some over-confident
developers who think that their code will never have any problem and won't do
any testing, but that is a rare case.

> Such tests would often be written
> in a programming language like C.
>

It depends. If the components are well-isolated, it is easy to write such
drivers. Sometimes, the components will be too intertwined, and then
writing such test drivers will take more time than integrating the components
together. This assumes that none of the components has so many problems
that such testing becomes impossible.

Also, do unit testing after doing a thorough code-review.

> Suppose someone had written functions for dialing a modem. Those in charge of
> testing would write stub programs that call these functions with all kinds of
> parameters to make sure that the "Unit" performed as it was specified.
>
> (Of course all of this assumes that some sort of system "design" exists,
> something that is even more rare than a functional specification.)
>
> What I have found is that "Unit Testing" rarely occurs.

Well, maybe not very thoroughly and systematically, but I have seen people
doing it often.

> What I see "Unit
> Testing" as in practice is a time period on the schedule that occurs after
> "Coding". Project managers use this phase as breathing space for extra
coding.
> This allows them so show in the status reports that the project is advancing
> according to schedule because the project team is now "Unit Testing" rather
> than "Coding". When, in fact, nothing other than coding is taking place. Any
> "testing" is being performed by those who wrote it.
>

This may be partly true. Schedule overruns are not rare, and it is the last
phases of the project that usually suffer from schedule problems.

> When all the crap (And that is what it usually is) gets thrown together during
> the "Integration Testing" naturally it does not work. The project then goes
> into the infinite bug hunt mode. The system gets shipped to testers who come
> up with a list of 500 bugs. The programmers get out the bubble gum and twist
> ties and patch over the bugs for the next build. The process repeats itself
> while the number of bugs grows faster than the project team can fix them.
>
> It should be obvious to all in such situations that the problem is a result
> of:
> A. Problems occurring at the unit level.
> B. Poor Design
> C. All of the above.
>
> Due to political considerations the project will continue in the "integration"
> phase.
>

Besides the above, there is something else at fault here - the culture that
perpetuates these political considerations. Software project estimation was
never an exact science, and I don't see how it can ever be, in the face of
ever-changing technologies.

Regards,
Biju Thomas

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own

Pete McBreen, McBreen.Consulting

Dec 2, 1998
For some notes on the state of the art of Unit Testing see
http://www.armaties.com/ (look for the xUnit testing harnesses.)

Their practice is to write the unit tests first, then write the code.

Also see http://members.pingnet.ch/gamma/junit.htm for an article on unit
testing written by Kent Beck and Erich Gamma (of Gang of Four fame).

Pete
--
Pete McBreen, McBreen.Consulting , Calgary, AB
email: petem...@acm.org http://www.cadvision.com/roshi
Using Creativity and Objects for Process Improvement


David Gillon

Dec 3, 1998
Biju Thomas wrote:

> > What I consider "Unit Test" is that those in charge of testing create a
> series
> > of tests for the components of the system.
>
> AFAIK, it is the developers who are supposed to do unit testing for the code
> that they write, not people in charge of testing. (Is it so different in the
> places you worked?)

The unit developer is the worst possible person to write the unit test
as they will perpetuate any misunderstanding of the design from the code stage into
the test. You can dual task the coders to write unit tests also, but in
that case your procedures should insist no one tests their own code.

--

David Gillon
Avionics Systems Division
MAv Rochester

Ben Kovitz

Dec 3, 1998
In article <743qf9$63s$1...@holly.prod.itd.earthlink.net>, n...@nl.nl (Antonius
Pius) wrote:

> What I have found is that "Unit Testing" rarely occurs. What I see "Unit
> Testing" as in practice is a time period on the schedule that occurs after
> "Coding". Project managers use this phase as breathing space for extra coding.
> This allows them to show in the status reports that the project is advancing
> according to schedule because the project team is now "Unit Testing" rather
> than "Coding". When, in fact, nothing other than coding is taking place. Any
> "testing" is being performed by those who wrote it.
>

> When all the crap (And that is what it usually is) gets thrown together during
> the "Integration Testing" naturally it does not work. The project then goes
> into the infinite bug hunt mode. The system gets shipped to testers who come
> up with a list of 500 bugs. The programmers get out the bubble gum and twist
> ties and patch over the bugs for the next build. The process repeats itself
> while the number of bugs grows faster than the project team can fix them.

Here are two ideas about these things, widely known but not widely
practiced. I picked up this line of thought mostly from Steve Maguire's
_Debugging the Development Process_, though you can find it in a lot of
places. Please forgive me if these have already been discussed.

1. Don't have a "unit test" part of the schedule. Programmers should test
all code they write before checking it in. That includes not only writing
a simple test suite for each class, but single-stepping through every line
in a debugger, and of course placing assertions at every entry and exit
point. A programmer should be supremely confident that each line of code
is correct before a tester ever has a chance to run it.
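As a rough sketch of the assertion part in C (the function below and its rules
are invented, purely to show the entry/exit pattern, not taken from anyone's
real code):

#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical routine: copy at most n-1 characters and always terminate.
   The assertions state and check the entry and exit conditions. */
char *copy_field(char *dst, const char *src, size_t n)
{
    assert(dst != NULL);          /* entry: caller must supply a buffer  */
    assert(src != NULL);          /* entry: and a source string          */
    assert(n > 0);                /* entry: room for at least the '\0'   */

    strncpy(dst, src, n - 1);
    dst[n - 1] = '\0';

    assert(strlen(dst) < n);      /* exit: result fits and is terminated */
    return dst;
}

The simple test suite then exercises copy_field() (and its siblings) with
normal, boundary, and garbage inputs while the assertions are compiled in.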

2. Don't have an "integration test" part of the schedule. Instead,
release entire, working programs (internally) every two to four weeks. Or
in other words, integrate early and often. Each release adds more
features, so the earliest releases are the least complete, but each
release is a functional, testable, usable piece of software.

Integration errors are among the worst to debug, and they play off each
other in complex ways. With more than four weeks of accumulated code (and
even that is on the high side), you get too many interactions to deal with
at once. There should never be 500 outstanding bugs at once. Also, at
each release, the programmers learn from each other and see ways to
improve each collaborating section of the system to make it simpler and
easier to test and debug.

After each internal release, the testers and documenters test the new
release and update the manuals and on-line help to reflect the new
functionality. Thus you omit "documentation hell" at the end of a
project, too.

During the next 2-4 week chunk, the programmers fix bugs as the testers
(and documenters!) find them, while adding the new features. If fixing
the bugs takes a lot of time, then the programmers have to defer
implementation of new features to the next internal release. The idea is
to always build on a stable foundation, to the extent that this is
possible; and when the foundation is found not to be stable, to fix it
immediately, not to wait months for "integration hell". (By "stable" I
mean that it works, not that it can't change in the future.)


Since the above is really so obvious, and works so well (and so enjoyably)
when people apply it, I guess the real question is: why do so many
organizations find it so scary to give it a try? In places where I've
worked, I and others have suggested this and gotten pretty close to
universal agreement, yet when the next project gets started, no one dares
schedule the project this way. (And it's easier to schedule and estimate
this way, too!) Is it just the natural fear of trying new things, or
something else? I suspect that it's mainly "something else", but I can't
quite put my finger on what it is. Any ideas?

--
Ben Kovitz <apteryx at chisp dot net>
Author, _Practical Software Requirements: A Manual of Content & Style_
http://www.amazon.com/exec/obidos/ASIN/1884777597
http://www.manning.com/Kovitz

Pete McBreen, McBreen.Consulting

Dec 3, 1998
Ben Kovitz wrote:
>Since the above is really so obvious, and works so well (and so enjoyably)
>when people apply it, I guess the real question is: why do so many
>organizations find it so scary to give it a try?

Great question, and it ties back to the theories developers have about
software development.

earlier on in the thread Biju Thomas wrote:
> AFAIK, it is the developers who are supposed to do unit testing for the code
> that they write, not people in charge of testing. (Is it so different in the
> places you worked?)

and David Gillon replied


>The unit developer is the worst possible person to write the unit test
>as they will perpetuate any misunderstanding of the design from the
>code stage into the test. You can dual task the coders to write unit
>tests also, but in that case your procedures should insist no one tests
>their own code.

One theory is that developers unit test their own code so that when it is
released to testing, only integration errors remain. (corresponds to Biju's
comment)

Another theory is that because developers might misinterpret the
requirements/design, some other viewpoint is required for testing.
(Corresponds to David's comment)

The theory espoused by Beck and Gamma in their "Test Infected" article
http://members.pingnet.ch/gamma/junit.htm is that developers write unit
tests to know when they have completed the feature.

Using this latter theory, you run a test to prove you have completed the
coding task, as opposed to running a test to prove you are not done yet.

So depending on your theory, unit tests only ever give you bad news, or they
are a progress indicator.

Since the prevailing viewpoint is that Unit Tests report bad news, is it
surprising that they are not used much?

Projects that use Unit Tests as progress indicators tend to have a really
large number of tests.

Other reasons could be related to "Good Enough Software" as reported by Ed
Yourdon. The theory being that with a large enough code base it is
impossible to remove all errors, so some are "acceptable". In that
environment, unit tests are not needed since they go against the theory that
some bugs are acceptable :-(

Switching to Unit Tests as progress indicators, by writing the unit tests
first and then writing the code, can greatly improve the quality of the
delivered system. You also get a great regression test suite for ensuring
that a simple one-line change does not break the application. (And if it
does break, you go back and beef up the unit tests until they detect the
problem, and then fix the code.)
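
As a sketch of the tests-first idea in C (parse_port() is an invented example,
not anyone's real code):

#include <stdio.h>

/* Step 1: write the test for the behaviour you want. It fails (it will not
   even link) until parse_port() exists and does the right thing. */
extern int parse_port(const char *text);   /* to be written next */

int main(void)
{
    int ok = 1;
    ok &= (parse_port("8080") == 8080);    /* normal case           */
    ok &= (parse_port("0")    == -1);      /* out of range -> error */
    ok &= (parse_port("abc")  == -1);      /* junk -> error         */
    printf(ok ? "PASS\n" : "FAIL\n");
    return ok ? 0 : 1;
}

/* Step 2: write parse_port() until this prints PASS, then keep the test
   around forever as a regression check on future one-line changes. */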

Biju Thomas

Dec 3, 1998
In article <36665C13...@gecm.com>,

David Gillon <David....@gecm.com> wrote:
> Biju Thomas wrote:
>
> > > What I consider "Unit Test" is that those in charge of testing create a
> > > series of tests for the components of the system.
> >
> > AFAIK, it is the developers who are supposed to do unit testing for the code
> > that they write, not people in charge of testing. (Is it so different in the
> > places you worked?)
>
> The unit developer is the worst possible person to write the unit test
> as they will
> perpetuate any misunderstanding of the design from the code stage into
> the test. You can dual task the coders to write unit tests also, but in
> that case your procedures should insist no one tests their own code.
>

Unit testing is not the phase where you evaluate the correctness of the
design. It is the phase where you test whether the code conforms to the
design. Hence, the designer of the component is the best person to write the
unit test cases, since she/he is the person having thorough knowledge of the
component.

Concerning developers cheating during unit testing, it is a problem with
people, not with the process. If developers have little ethics, they
will ultimately suffer.

To evaluate correctness of the design, you need to do design reviews. And,
finally, during system testing (aka. function testing), you may find out
problems in the design that were overlooked. But any design problems found
during the system testing phase are likely to have a huge impact, so it may
be impractical to fix them, unless you have plenty of time and resources.

Biju Thomas

Dec 3, 1998
Ben Kovitz wrote:
>
> 1. Don't have a "unit test" part of the schedule. Programmers should test
> all code they write before checking it in. That includes not only writing
> a simple test suite for each class, but single-stepping through every line
> in a debugger, and of course placing assertions at every entry and exit
> point. A programmer should be supremely confident that each line of code
> is correct before a tester ever has a chance to run it.
>

The type of comprehensive unit-testing that you advocate here is good,
but not so practical in non-mission-critical projects due to constraints
of time and resources.

> 2. Don't have an "integration test" part of the schedule. Instead,
> release entire, working programs (internally) every two to four weeks. Or
> in other words, integrate early and often. Each release adds more
> features, so the earliest releases are the least complete, but each
> release is a functional, testable, usable piece of software.
>

Agreed. Such early integrations help to identify problems that might not
have been foreseen in the analysis and design phases. Finding them early
helps to ensure that the same mistakes won't be repeated in the
yet-to-be-completed components, and there will be time to fix the problems.

> Since the above is really so obvious, and works so well (and so enjoyably)
> when people apply it, I guess the real question is: why do so many
> organizations find it so scary to give it a try?

Ignorance, laziness, lack of management support and so on...

Regards,
Biju Thomas

Chris Kuan

Dec 4, 1998
Pete McBreen, McBreen.Consulting wrote:

> earlier on in the thread Biju Thomas wrote:

> > AFAIK, it is the developers who are supposed to do unit testing for the code
> > that they write, not people in charge of testing. (Is it so different in the
> > places you worked?)
>

> and David Gillon replied


> >The unit developer is the worst possible person to write the unit test
> >as they will perpetuate any misunderstanding of the design from the
> >code stage into the test. You can dual task the coders to write unit
> >tests also, but in that case your procedures should insist no one tests
> >their own code.
>

> One theory is that developers unit test their own code so that when it is
> released to testing, only integration errors remain. (corresponds to Biju's
> comment)
>
> Another theory is that because developers might misinterpret the
> requirements/design, some other viewpoint is required for testing.
> (Corresponds to David's comment)
>
> The theory espoused by Beck and Gamma in their "Test Infected" article
> http://members.pingnet.ch/gamma/junit.htm is that develoopers write unit
> tests to know when they have completed the feature.

As Biju posted in parallel, the designer should design the tests.
The developer implements and executes them.


> Other reasons could be related to "Good Enough Software" as reported by Ed
> Yourdon. The theory being that with a large enough code base it is
> impossible to remove all errors, so some are "acceptable". In that
> environment, unit tests are not needed since it goes against the theory that
> some bugs are acceptable :-(

In that environment, one could still write unit tests to
probe for only those bugs which are not acceptable. The tests
don't have to be comprehensive (and I don't think anyone here
would say that they can always be comprehensive, anyway).

--

Chris Kuan, BHP Information Technology
Concatenate for email: mr gazpacho @ hotmail . com

"The fools must be dealt with, however."
- Dan Pop, comp.lang.c <danpop.9...@news.cern.ch>

Fabian Musci

Dec 4, 1998

David Gillon <David....@gecm.com> wrote in article
<36665C13...@gecm.com>...


> Biju Thomas wrote:
>
> > > What I consider "Unit Test" is that those in charge of testing create a
> > > series of tests for the components of the system.
> >

> > AFAIK, it is the developers who are supposed to do unit testing for the code
> > that they write, not people in charge of testing. (Is it so different in the
> > places you worked?)
>

> The unit developer is the worst possible person to write the unit test
> as they will
> perpetuate any misunderstanding of the design from the code stage into
> the test. You can dual task the coders to write unit tests also, but in
> that case your procedures should insist no one tests their own code.
>

Yes. This is correct. If we used the word "audit" rather than "test", everyone
would understand.


David Gillon

Dec 4, 1998
Biju Thomas wrote:

> Unit testing is not the phase where you evaluate the correctness of the
> design. It is the phase where you test whether the code conforms to the
> design. Hence, the designer of the component is the best person to write the
> unit test cases, since she/he is the person having thorough knowledge of the
> component.

Unit testing does verify the correctness of the implementation of the
design into code rather than the correctness of the design, but on large
complex projects the designer is not necessarily the same person as the
coder. It is vital that the coder's interpretation is verified as
correct, therefore the code is subject to both peer review and complete
unit testing.

> To evaluate correctness of the design, you need to do design reviews.

Undoubtedly. But code correctness is not the same as design correctness
and also needs to be verified via review and testing.

> finally, during system testing (aka. function testing), you may find out
> problems in the design that were overlooked. But any design problems found
> during the system testing phase are likely to have a huge impact, so it may
> be impractical to fix them, unless you have plenty of time and resources.

Indeed, 'left-shift' in error detection is a fundamental of process
improvement.

David Gillon

Dec 4, 1998
Antonius Pius wrote:

> What I have found is that "Unit Testing" rarely occurs. What I see "Unit
> Testing" as in practice is a time period on the schedule that occurs after
> "Coding". Project managers use this phase as breathing space for extra coding.
> This allows them to show in the status reports that the project is advancing
> according to schedule because the project team is now "Unit Testing" rather
> than "Coding". When, in fact, nothing other than coding is taking place. Any
> "testing" is being performed by those who wrote it.

The real weakness shown by this is in the apparent lack of strength of
the Quality Assurance function in much of the industry -- I'm referring
here to QA in an audit role, not the misinterpretation that assumes QA
equals testing. The project's QA Plan should specify the degree of unit
testing (and other forms of testing and quality control activities)
required before release and QA audits should identify compliance with
these targets. QA of course should report completely independently of
project management.

Hopeless idealism? Not if we want to be perceived as a professional
industry....

Perhaps I'm fortunate to work in an environment where this structure has
always been in place, and avionics necessarily mandates a higher level
of confidence in your product, but the structure itself is applicable to
any project; only the depth of the testing activities required varies
between domains.

pro...@my-dejanews.com

Dec 4, 1998
In article <36671C...@sig.please>,

lo...@sig.please wrote:
> Pete McBreen, McBreen.Consulting wrote:
>
> > earlier on in the thread Biju Thomas wrote:
> > > AFAIK, it is the developers who are supposed to do unit testing for the code
> > > that they write, not people in charge of testing. (Is it so different in the
> > > places you worked?)
> >
> > and David Gillon replied

> > >The unit developer is the worst possible person to write the unit test
> > >as they will perpetuate any misunderstanding of the design from the
> > >code stage into the test. You can dual task the coders to write unit
> > >tests also, but in that case your procedures should insist no one tests
> > >their own code.
> >
> > One theory is that developers unit test their own code so that when it is
> > released to testing, only integration errors remain. (corresponds to Biju's
> > comment)
> >
> > Another theory is that because developers might misinterpret the
> > requirements/design, some other viewpoint is required for testing.
> > (Corresponds to David's comment)
> >
> > The theory espoused by Beck and Gamma in their "Test Infected" article
> > http://members.pingnet.ch/gamma/junit.htm is that develoopers write unit
> > tests to know when they have completed the feature.
>
> As Biju posted in parallel, the designer should design the tests.
> The developer implements and executes them.

I, for one, agree with Biju. I do unit tests to evaluate the robustness
of my software. One place I worked we even had a module release requirement
that every line of code had to be tested. So we created unit tests that
inserted all manner of erroneous inputs to the programs. I did
part of the communications code, and without custom hardware the system testers
could not create the scenarios I did. So during integration, for any errors
that occurred, I could say confidently, "that problem could not arise
in this module". And no, this was not a way of pointing the problem at
others. It made integration much smoother and kept us within our aggressive
schedule.

Unit testing should be about making that unit work according to the
requirements and design, and making it robust in the face of input errors.

>
> > Other reasons could be related to "Good Enough Software" as reported by Ed
> > Yourdon. The theory being that with a large enough code base it is
> > impossible to remove all errors, so some are "acceptable". In that
> > environment, unit tests are not needed since it goes against the theory that
> > some bugs are acceptable :-(
>
> In that environment, one could still write unit tests to
> probe for only those bugs which are not acceptable. The tests
> don't have to be comprehensive (and I don't think anyone here
> would say that they can always be comprehesive, anyway).

Unit tests can be comprehensive. System tests cannot. Note comprehensive
does not mean exhaustive. You might not be able (in a finite amount of time)
to test every possible input and result. But you can design unit tests
that evaluate all the logic of the module, all the major states,
and all the boundary conditions. THAT's exactly why the implementor
should do the unit testing.
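
As a small sketch of what comprehensive-but-not-exhaustive can look like (the
frame-length check below is invented for illustration, not from any real
comms module):

#include <stdio.h>

#define MIN_FRAME 4
#define MAX_FRAME 1500

/* Hypothetical unit: accept or reject a received frame length. */
int frame_len_ok(int len)
{
    return len >= MIN_FRAME && len <= MAX_FRAME;
}

int main(void)
{
    /* One case on each boundary and one on each side of it: every branch
       of the logic is exercised without testing all possible inputs. */
    struct { int len, want; } cases[] = {
        { MIN_FRAME - 1, 0 }, { MIN_FRAME, 1 },
        { MAX_FRAME, 1 },     { MAX_FRAME + 1, 0 },
        { 0, 0 },             { -1, 0 },
    };
    int i, failures = 0;

    for (i = 0; i < (int)(sizeof cases / sizeof cases[0]); i++) {
        if (frame_len_ok(cases[i].len) != cases[i].want) {
            printf("FAIL at len=%d\n", cases[i].len);
            failures++;
        }
    }
    printf("%d failure(s)\n", failures);
    return failures != 0;
}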

How do you know the unit testing is done right? That's what reviews
are for. We had unit test reviews to demonstrate to others that the
tests were done and passed.

Just for reference, I tend to view things from an embedded systems
development point of view. Often that software is committed to ROM
or cannot be upgraded in the field. (Customer: "Shut down the production
line, are you crazy?!") So you really have to get it right.


>
> --
>
> Chris Kuan, BHP Information Technology
> Concatenate for email: mr gazpacho @ hotmail . com
>
> "The fools must be dealt with, however."
> - Dan Pop, comp.lang.c <danpop.9...@news.cern.ch>

Go Dan!


--
Ed Prochak
Magic Interface, Ltd.
440-498-3702

pro...@my-dejanews.com

Dec 4, 1998
In article <apteryxspamless-...@sl-2.chisp.net>,
apteryx...@nospam.chisp.net (Ben Kovitz) wrote:
snip

>
> Integration errors are among the worst to debug, and they play off each
> other in complex ways. With more than four weeks of accumulated code (and
> even that is on the high side), you get too many interactions to deal with
> at once. There should never be 500 outstanding bugs at once. Also, at
> each release, the programmers learn from each other and see ways to
this point^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

> improve each collaborating section of the system to make it simpler and
> easier to test and debug.
>

This point really struck a nerve with me. If developers wait until
integration to start learning from each other, it's too late! The
most successful projects I've seen had developers deeply involved in
others' designs. If two modules in the system design interacted, then
their developers were talking about that interaction during design.
The best case of this was a project where I was supporting the OS.
I had to deal with the needs of nearly every application module in
that product to ensure their needs were met. This went from designing
new functionality down to training some of the programmers on the
features that existed in the OS that they could use to solve their
design problems.

Waiting to talk until the integration phase is, as I understand it, how
another project at the same company failed. Interfaces changed
and system builds would fail, because developers did not talk
to each other.

I'm just saying, a development team had better work as a team from the
beginning. Trying to pull it together in the final quarter is
likely to fail.

> --
> Ben Kovitz <apteryx at chisp dot net>
> Author, _Practical Software Requirements: A Manual of Content & Style_
> http://www.amazon.com/exec/obidos/ASIN/1884777597
> http://www.manning.com/Kovitz
>

Colin Stock

Dec 4, 1998
Pete McBreen, McBreen.Consulting wrote:

>
> Ben Kovitz wrote:
> >Since the above is really so obvious, and works so well (and so enjoyably)
> >when people apply it, I guess the real question is: why do so many
> >organizations find it so scary to give it a try?
>
> Great question, and it ties back to the theories developers have about
> software development.
>
[snip]
I wholeheartedly agree with Ben's comments in the grandparent posting.

IMHO one reason why so few organizations use this approach is that still
too many project managers do not understand the software development
process and regard the number of lines of code written as the only
valid indicator of progress.

Another reason (IMHO) is pressure to deliver. If the software doesn't
function properly there is always the post-delivery maintenance phase to
correct it.

Colin
--

========================================================================
Colin Stock colin...@gecm.com

To re-boot your laptop, turn it upside-down and give it a good shake.

Steve Brothers

Dec 4, 1998
David-

For what it's worth, I agree with you here. The QA function in the software
industry does need to "step-up" and provide auditing capability. This
should not be restricted to auditing of code, though. Auditing (or something
close) needs to occur in all phases of development (and testing and
requirements as well, for that matter).

There are a number of problems with executing this, mainly because of the
misinterpretation that you suggest. More so, I think, because the skills
that are required to perform in an auditing capacity are not found in many
QA/test people. It seems that the people that have those skills, or strive
to acquire those skills, have little, if any, interest in a QA role. When
people have those skills they want to be developers. (Broad generalization,
and there are exceptions, but "in general" I have found it to be true.)
That serves to perpetuate the QA=Test idea, because QA rarely gets done by
QA/test people, and rarely gets done correctly.

As for unit test - in my mind - I don't really care who writes the unit
tests because when unit test code is treated as production code (as it is in
our environment), it goes through the same peer review and other
rigorous auditing requirements and standards as the rest of the environment. So if
someone misses something or perpetuates bad design/code/etc. then there is a
chance it will be caught.

Industry notwithstanding - I happen to be in Healthcare - I think the
approach and reasons for following it (auditing/QA/unit-test) would benefit
everyone. It's mainly, as was suggested, the implementation that becomes
difficult.

-S

David Gillon wrote in message <3667C5FA...@gecm.com>...


>The real weakness shown by this is in the apparent lack of strength of
>the Quality Assurance function in much of the industry -- I'm referring
>here to QA in an audit role, not the misinterpretation that assumes QA
>equals testing. The project's QA Plan should specify the degree of unit
>testing (and other forms of testing and quality control activities)
>required before release and QA audits should identify compliance with
>these targets. QA of course should report completely independently of
>project management.
>
>Hopeless idealism? Not if we want to be perceived as a professional
>industry....
>
>Perhaps I'm fortunate to work in an environment where this structure has
>always been in place, and avionics necessarily mandates a higher level
>of confidence in your product, but the structure itself is applicable to
>any project, only the depth of the testing activities required varies
>between domains.


---
Steve Brothers, Senior QA Manager, CyberPlus Corporation
sbro...@cyberplus.com - 303.280.0800 x226

Martin Fowler

Dec 5, 1998

>The unit developer is the worst possible person to write the unit test
>as they will
>perpetuate any misunderstanding of the design from the code stage into
>the test. You can dual task the coders to write unit tests also, but in
>that case your procedures should insist no one tests their own code.

Depends on your purposes for unit testing. I always write unit tests
for my code, whether someone else is doing it or not. Why? Because I
find that writing tests as (or before) I write the code helps me
deliver much faster because I spend less time debugging.

That realization is one of the most valuable contributions of the
eXtreme Programming folks.

Martin Fowler
http://ourworld.compuserve.com/homepages/Martin_Fowler/

Bob Binder

Dec 5, 1998

Martin Fowler wrote in message <3668ca00...@news.mci2000.com>...
[snip]

>Depends on your purposes for unit testing. I always write unit tests
>for my code, whether someone else is doing it or not. Why? Because I
>find that writing tests as (or before) I write the code helps me
>deliver much faster because I spend less time debugging.
>
>That realization is the one of the most valuable contributions of the
>eXtreme Programming folks.


I'm pleased to see that testing is being rediscovered and I endorse
the general XP approach. But the argument for and practice of "test before
code"
goes back at least to the early 1970s.

______________________________________________________________________________
Bob Binder http://www.rbsc.com RBSC Corporation
312 214-3280 tel Software Engineering 3 First National Plaza
312 214-3110 fax Process Improvement Suite 1400
rbi...@rbsc.mapson.com (remove .mapson to mail) Chicago, IL 60602-4205

Martin Fowler

Dec 7, 1998

>I'm pleased to see that testing is being rediscovered and I endorse
>the general XP approach. But the argument for and practice of "test before
>code"
>goes back at least to the early 1970s.
>

You're right, of course, and I was told at university to write tests
before I coded. But the heavyweight way they went about this, coupled
with the observation that nobody (including the professors) actually
did it, put me off.

I learned later (before coming into contact with the XP crowd) that
simple tests made me program faster. For me the key realization was
that *simple* tests get you most of the way. I also learned that such
tests are valuable only if run frequently, such as every time you
compile.

Now I'm not saying that nobody else discovered these things; I'm sure
plenty of people do. But most clients I go to don't test like this. Most
methodologies hardly mention testing. Most test gurus make testing
sound so complicated that it's off-putting.

Martin
Martin Fowler
http://ourworld.compuserve.com/homepages/Martin_Fowler/

Colin Stock

Dec 7, 1998
Biju Thomas wrote:
>
> Ben Kovitz wrote:
> >
> > 1. Don't have a "unit test" part of the schedule. Programmers should test
> > all code they write before checking it in. That includes not only writing
> > a simple test suite for each class, but single-stepping through every line
> > in a debugger, and of course placing assertions at every entry and exit
> > point. A programmer should be supremely confident that each line of code
> > is correct before a tester ever has a chance to run it.
> >
>
> The type of comprehensive unit-testing that you advocate here are good,
> but not so practical in non-mission-critical projects due to constraints
> of time and resources.
>
In my experience, this is the sort of attitude that tends to lead either
to systems that require enormous re-work at the integration phase or to
systems that require extended post delivery maintenance. IMO in either
case, it should be obvious that the failure to perform comprehensive
unit testing is a false economy.

TTFN

Bob Binder

Dec 7, 1998

Martin Fowler wrote in message <366b77e1...@news.mci2000.com>...

>
>>I'm pleased to see that testing is being rediscovered and I endorse
>>the general XP approach. But the argument for and practice of "test before
>>code"
>>goes back at least to the early 1970s.
>>
>
>You're right, of course, and I was told at university to write tests
>before I coded. But the heavyweight way they went about this, coupled
>with the observation that nobody (including the professors) actually
>did it, put me off.
>
>I learned later (before coming into contact with the XP crowd) that
>simple tests made me program faster. For me the key realization was
>that *simple* tests get you most of the way. I also learned that such
>tests are valuable only if run frequently, such as every time you
>compile.
>
>Now I'm not saying that nobody else discovered these things, I'm sure
>plenty people do. But most clients I go to don't test like this. Most
>methodologies hardly mention testing.

More's the pity.

>Most test gurus make testing
>sound so complicated that it's off-putting.


For example?


Simplicity, clarity, and pragmatism are important in all dimensions
of software development. But hard problems don't go away by wishful
thinking. I like what Brian Marick says: "make test specifications
as complex as is tractable, stopping just before the point where
the complexity overwhelms you." This insight is reminiscent of
Einstein's pithy observation that "Everything should be made as
simple as possible, but not simpler."

Frank Adrian

Dec 7, 1998
Martin Fowler wrote in message <3668ca00...@news.mci2000.com>...

>>The unit developer is the worst possible person to write the unit test
>>as they will
>>perpetuate any misunderstanding of the design from the code stage into
>>the test. You can dual task the coders to write unit tests also, but in
>>that case your procedures should insist no one tests their own code.
>
>Depends on your purposes for unit testing. I always write unit tests
>for my code, whether someone else is doing it or not. Why? Because I
>find that writing tests as (or before) I write the code helps me
>deliver much faster because I spend less time debugging.
>
>That realization is the one of the most valuable contributions of the
>eXtreme Programming folks.

Everyone should write engineering unit tests for their code. Period. But,
having said that, I also believe that in most organizations, you still need
to have independent unit tests.

Why? IMHO, the reason that having the developers write and test their
own code works so well in XP comes down to four factors:

(1) The small scale and frequency of their "quantum integration" makes sure
that any errors in the interface that do slip through the developers' unit
tests are spotted and corrected immediately.

(2) The communication due to the XP process also improves the speed of
correction of the module errors that do slip through the cracks.

(3) The small size of the modules involved (due in no small part to the
language used) minimizes the extent of the errors.

(4) The immediate use of integrated modules by other developers also
prevents the spread of errors.

In short, having developers write their own unit tests in XP works because
of XP, not because the idea of developers writing their own tests is
inherently good (or bad).

Now - put the same testing premise in place without small modules and small
and frequent integrations (the other main hallmark of XP). Errors that slip
through cracks in the developers' test code (and I'll bet that even Ron
won't say that every unit is exhaustively tested) allow bad code to
propagate through the system until integration time.

Independent test code is a necessary band-aid to better check module code
and thus prevent wider spread of errors for processes where communication
between developers is low, module and integration sizes are larger, and
integrations are less frequent.

Developer Unit-Test Only works in XP because it works in XP. Do not
separate the sub-process from the process and still expect it to work as
well.
--
Frank A. Adrian
First DataBank

frank_...@firstdatabank.com (W)
fra...@europa.com (H)

This message does not necessarily reflect the views of my employer,
its parent company, or any of the co-subsidiaries of the parent
company.


Chris Kuan

Dec 8, 1998
Bob Binder wrote:
>
> Martin Fowler wrote in message <366b77e1...@news.mci2000.com>...

> >Most test gurus make testing
> >sound so complicated that it's off-putting.
>
> For example?

I think there's a language that's evolved that can be off-putting
to the inexperienced; "black-box", "white-box", a seemingly large
emphasis on "coverage", complexity metrics (OK, they're not testing
per se, but they seem to get lumped in there, along with QA),
"regression"...

> Einstein's pithy observation that "Everything should be made as
> simple as possible, but not simpler."

C++ programmers should appreciate the reference :-)

Bob Binder

Dec 8, 1998
Chris Kuan wrote in message <366C51...@sig.please>...

>Bob Binder wrote:
>>
>> Martin Fowler wrote in message <366b77e1...@news.mci2000.com>...
>
>> >Most test gurus make testing
>> >sound so complicated that it's off-putting.
>>
>> For example?
>
>I think there's a language that's evolved that can be off-putting
>to the inexperienced; "black-box", "white-box", a seemingly large
>emphasis on "coverage", complexity metrics (OK, they're not testing
>per se, but they seem to get lumped in there, along with QA),
>"regression"...

Ah yes, testing is an impenetrable thicket of jargon and obfuscation,
especially in contrast to the intuitively obvious terms and concepts
used in other areas of software: block-structured name scoping, static
polymorphism, dynamic polymorphism, operator overloading,
delegation, abstract base classes, dereferencing, generic types,
information hiding, postfix operators, unbounded recursion,
doubly-linked list with hashed addressing, bitwise equivalence,
b-tree, b+-tree, binary tree, places/transitions/tokens,
meta-class variables, role stereotypes, meta-model syntax,
Ada packages, Java packages, UML packages, referential integrity,
call-backs, triggers, stored procedures, remote procedure calls,
object request broker, packet headers, distributed common object
model, composite class relationships, runtime reflection,
history states, functional decomposition, step-wise refinement,
hard realtime, broadcast transitions, non-deterministic
rendezvous, garbage collection, chunks, thunks, widgets,
formal arguments, formal methods, iterative and incremental,
and of course state-of-the-art, bulletproof, and cool ;-)

People who find jargon off-putting and work in software are
doomed to chronic discomfort. Testing has its share of
abstractions, but they are necessary. Like anything else,
you use what you need and ignore the rest.

Bob

JRStern

Dec 9, 1998
On Thu, 03 Dec 1998 10:34:24 -0700, apteryx...@nospam.chisp.net
(Ben Kovitz) wrote:
>1. Don't have a "unit test" part of the schedule. Programmers should test
>all code they write before checking it in. That includes not only writing
>a simple test suite for each class, but single-stepping through every line
>in a debugger, and of course placing assertions at every entry and exit
>point. A programmer should be supremely confident that each line of code
>is correct before a tester ever has a chance to run it.

I guess I have a two-step process of unit testing. The first is done
before any code check-in, the second is done before a "release to
(system) test" milestone. I do not agree that single-stepping through
a program is necessary or desirable -- that's a white-box mode of
thinking, and I'm a black-box guy. I don't have any particular
confidence in any line of code, or any method, or class -- *that's*
why I put asserts everywhere!

>2. Don't have an "integration test" part of the schedule. Instead,
>release entire, working programs (internally) every two to four weeks. Or
>in other words, integrate early and often. Each release adds more
>features, so the earliest releases are the least complete, but each
>release is a functional, testable, usable piece of software.

I can't understand this. If you release a program, it seems to me you
want to test it first, at the integration/system level. I do like
frequent, small, functional releases, but this does NOT remove the
need for testing. In fact, it increases the total test load.

Your description sounds a bit like what Microsoft must do, and sounds
to me like a bunch of excuses to skip a necessary development step.

Joshua Stern
JRS...@gte.net


kent...@compuserve.com

Dec 9, 1998
In article <74hcam$b75$1...@client3.news.psi.net>,

"Frank Adrian" <frank_...@firstdatabank.com> wrote:
> Everyone should write engineering unit tests for their code. Period. But,
> having said that, I also believe that in most organizations, you still need
> to have independently unit tests.
>
> Developer Unit-Test Only works in XP because it works in XP. Do not
> separate the sub-process from the process and still expect it to work as
> well.

1. XP has a separate thread of functional tests spawned by the requirements.

2. I write unit tests because I program faster if I do (and cleaner, and have
more fun, and am more confident...). I would do this even if someone held a
gun to my head and made me work on a waterfall project. It has nothing to do
with XP. It has to do with programming and confidence.

Kent

Ben Kovitz

Dec 9, 1998
In article <74kooc$bcj$1...@news-1.news.gte.net>, JRS...@gte.net (JRStern)
replied to me:

> >2. Don't have an "integration test" part of the schedule. Instead,
> >release entire, working programs (internally) every two to four weeks. Or
> >in other words, integrate early and often. Each release adds more
> >features, so the earliest releases are the least complete, but each
> >release is a functional, testable, usable piece of software.
>
> I can't understand this. If you release a program, it seems to me you
> want to test it first, at the integration/system level. I do like
> frequent, small, functional releases, but this does NOT remove the
> need for testing. In fact, it increases the total test load.

I think I must not have been clear. I'm not saying to do away
with integration testing. I'm saying to do it as part of these
incremental releases: "early and often" instead of "a great big
one at the end." The rest of the message explains more about
this.

I do agree, by the way, that "early and often" increases the
total test load. And I think that's a good thing.

Ben Kovitz

Dec 9, 1998
In article <74kooc$bcj$1...@news-1.news.gte.net>, JRS...@gte.net (JRStern)
replied to me:

> On Thu, 03 Dec 1998 10:34:24 -0700, apteryx...@nospam.chisp.net


> (Ben Kovitz) wrote:
> >1. Don't have a "unit test" part of the schedule. Programmers should test
> >all code they write before checking it in. That includes not only writing
> >a simple test suite for each class, but single-stepping through every line
> >in a debugger, and of course placing assertions at every entry and exit
> >point. A programmer should be supremely confident that each line of code
> >is correct before a tester ever has a chance to run it.
>
> I guess I have a two-step process of unit test. The first is done
> before any code check-in, the second is done before a "release to
> (system) test" milestone. I do not agree that single-stepping through
> a program is necessary or desirable -- that's a white-box mode of
> thinking, and I'm a black-box guy. I don't have any particular
> confidence in any line of code, or any method, or class -- *that's*
> why I put asserts everywhere!

First of all, I'm glad to come across another assertions-fanatic!
I think it's appalling that even in 1998, most programmers that I
talk to don't even know how to write assertions or what they're
good for. In my view (with the usual exceptions), programming
without assertions is unprofessional, plain and simple.

But I think the black-box approach, if it steers people to
intentionally overlook potential sources of unexpected behavior,
is dogmatism. A good mechanic does not simply drive a car
around the block to test that it's all right. A good mechanic
looks under the hood for potential sources of trouble that a
"black box" test of the car simply wouldn't catch.

I suspect that the value of single-stepping through code is one
of those things that is difficult to appreciate until one has
tried it--just like assertions. I sometimes hear from
programmers that assertions "aren't necessary" in contemporary
programming because a good object-oriented design simply doesn't
have that kind of mistake in it. What brings people to
appreciate assertions is simply putting them into some "solid"
code and watching hundreds of them fire.

Steve Maguire writes about the kind of resistance to assertions
that he got, where people assumed that the only reason so many
assertions could be firing was because the assertions were wrong.
Nope, the assertions were correct: there were just hundreds and
hundreds of bugs in the code that no one had ever caught before,
because no one was looking. Until they see it happen, most
programmers just don't believe that their code has that many
errors of the sort that assertions catch.

I think it's the same with single-stepping. I just single-step
through tests, watching the variables, continually rethinking
what values they're supposed to have--and every once in a while
catch some boneheaded mistake I made while coding. Very
importantly, single-stepping, more so even than assertions, helps
you catch types of errors that you simply didn't think might
occur. Sometimes single-stepping just helps you see the problem
in a whole new way and leads you to throw out a whole bunch of
code and write it better. Testing, after all, is a place in
which we try to welcome the unknown, the unexpected, the
unimagined--everything that we omitted from our thinking but
*didn't know we omitted*.

In theory, we should be able to anticipate every possible kind of
error and block it with an assertion. But hey, in theory, we
should never make design errors and always think things through
perfectly at the very beginning.

Ben Kovitz

Dec 9, 1998
In article <36676637...@ibm.net>, biju...@ibm.net wrote:

> Ben Kovitz wrote:
> >
> > 1. Don't have a "unit test" part of the schedule. Programmers should test
> > all code they write before checking it in. That includes not only writing
> > a simple test suite for each class, but single-stepping through every line
> > in a debugger, and of course placing assertions at every entry and exit
> > point. A programmer should be supremely confident that each line of code
> > is correct before a tester ever has a chance to run it.
>

> The type of comprehensive unit-testing that you advocate here are good,
> but not so practical in non-mission-critical projects due to constraints
> of time and resources.

While I do agree that it's ok for some kinds of software to go
out with bugs--that is, fixing bugs is sometimes less important
than other things--I think that in most cases, even in
non-mission-critical software, omitting these simple bug-catching
measures is false economy.

The reasons most commonly given for why it's false economy have
to do with the benefits of having the bugs fixed:

- Bugs tend to anger customers much more than other problems.
People really do not like having to reenter data due to
corrupted files, for example. Also, re-dos and workarounds
tend to cost a lot in terms of the users' time.

- It's difficult to build good, new code except on a solid
foundation. That is, bugs in one section of code have a way
of percolating into the rest of the program, making it very
difficult to debug later versions. To put it another way,
two interacting bugs are more than twice as difficult to
debug as one bug at a time.

But I'd also like to add that the costs of assertions and
single-stepping to catch bugs are very tiny when you consider
that you should be doing them anyway, for other reasons, too.

I write assertions, first of all, as a form of source-code
documentation. If I assert that every input parameter to a
function has a valid value, I've said something very useful to
anyone who wants to call that function. And if I assert
something similar about the return value of that function, I've
come a long way toward documenting the complete "contract"
fulfilled by that function. A programmer should think through
the terms of that "contract" very precisely. Writing assertions
is fairly effortless if you've already thought it through, and if
you haven't thought it through, well, here's yet another benefit
of writing assertions.
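
For instance, a tiny invented example of such a contract written as entry and
exit assertions (not from any particular codebase):

#include <assert.h>

/* Contract: the caller promises lo <= hi; in return the function promises
   a result within [lo, hi]. The function itself is made up for illustration. */
int clamp(int x, int lo, int hi)
{
    int result;

    assert(lo <= hi);                      /* caller's side of the contract */

    if (x < lo)      result = lo;
    else if (x > hi) result = hi;
    else             result = x;

    assert(result >= lo && result <= hi);  /* our side of the contract */
    return result;
}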

I single-step in order to better understand what's happening when
the code is running. This can be especially important when
interfacing with spaghetti-style object-oriented design, where no
function call invokes the subroutine that you would
expect. Even without spaghetti code, I always find that
single-stepping gives me a more solid understanding of the
code--just the sort of understanding that enables me to spot
bugs, as well as better ways to write the code.

Maybe another point here is that I don't see these measures as
particularly time-consuming or difficult. I think they should be
a routine part of writing code and checking it in, just like
writing useful comments or naming your variables carefully.
That's why I don't like to see "unit test" as a separate part of
the schedule. It should be part and parcel of coding.

Ben Kovitz

Dec 9, 1998
In article <3JB92.2553$Q92.2...@news.rdc1.bc.wave.home.com>, "Pete
McBreen, McBreen.Consulting" <mcbr...@cadvision.com> wrote:

> Ben Kovitz wrote:
> >Since the above is really so obvious, and works so well (and so enjoyably)
> >when people apply it, I guess the real question is: why do so many
> >organizations find it so scary to give it a try?
>
> Great question, and it ties back to the theories developers have about
> software development.
>

> [various theories succinctly described]


>
> Using this latter theory, you run a test to prove you have completed the
> coding task, as opposed to running a test to prove you are not done yet.
>
> So depending on your theory, unit tests only ever give you bad news, or they
> are a progress indicator.
>
> Since the prevailing viewpoint is that Unit Test report bad news, is it
> surprising that they are not used much?

Actually, I was asking about why so many organizations won't try
the incremental-release type of scheduling, not about why
programmers don't like unit testing, but hey, this is quite an
insight!

This is one of those strange, purely psychological things that
somehow makes all the difference in the world. And I had not
even noticed it before.

I became a testing fanatic long ago, when I was doubling in tech
support, taking phone calls from people complaining that a
certain bug had been fixed in an earlier release, broken again
and fixed again after they had called again, and now broken yet
again, leading to the current phone call. So I started doing
things like building up a test battery for every bug that we ever
fix, so we'd have an automated way to tell if we ever broke
anything a second time. And then slowly adding more and more
"obvious" techniques, like single-stepping, assertions,
documenting the programming "contract", and so on, which are all
talked about in lots of different books.
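
Such a battery need not be fancy; a minimal driver that runs every accumulated
test program will do. A sketch in C (the test program names are made up):

/* run_battery.c - minimal regression battery runner (illustrative only).
   Each test program is expected to exit 0 on success, nonzero on failure. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *tests[] = {      /* one entry added for every bug ever fixed */
        "./test_bug_0042",
        "./test_bug_0107",
        "./test_modem",
    };
    int n = (int)(sizeof tests / sizeof tests[0]);
    int i, failed = 0;

    for (i = 0; i < n; i++) {
        printf("running %s\n", tests[i]);
        if (system(tests[i]) != 0) {       /* nonzero exit = regression */
            printf("REGRESSION: %s\n", tests[i]);
            failed++;
        }
    }
    printf("%d of %d tests failed\n", failed, n);
    return failed != 0;
}

Run it after every build, and a re-broken bug announces itself before the
customer calls again.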

But for some reason, I could never get anyone else who worked
there to go along with it. It just seemed baffling to me. We
*all* doubled as tech-support people, even my manager, so we
*all* had the same personal motivation to cut down on bugs (few
people enjoy being reprimanded on the phone for, er, professional
incompetence).

Maybe you have found the explanation. For whatever reason--
probably just random chance--I came to see "passing the test" as
the good news to aim for, giving the green light to release.
(And to release with confidence!) Maybe the others saw testing
as a continual insult to their egos, every failed test saying
"you screwed up again, you jerk." I tended to see each failed
test as the next stage in getting to the goal, and fixing it as a
sort of measure of the decreasing amount of work left to do (i.e.
a "progress indicator").

No doubt there's a lot of other psychological/emotional stuff
going on here, too.

Is this sort of thing covered in any of those "psychology of
programmers" books?

Pete McBreen, McBreen.Consulting

unread,
Dec 9, 1998, 3:00:00 AM12/9/98
to
Ben Kovitz wrote in message ...

>Is this sort of thing covered in any of those "psychology of
>programmers" books?

Probably, but I picked up the insight from Kent Beck and Erich Gamma, in
their "Test Infected: Programmers Love Writing Tests" article at
http://members.pingnet.ch/gamma/junit.htm

The two best "programming psychology" books are

Susan Lammers, "Programmers at Work"

and

Ben Shneiderman, "Software Psychology"

Both are getting kind of old now, but other than Luke Hohmann's "Journey of
a Software Professional", nobody seems to be covering that area right now.

>--
>Ben Kovitz <apteryx at chisp dot net>
>Author, _Practical Software Requirements: A Manual of Content & Style_
>http://www.amazon.com/exec/obidos/ASIN/1884777597
>http://www.manning.com/Kovitz


Pete
---

Frank Adrian

unread,
Dec 9, 1998, 3:00:00 AM12/9/98
to
Far be it from me to argue with you, especially since I agree that having my
own unit tests makes me feel more sanguine about the correctness of my code,
and I never fail to write them. But I also know that I can't guarantee
exhaustive test coverage. Independent tests might very well cover error cases
I had not thought of - this is especially true of "out-of-the-norm" conditions.

In addition, many projects operate within organizations that lack the
cohesive development culture needed to make good unit tests the norm. In
these organizations, developer-written unit tests are spotty, almost begging
for independent tests to keep the developers honest about completeness.
Also, as development time gets crunched, developers write fewer unit test
cases, leading to bad code at the time when it can least be afforded.
Again, as I have stated, I believe that GOOD unit tests should be written by
the developers. I just think that in most organizations the point is moot
before we even get started.

Now, even if this were not the case, it is still my opinion that several
project-level factors determine whether or not the sub-project-level process
of DUT (Developer Unit Testing) works. If schedule pressure increases, the
number or quality of the unit tests may well decrease. If module size
increases, test coverage may suffer, and the testing code itself may contain
defects that mask failures in the code under test. As integration becomes
less frequent, defects may tend to stay in the system, again masking other
defects. As more team members are spread across wider geographic boundaries,
the ways in which certain modules are used may not be totally clear, again
possibly leading to less complete coverage.

Note in the above that I say maybe, might, and possibly a lot. This is not
only because it would be presumptuous of me to state that DUT could not be
made to work on larger scale projects, but because I really WANT to believe
that this process IS scalable. I'd also like to believe that most
organizations can build the type of tight, cohesive teams that lead to
strong process norms, but with work becoming more geographically dispersed,
increased use of short-term employees, and outsourcing becoming more
prevalent, I doubt that that will happen in many organizations, as sad as
the thought makes me.

I guess, in the final analysis, I can certainly use (and have used) this
knowledge myself on small projects (in terms of people), but before I'd use
it on a 200+ person project (and one can certainly debate whether or not
that size of project is desirable), I'd really like to compare (a) a
project with independently created unit tests and (b) a project with unit
tests written by the developers for their own code under varying conditions
of (1) project team size, (2) module complexity, (3) schedule pressure, and
(4) integration frequency. I'd bet that you'd start to see worse
performance for process b with respect to process a as factors 1, 2, and 3
increased and factor 4 decreased. I'd also make a bet that factor 3 would
be most significant. I'm also willing to make a smaller bet that the number
of defects unmasked by independently developed unit tests may never be large
enough to recoup the cost of their construction (because testers usually
have the same blind spots as the developers), but that's another study. I'd
like to be proven wrong on the first two bets and right on the third,
though, 'cause not having to have people independently construct unit tests
would save a ton o'money. Plus, it fits in better with my belief in small
teams doing good work...


--
Frank A. Adrian
First DataBank

frank_...@firstdatabank.com (W)
fra...@europa.com (H)

This message does not necessarily reflect the views of my employer,
its parent company, or any of the co-subsidiaries of the parent
company.

kent...@compuserve.com wrote in message
<74lfh3$l29$1...@nnrp1.dejanews.com>...

Martin Fowler

unread,
Dec 10, 1998, 3:00:00 AM12/10/98
to

>I'm pleased to see that testing is being rediscovered and I endorse
>the general XP approach. But the argument for and practice of "test before
>code"
>goes back at least to the early 1970s.
>

You're right, of course, and I was told at university to write tests
before I coded. But the heavyweight way they went about this, coupled
with the observation that nobody (including the professors) actually
did it, put me off.

I learned later (before coming into contact with the XP crowd) that
simple tests made me program faster. For me the key realization was
that *simple* tests get you most of the way. I also learned that such
tests are valuable only if run frequently, such as every time you
compile.

Now, I'm not saying that nobody else discovered these things; I'm sure
plenty of people did. But most clients I go to don't test like this. Most
methodologies hardly mention testing. Most test gurus make testing sound so
complicated that it's off-putting.

Martin
Martin Fowler
http://ourworld.compuserve.com/homepages/Martin_Fowler/

JRStern

unread,
Dec 11, 1998, 3:00:00 AM12/11/98
to
On Wed, 09 Dec 1998 12:37:15 -0700, apteryx...@nospam.chisp.net
(Ben Kovitz) wrote:
>First of all, I'm glad to come across another assertions-fanatic!

And I leave many in the production code, of course with their output
redirected and their longjmps safely caught.
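
One common way to arrange that in C looks roughly like the following; the
macro name, the log file, and the request-handling function are all
invented for illustration:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf recovery;          /* set before each operation           */
static FILE *assert_log;          /* failures go here, not to the screen */

/* Production assertion: log the failure and bail out of the current
   operation via longjmp instead of aborting the whole program. */
#define PROD_ASSERT(cond)                                         \
    do {                                                          \
        if (!(cond)) {                                            \
            fprintf(assert_log, "assertion failed: %s (%s:%d)\n", \
                    #cond, __FILE__, __LINE__);                   \
            longjmp(recovery, 1);                                 \
        }                                                         \
    } while (0)

static void process_request(int request_id)
{
    PROD_ASSERT(request_id > 0);
    /* ... real work would go here ... */
}

int main(void)
{
    assert_log = fopen("asserts.log", "a");
    if (assert_log == NULL)
        return 1;

    if (setjmp(recovery) == 0)
        process_request(-1);       /* deliberately trips the assertion */
    else
        fprintf(stderr, "request abandoned, continuing\n");

    fclose(assert_log);
    return 0;
}

A failed check gets logged and the current operation is abandoned via
longjmp, rather than the whole program dying on a customer's machine.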

>But I think the black-box approach, if it steers people to
>intentionally overlook potential sources of unexpected behavior,
>is dogmatism.

Yes, but it's my dogmatism, so it's ok. <g>

>I suspect that the value of single-stepping through code is one
>of those things that is difficult to appreciate until one has
>tried it

Well, my gosh, I spend lots of time single-stepping in search of bugs;
it's not like I've never seen it happen. But it takes my total
concentration, it can take hours to trace a one-second computation, it
doesn't work at all for time-critical and event-driven code, and it covers
only a single case out of thousands. I'm not saying it's not a valuable
technique; I'm just saying my life expectancy is not sufficient for its
application.

OK, alright, maybe what you mean is some kind of animated walkthru. I
suppose I could be talked into something like that.

>Steve Maguire writes about the kind of resistance to assertions
>that he got, where people assumed that the only reason so many
>assertions could be firing was because the assertions were wrong.
>Nope, the assertions were correct: there were just hundreds and
>hundreds of bugs in the code that no one had ever caught before,
>because no one was looking. Until they see it happen, most
>programmers just don't believe that their code has that many
>errors of the sort that assertions catch.

There's a sort of paradigmatic myth to programming today, that it
ought to be easy, that it is easy, if you have the skills and the
tools and the requirements. I used to believe that, but not any more.
Now I say that programming is hard by its nature. I could go on about
this at some length, but it veers away from engineering into theory,
philosophy, and poetry.

Joshua Stern
JRS...@gte.net


Rob Hudson

unread,
Dec 17, 1998, 3:00:00 AM12/17/98
to
Maybe I'm out of my league here, but...

We tend to get small-to-medium-sized projects which are assigned a single
resource (one developer). The obvious answer might seem to be "add another
resource", but budget and available time constraints often prohibit this.
Assuming this is the environment, what are some effective "single developer"
strategies for testing?


Lee Derbenwick

unread,
Dec 17, 1998, 3:00:00 AM12/17/98
to
In article <75bq0m$l...@enews1.newsguy.com>,

A few quick hints:

1. Read Glenford Myers, _The Art of Software Testing_.
[Old, but excellent.]

2. Write your test cases before you write your code.
[At least a detailed description.]

3. Remember that cheating at solitaire isn't any fun.
[Yes, the mind set is very important. It helps to want to
show those lousy developers just how badly they messed up,
especially when "they" is yourself.]


Just my personal opinion, but I've done all three at work...
Lee Derbenwick (derbe...@lucent.com)
--
Lee Derbenwick, derbe...@lucent.com

Pete McBreen, McBreen.Consulting

unread,
Dec 17, 1998, 3:00:00 AM12/17/98
to
Rob Hudson wrote in message <75bq0m$l...@enews1.newsguy.com>...

>Maybe I'm out of my league here, but...
>
>We tend to get small-to-medium-sized projects which are assigned a single
>resource (one developer). The obvious answer might seem to be "add another
>resource", but budget and available time constraints often prohibit this.
>Assuming this is the environment, what are some effective "single developer"
>strategies for testing?


See the JUnit testing framework notes at

http://members.pingnet.ch/gamma/junit.htm

Just write the tests as you go along, preferably before you write the code.

For end-to-end testing, enlist your users in defining the test cases for
you (but you will probably end up being the person who creates and runs
them).

And always make the tests automated; then you will never be tempted to skip
them for lack of time. This is one of the essential messages from the JUnit
framework: if a test is automated, it only costs CPU cycles to run it, so
it gets run lots of times.
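
Even without a framework the shape of such a test is tiny.  Here is a
sketch in C, with the test written before the function it checks and the
whole thing returning a nonzero exit status so the build can run it
automatically (the function and its cases are invented for illustration):

#include <stdio.h>

/* Written first: the test states what parse_port() must do before a
   line of parse_port() itself exists. */
static int parse_port(const char *text);   /* to be written next */

static int failed = 0;

static void expect(int got, int want, const char *what)
{
    if (got != want) {
        printf("FAIL %s: got %d, wanted %d\n", what, got, want);
        failed = 1;
    }
}

static void test_parse_port(void)
{
    expect(parse_port("8080"), 8080, "plain number");
    expect(parse_port(""),       -1, "empty string rejected");
    expect(parse_port("70000"),  -1, "out of range rejected");
}

/* Only now the implementation, written to make the test pass. */
static int parse_port(const char *text)
{
    int value = 0;
    if (*text == '\0')
        return -1;
    for (; *text != '\0'; text++) {
        if (*text < '0' || *text > '9')
            return -1;
        value = value * 10 + (*text - '0');
        if (value > 65535)
            return -1;
    }
    return value;
}

int main(void)
{
    test_parse_port();               /* cheap enough to run on every build */
    return failed;
}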

Pete

Frank Adrian

unread,
Dec 17, 1998, 3:00:00 AM12/17/98
to
Schedule to complete your coding no later than 60% of the way through the
code development timeframe (at least that way, you won't be more than about
30% over when testing is finished).

Keep code development timeframes short (e.g., build a little, test a little,
build a little, test a little).

Create a test plan before you code your tests - at least know what you will
test before you go in.

Check the negative cases as well as the positive ones. Create smoke testing
strategies to be run on every unit - for pointer parameters, NULL and bad
input pointers; for integers, negative numbers, zero, and values at the
32-bit limits (2^31 - 1 and -2^31); for strings, very long inputs and
zero-length inputs; etc.
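
As a sketch of what such a smoke test might look like in C (the function
under test and its particular limits are invented for illustration):

#include <limits.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the unit under test: returns the length of 's', or -1
   if 's' is NULL or longer than 'max_len'. */
static int checked_length(const char *s, int max_len)
{
    if (s == NULL || max_len < 0)
        return -1;
    {
        size_t n = strlen(s);
        if (n > (size_t)max_len)
            return -1;
        return (int)n;
    }
}

static int failures = 0;

static void expect(int got, int want, const char *what)
{
    if (got != want) {
        printf("SMOKE FAIL %s: got %d, wanted %d\n", what, got, want);
        failures++;
    }
}

int main(void)
{
    char long_input[10000];
    memset(long_input, 'x', sizeof long_input - 1);
    long_input[sizeof long_input - 1] = '\0';

    /* Pointer parameter: NULL input. */
    expect(checked_length(NULL, 10), -1, "NULL pointer");

    /* String parameter: zero-length and very long inputs. */
    expect(checked_length("", 10), 0, "zero-length string");
    expect(checked_length(long_input, 100), -1, "very long string");

    /* Integer parameter: zero, negative, and the 32-bit limits. */
    expect(checked_length("abc", 0), -1, "zero limit");
    expect(checked_length("abc", -1), -1, "negative limit");
    expect(checked_length("abc", INT_MAX), 3, "INT_MAX limit");
    expect(checked_length("abc", INT_MIN), -1, "INT_MIN limit");

    printf("%d smoke failure(s)\n", failures);
    return failures != 0;
}

The same handful of boundary checks can be stamped out for every unit that
takes pointer, integer, or string parameters.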

Remember that no matter how much it hurts to be late, ADEQUATE testing still
needs to be done. Good rule of thumb - until you can test for at least 30%
of the total coding time without defects being found, you haven't tested
enough. That means for every day of coding you spend at least two and a half
hours testing; at least one and a half solid days of testing per coding week.
Some people would advocate bumping this figure as high as 100% (but I'd
think that's a bit Xtreme :-).

If you can't find defects, you're probably kidding yourself.


--
Frank A. Adrian
First DataBank

frank_...@firstdatabank.com (W)
fra...@europa.com (H)

This message does not necessarily reflect the views of my employer,
its parent company, or any of the co-subsidiaries of the parent
company.

Lee Derbenwick wrote in message <75bvf9$a...@nntpb.cb.lucent.com>...


>In article <75bq0m$l...@enews1.newsguy.com>,
>Rob Hudson <rhu...@hotmail.com> wrote:

>>Maybe I'm out of my league here, but...
>>
>>We tend to get small-to-medium-sized projects which are assigned a single
>>resource (one developer). The obvious answer might seem to be "add another
>>resource", but budget and available time constraints often prohibit this.
>>Assuming this is the environment, what are some effective "single developer"
>>strategies for testing?
>

Volker Wurst

unread,
Dec 18, 1998, 3:00:00 AM12/18/98
to
See also the XP discussion on Ward Cunningham's WikiWeb, e.g.

http://c2.com/cgi/wiki?UnitTests

Volker Wurst


"Pete McBreen, McBreen.Consulting" wrote:

> For some notes on the state of the art of Unit Testing see
> http://www.armaties.com/ (look for the xUnit testing harnesses.)
>
> Their practice is to write the unit tests first, then write the code.
>
> Also see http://members.pingnet.ch/gamma/junit.htm for an article on unit
> testing written by Kent Beck and Erich Gamma (of Gang of Four fame).


Paul E. Black

unread,
Dec 18, 1998, 3:00:00 AM12/18/98
to
"Frank Adrian" <frank_...@firstdatabank.com> writes:
> If you can't find defects, you're probably kidding yourself.

When I was doing one-man coding, my own informal rule of thumb was:
when I am finding more bugs in other code (including the operating
system!) than in my code, I can definitely stop.

-paul-
--
Paul E. Black (p.b...@acm.org) 100 Bureau Drive, Stop 8970
paul....@nist.gov Gaithersburg, Maryland 20899-8970
voice: +1 301 975-4794 fax: +1 301 926-3696
Web: http://hissa.ncsl.nist.gov/~black/black.html KC7PKT

Paul E. Black

unread,
Dec 18, 1998, 3:00:00 AM12/18/98
to
rhu...@hotmail.com (Rob Hudson) writes:

> We tend to get small-medium sized projects which are assigned single resource
> (one developer). The obvious answer might seem to be "add another resource",
> but budget and available time constraints often prohibit this. Assuming this
> is the environment, What are some effective "single developer"
> strategies for testing?

One old but useful guideline is "Validation, Verification, and Testing
for the Individual Programmer." It should be available from the
Superintendent of Documents,
U.S. Government Printing Office
Washington, D.C. 20402 USA
Stock number 003-003-02159-8. The price listed is $1.75.
