
How to test first


Phlip

May 3, 2003, 3:51:50 PM
eXtremos:

The existing test-first verbiage (with the obvious exception of The Book)
contains distortions, depending on its distance from the Founders.

Please review the following offering for succinct comprehensiveness.

The TDD Cycle
When you develop, use a test runner that provides some kind of visual
feedback at the end of the test run. Either use a GUI-based test runner that
displays a Green Bar on success or a Red Bar on failure, or a console-based
test runner that displays "All tests passed" or a cascade of diagnostics,
respectively.

Then engage each action in this algorithm:

- Locate the next missing *code ability* you want to add.
- *Write a test* that will pass if the ability is there.
- Run the test and ensure it *fails for the correct reason*.
(If it passes, inspect the code to ensure you understand
it, and ensure the test passed for the correct reason; then
proceed to the next feature.)
- Only if the test fails for the correct reason and provides a
Red Bar, perform the *minimal edit* needed to make the
test pass, even if this edit leaves a poorer design.
- When the tests pass and you get a Green Bar,
*inspect the design*.
- While the design (anywhere) is poor, *refactor* it.
Perform the smallest edits possible that will
cumulatively improve the design, but make sure
all the tests pass between each edit.
- Only after the design is as squeaky clean as you
reasonably care to make it, *proceed to the next
ability*.
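One turn of the cycle above might be sketched in Python's unittest, reading top to bottom as the steps happen in time (the `add` function and its test are invented for illustration):

```python
import unittest

# Locate the missing ability, then write a test that will pass if it's there.
class TestAdd(unittest.TestCase):
    def test_add_returns_sum(self):
        self.assertEqual(add(2, 3), 5)

# Running here fails for the correct reason: NameError, because `add`
# does not exist yet. Only now do we perform the minimal edit:
def add(a, b):
    return a + b

# Green Bar. Inspect the design, refactor while it's poor, and only
# then proceed to the next ability.
```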

That algorithm needs more interpretation. Looking closely into each *bold*
item reveals a field of nuances. All are beholden to this algorithm and the
intent of each action. Each action serves a different intent, and these
intents often conflict directly, so our behavior changes from one action to
the next. The opposing intents anneal our code in a way one must experience
to fully appreciate.
Suspend disbelief, try the cycle, and report your results to an Agile
Alliance near you.

A *code ability*, in this context, is the current coal face in the mine that
our picks swing at. It's the location in the program where we must add new
lines. Typically, this location is near the bottom of our most recent
function. If we can envision one more line to add there, or one more edit to
make there, then we must perforce be able to envision the complementing test
that will fail without that line or edit.

"*Write a test*" can mean to write a test function, and get as far as one
assertion. Alternately, it can mean to take an existing test function, and
add a new assertion to it. To re-use scenarios and bulk up on assertions,
we'll prefer the latter.
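For instance, re-using a scenario might look like this sketch (the `Stack` class and both assertions are hypothetical):

```python
import unittest

class Stack:
    """Hypothetical class under test."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()

class TestStack(unittest.TestCase):
    def test_push_then_pop(self):
        stack = Stack()                       # existing scenario
        stack.push(42)
        self.assertEqual(stack.pop(), 42)     # existing assertion
        # New ability: popping an empty stack raises. Re-use the scenario
        # by adding one more assertion instead of a whole new function.
        with self.assertRaises(IndexError):
            stack.pop()
```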

If the new test lines assume facts not in evidence - if, for example, they
reference a class or method name that does not exist yet - run the test
anyway and predict a compiler diagnostic. This test collects valid
information just like any other. If the test inexplicably passes, you may
now understand you were about to write a new class name that conflicted with
an existing one.

Work on the assertion and the code's structure (but not behavior) until the
test *fails for the correct reason*. All this work prepares you to make that
*minimal edit*. Write that line which you have been anxious to get out of
your system for the last 3 paragraphs.

The *edit* is *minimal* because until the Bar turns Green we live on
borrowed time. Correct behavior and happy tests are more important than our
design quality. We might pass the test by cloning a method and changing one
line in it. If that's the minimum number of edits, do it. We might rewrite
from scratch a method very similar to an existing one.
Alternately, the simplest edit might naturally produce a clean design that
won't need refactoring.
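A sketch of such a deliberately poor minimal edit, with a hypothetical `price` function and bulk-discount test:

```python
import unittest

class TestPrice(unittest.TestCase):
    def test_discount_for_bulk_order(self):
        self.assertEqual(price(quantity=100), 900)

# Minimal edit: clone the plain-price formula and hard-wire the discount
# into the clone. This duplicates logic and leaves a poorer design --
# deliberately. The Green Bar comes first; refactoring merges the clones.
def price(quantity, unit=10):
    if quantity >= 100:
        return quantity * unit * 9 // 10   # cloned line, one change
    return quantity * unit
```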

If the *minimal edit* fails, and if the reason for failure is not obvious
and simple, hit the Undo button and try again. Anything is preferable to
debugging, and an ounce of prevention is worth a pound of cure.

Now that we have a Green Bar, we *inspect the design*. Per the *minimal
edit* rule, the most likely design flaw is duplication. To help us learn to
improve things, we tend to throw the definition of "duplication" as wide as
possible.

The book Design Patterns says we improve designs when we address the
interface, and we "abstract the thing that varies". So if we start with a
*minimal edit*, merging duplication together will tend to approach a
Pattern.

To *refactor*, we inspect our code, and try to envision a design with fewer
moving parts, less duplication, shorter methods, better identifiers, and
deeper abstractions. Start with the code we just changed, and feel free to
involve any other code in the project.

If we cannot envision a better design, we may proceed to the next ability
anyway. Otherwise, identify a minimal edit that will either improve the
design or lead to a series of similar edits that cumulatively improve it.
Between each edit,
run all the tests. If they fail, hit Undo and start again.
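A sketch of that discipline (the `area` function and its three hypothetical clones are invented for illustration), with each edit small enough that a Red Bar means "hit Undo", not "debug":

```python
import unittest

# Before: the formula `w * h` was duplicated in three callers.
# Edit 1: introduce the function, leave the callers alone -- run all tests.
# Edit 2: switch one caller to use it -- run all tests.
# Edit 3: switch the remaining callers and delete the clones -- run all tests.
def area(width, height):
    """Single home for the formula the three clones shared."""
    return width * height

class TestArea(unittest.TestCase):
    def test_area(self):
        self.assertEqual(area(3, 4), 12)
```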

We may add assertions at nearly any time: while refactoring the design, and
before *proceed*ing *to the next ability*. Whenever we learn something new,
or realize there's something we don't know, we take the opportunity to write
new assertions that express this learning, or query the code's abilities. As
the TDD cycle operates, and individual abilities add up to small features,
we take time to collect information from the code about its current
operating parameters and boundary conditions.

Boundary conditions are the limits between defined behavior and regions
where bugs might live. Set boundaries for a routine well outside the range
you know production code will call it. Research "Design by Contract" to
learn good strategies; these roll defined behavior up from the lower
routines to their caller routines. Within a routine, simplifying its
procedure will most often remove discontinuities in its response.

Parameters between these limits now typically cause the code to respond
smoothly with linear variations. The odds of bugs occurring between the
boundaries are typically lower than elsewhere. For example, a method that
takes 2, 3 and 5 and returns 10, 15 and 25, respectively, today is unlikely
to take 4 and return 301 tomorrow.
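That example can be pinned down with assertions at and beyond the known points; `times_five` here is a hypothetical stand-in for the method described:

```python
import unittest

def times_five(n):
    # Hypothetical stand-in for the method in the example above.
    return n * 5

class TestTimesFive(unittest.TestCase):
    def test_known_points(self):
        self.assertEqual(times_five(2), 10)
        self.assertEqual(times_five(3), 15)
        self.assertEqual(times_five(5), 25)

    def test_between_and_beyond_the_boundaries(self):
        # A point between the known ones: the response stays linear.
        self.assertEqual(times_five(4), 20)
        # Points well outside the range production code will call.
        self.assertEqual(times_five(0), 0)
        self.assertEqual(times_five(-2), -10)
```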

After we create a class, other classes call it; their tests engage our
class too. Our tests cover every statement in a program, and they approach
covering every path in a program. The cumulative forces against bugs make
them extraordinarily unlikely.

If you receive a bug report (typically a "missing feature" report), always
write a test. Then use what you learned to improve design, and write more
tests of this category. If you treat the situation "this code does not yet
have that ability" as a kind of bug, then the TDD cycle is nothing but a
specialization of the rule "test-away bugs".
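A sketch of "test-away bugs", with a hypothetical bug report that `total([])` raised instead of returning zero:

```python
import unittest

# Step 1: reproduce the bug report as a test before touching the code.
# Step 2: make the minimal edit that passes it.
# Step 3: write more tests of the same category (other degenerate inputs).
def total(amounts):
    return sum(amounts)   # sum([]) == 0, so the reported case is covered

class TestTotal(unittest.TestCase):
    def test_bug_report_empty_list(self):
        self.assertEqual(total([]), 0)

    def test_same_category(self):
        self.assertEqual(total([1, 2, 3]), 6)
```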

--
Phlip
http://www.c2.com/cgi/wiki?TestFirstUserInterfaces


Steve Jorgensen

May 3, 2003, 8:59:06 PM
Wow - a very fine and concise article. This one goes in my study reference
directory.

openbit

May 21, 2003, 4:12:03 PM
Excellent

"Phlip" <phli...@yahoo.com> wrote in message news:<q5Vsa.39$xY2...@newssvr19.news.prodigy.com>...
> eXtremos:
>
[...]


> - When the tests pass and you get a Green Bar,
> *inspect the design*.
> - While the design (anywhere) is poor, *refactor* it.
> Perform the smallest edits possible that will
> cumulatively improve the design, but make sure
> all the tests pass between each edit.
> - Only after the design is as squeaky clean as you
> reasonably care to make it, *proceed to the next
> ability*.

[...]

I propose these refactoring steps are only necessary insofar as it's
not possible to have a business plan exposed which fully details the
ROI in question. Of course, when was the last time you saw a
business case detailed on a whiteboard? It won't happen. The
refactoring steps are *your* business, the ROI is not. You would
never get a business plan because ROIs are too controversial, involve
too much politics, or require secrecy from competitors, etc. So,
you need XP, your pristine process--the best way to write
software--in the middle of the sh!t storm we call business.

If enough business context is exposed, a rare event, then the
refactoring bit is included in the TSTTCPW. I could be wrong.

Steve Jorgensen

May 21, 2003, 10:16:03 PM
Openbit's reply reminded me to thank you for this as well. I have a printout of
it in my notebook for reference.

On Sat, 03 May 2003 19:51:50 GMT, "Phlip" <phli...@yahoo.com> wrote:

Phlip

May 22, 2003, 12:01:07 AM
Steve Jorgensen wrote:

> Openbit's reply reminded me to thank you for this as well. I have a
> printout of it in my notebook for reference.

Thanks for this synch*! Just today I managed to remove 5 or 6 sentences from
it. Yippee!

*That means a spiritually surprising synchronicity.

> >- Locate the next missing *code ability* you want to add.
> >- *Write a test* that will pass if the ability is there.
> >- Run the test and ensure it *fails for the correct reason*.

> >- *minimal edit* needed to make the


> > test pass, even if this edit leaves a poorer design.
> >- When the tests pass and you get a Green Bar,
> > *inspect the design*.
> >- While the design (anywhere) is poor, *refactor* it.

> >- After the design is squeaky clean *proceed to the next
> > ability*.

So the redundancy with the play-by-play commentary went away ;-)

--
Phlip


Phlip

May 22, 2003, 12:06:37 AM
openbit wrote:

> Excellent

Thanks! (And the synch's reinforced; this post has been out for a while ;-)

> > - When the tests pass and you get a Green Bar,
> > *inspect the design*.
> > - While the design (anywhere) is poor, *refactor* it.
> > Perform the smallest edits possible that will
> > cumulatively improve the design, but make sure
> > all the tests pass between each edit.
> > - Only after the design is as squeaky clean as you
> > reasonably care to make it, *proceed to the next
> > ability*.

> I propose these refactoring steps are only necessary insofar as it's


> not possible to have a business plan exposed which fully details the
> ROI in question.

On news:comp.object, yes.

But you are in XP territory. Here, TDD applies even when requirements are
graven in white marble.
During a project, the code always supports its current feature set with the
fewest design elements and fewest lines possible; no more. Programmers
gifted with the ability to envision a complete and full-featured design are
welcome here, so long as they hold that complete design up as a goal but not
the path.

Over time, the code's complexity follows a minimum curve across the most
important features. Complexity begrudgingly ticks up on this chart, only at
need. Complexity is easy to add and hard to remove, so we restrain our
creativity with discipline.

> Of course, when was the last time you saw a
> business case detailed on a whiteboard? It won't happen. The
> refactoring steps are *your* business, the ROI is not. You would
> never get a business plan because ROIs are too controversial, involve
> too much politics, or requires secrecy from competitors etc. So,
> you need XP, your pristine process--the best way to write
> software--in the middle of the sh!t storm we call business.

Ah, but true control is dynamic, not static. Would you call a top, spinning
rapidly over a single point, out of control?

> If enough business context is exposed, a rare event, then the
> refactoring bit is included in the TSTTCPW. I could be wrong.

XP's political brilliance is this: The workers most able to steer have their
hands on the wheel. The workers most able to engine-er are the ones who make
the vehicle move forward.

--
Phlip

