
Unit Testing strategy


Tom Plunket

Feb 1, 2002, 7:41:25 PM
Hey all-

In my continuing pursuit of effective test coverage, I've come up
against something interesting.

A "feature" was handed to me- "Implement polygon reduction on
arbitrary polygonal meshes."

For those of you unfamiliar with the terms, basically I have
arbitrary geometry, constructed of triangles, and I need to
reduce the number of triangles while keeping the resulting
geometry the same.

Since I have little idea on where to start (although I am the
resident geometry expert, unfortunately), I just started with the
easiest thing that I could think of:

void testReduction()
{
    Mesh m0, m;

    // create the geometry that I want to reduce into m0.
    // create the geometry that m0 should become into m.

    m0.Reduce();
    CPPUNIT_ASSERT(m == m0);
}

Now this is a very high-level test, and it's taken several days
of work to implement everything in between (trying to think of
the simplest thing each time, then testing the sub-functionality).
However, I get the feeling that this is the "wrong" way
to test since I had an outstanding failure for several days, and
it feels like the feature "should" have been broken down into
smaller tasks. However, from the point of view of the users and
even the other programmers, this is standalone functionality;
others will just call "Reduce()" and expect that it works.

As I thought of each step in the process, I wrote tests and
implemented functionality, sometimes needing other functionality
for which I wrote tests as well. But coupled with the idea that
I'm supposed to throw away, at the end of the day, everything
that isn't checked in, I feel somewhat dirty for writing my first
test as this high-level thing (and it seemed just wrong to simply
set m0 to m in the Reduce() function).

Any guidance to steer my feelings? :)

thx-
-tom!

Ron Jeffries

Feb 1, 2002, 8:46:48 PM
On Fri, 01 Feb 2002 16:41:25 -0800, Tom Plunket <to...@fancy.org> wrote:

>Since I have little idea on where to start (although I am the
>resident geometry expert, unfortunately), I just started with the
>easiest thing that I could think of:

Good and clear report. If I ever knew how to do the reduction, I've
forgotten.

I'd start with a mesh that was already reduced, get the test running
with a rather simple reduce() function. ;->

Then I'd do a mesh that was one step away from reduced, and see what I
could do with that. And so on.
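
In CppUnit terms, those first two tests might look something like this
(just a sketch: the helper names are invented, and I'm assuming Tom's
Mesh compares equal the way his snippet implies):

// Test 1: an already-reduced mesh must come through Reduce() unchanged.
// The simplest possible Reduce() -- do nothing -- passes this.
void testAlreadyReduced()
{
    Mesh m = makeSingleTriangle();          // invented helper
    Mesh expected = m;
    m.Reduce();
    CPPUNIT_ASSERT(expected == m);
}

// Test 2: one step away from reduced, e.g. a flat square built from
// four triangles should reduce to the same square built from two.
void testOneStepFromReduced()
{
    Mesh m = makeSquareOfFourTriangles();   // invented helper
    Mesh expected = makeSquareOfTwoTriangles();
    m.Reduce();
    CPPUNIT_ASSERT(expected == m);
}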

And of course, since there's clearly an algorithm for this, I'd look it
up to get a sense of how it's supposed to work. I'd try to resist just
typing it in willy-nilly.

Knowing nothing, that's what I'd do. It's what I always do, actually ...

Hope it's helpful ... tell us more about the problem and solution?

Ronald E Jeffries
http://www.XProgramming.com
http://www.objectmentor.com
I'm giving the best advice I have. You get to decide whether it's true for you.

Peter Hansen

Feb 1, 2002, 9:56:56 PM
Tom Plunket wrote:
>
> A "feature" was handed to me- "Implement polygon reduction on
> arbitrary polygonal meshes."
[...snip high-level test...]

> Now this is a very high-level test, and it's taken several days
> of work to implement everything in between (trying to think of
> the simplest thing each time, then testing the sub-functionality).
> However, I get the feeling that this is the "wrong" way
> to test since I had an outstanding failure for several days, and
> it feels like the feature "should" have been broken down into
> smaller tasks.

> Any guidance to steer my feelings? :)

Well, *some* designs can't be evolved...

(Not very helpful, I suppose. :)

Hmmm... You've only a single requirement, essentially,
which is something along the lines of "Implement a
function that returns the maximally reduced mesh
for an arbitrary input mesh, maintaining the geometry."

If that were my requirement, I'd certainly start with
research. Is it really the case that no one else has
developed an algorithm for this? If that is true, you
should be looking at a Spike Solution to learn the
true nature of the problem, before you try actually
(test-first) developing it as production code.

-Peter

Ron Jeffries

Feb 1, 2002, 10:06:29 PM
On Fri, 01 Feb 2002 21:56:56 -0500, Peter Hansen <pe...@engcorp.com>
wrote:

>Well, *some* designs can't be evolved...

How do we know that? I'd really like to know ... because I haven't ever
encountered one that couldn't, except for trivially /simple/ ones.

(I don't know how to evolve in any interesting way to a function that,
say, adds 2 to its input argument. ;-> )

Richard MacDonald

Feb 2, 2002, 12:26:18 AM
"Tom Plunket" <to...@fancy.org> wrote in message news:cucm5uk0lrh6sprlk...@4ax.com...

> Hey all-
>
> In my continuing pursuit of effective test coverage, I've come up
> against something interesting.
>
> A "feature" was handed to me- "Implement polygon reduction on
> arbitrary polygonal meshes."
>
> For those of you unfamiliar with the terms, basically I have
> arbitrary geometry, constructed of triangles, and I need to
> reduce the number of triangles while keeping the resulting
> geometry the same.
>
[snip]

>
> Any guidance to steer my feelings? :)

I'm afraid I have to propose something counter to the XP test-first.
First, I presume you already know the algorithm, so your detailed
design can occur during development. I presume you are also
going to prototype or spike this algorithm, although it might be a
several day spike. But it is a research project, right? You don't
really know how you're going to do it.

If the latter is true, stuff the tests for now. Write some input data
and go at it. Go deep, not wide until you get a result. You can
debug and step your way through to the end.

Then write the overall test and validate it. Then start modularizing
it and writing the tests to test the modules.

I personally think the research project development model is just
going to be slowed down by tests until you've got the big picture
algorithm well in hand. Else you'll just be rewriting your tests.

P.S. I've done a little geometrical programming myself. All strictly
research where the algorithm was not even known beforehand.
I just believe that if you have to cross the unknown continent,
build a narrow path all the way across first then widen it.


Richard MacDonald

Feb 2, 2002, 12:26:19 AM
"Ron Jeffries" <ronje...@REMOVEacm.org> wrote in message
news:BD88C2C7ED65278A.003E8DD3...@lp.airnews.net...

> On Fri, 01 Feb 2002 21:56:56 -0500, Peter Hansen <pe...@engcorp.com>
> wrote:
>
> >Well, *some* designs can't be evolved...
>
> How do we know that? I'd really like to know ... because I haven't ever
> encountered one that couldn't, except for trivially /simple/ ones.
>
> (I don't know how to evolve in any interesting way to a function that,
> say, adds 2 to its input argument. ;-> )
>
I have done a lot of work in programming simply to figure out the algorithm.
It's like I have to cross an unknown continent. The single most important
factor for success is to first get myself across. I usually run into a mountain
range or two, then either go over the top or find another way around.

But it's only when I've crossed that I can stand back and see the whole
continent. Only then might I see that there was a far easier path somewhere
else that I missed during the exploration.

Perhaps it's possible to move my path to the other path. Perhaps it's faster
to toss everything and do it again. Just like the spike idea.

Have you ever developed a spike that was helpful, it worked, but you later
found a better solution for which no reuse from the spike was possible?


Ron Jeffries

Feb 2, 2002, 8:15:18 AM
On Sat, 02 Feb 2002 05:26:18 GMT, "Richard MacDonald"
<macdo...@worldnet.att.net> wrote:

>I'm afraid I have to propose something counter to the XP test-first.
>First, I presume you already know the algorithm, so your detailed
>design can occur during development. I presume you are also
>going to prototype or spike this algorithm, although it might be a
>several day spike. But it is a research project, right? You don't
>really know how you're going to do it.
>
>If the latter is true, stuff the tests for now. Write some input data
>and go at it. Go deep, not wide until you get a result. You can
>debug and step your way through to the end.

Yes. I do that sometimes. I find it feels slower and it certainly feels
more stressful and less satisfying. It's like I'm /debugging/ all the
time.


>
>Then write the overall test and validate it. Then start modularizing
>it and writing the tests to test the modules.
>
>I personally think the research project development model is just
>going to be slowed down by tests until you've got the big picture
>algorithm well in hand. Else you'll just be rewriting your tests.

I'm just guessing here, but I suspect you're guessing here.


>
>P.S. I've done a little geometrical programming myself. All strictly
>research where the algorithm was not even known beforehand.
>I just believe that if you have to cross the unknown continent,
>build a narrow path all the way across first then widen it.

Spikes are good. What about test-first spikes? I'm not sure.

Ron Jeffries

Feb 2, 2002, 8:12:26 AM
On Sat, 02 Feb 2002 05:26:19 GMT, "Richard MacDonald"
<macdo...@worldnet.att.net> wrote:

>I have done a lot of work in programming simply to figure out the algorithm.
>It's like I have to cross an unknown continent. The single most important
>factor for success is to first get myself across. I usually run into a mountain
>range or two, then either go over the top or find another way around.
>
>But it's only when I've crossed that I can stand back and see the whole
>continent. Only then might I see that there was a far easier path somewhere
>else that I missed during the exploration.
>
>Perhaps it's possible to move my path to the other path. Perhaps it's faster
>to toss everything and do it again. Just like the spike idea.
>
>Have you ever developed a spike that was helpful, it worked, but you later
>found a better solution for which no reuse from the spike was possible?

I don't recall it, but probably. Well, on the other hand, there was
yesterday ...

I wanted XProgramming to put a randomly-chosen link to a book review on
the front page. Chet found a little patch of Javascript to put a random
image up on a random web page.

When we finished putting it into my XSLT-created web page, there
probably wasn't a line of it that hadn't been touched.

So maybe it happens all the time.

The thing about incremental design is that it seems to usually take you
where you need to go. Sometimes I would toss everything, but I seem to
do it very rarely these days. The incremental way works better for me
most of the time.

Regards,

Marc Poulin

Feb 2, 2002, 10:03:34 AM
In article <BD88C2C7ED65278A.003E8DD3...@lp.airnews.net>, "Ron Jeffries" <ronje...@removeacm.org> wrote:
>
> (I don't know how to evolve in any interesting way to a function that,
> say, adds 2 to its input argument. ;-> )

I do :-)

Partial evaluation of C++ templates. Many of the
calculations can be done at compile time.

Here's a reference:
http://www.osl.iu.edu/~tveldhui/papers/pepm99/

Ron Jeffries

Feb 2, 2002, 1:05:38 PM
On Sat, 02 Feb 2002 10:03:34 -0500, "Marc Poulin" <mpo...@verinet.com>
wrote:

Please explain how this addresses the ability to write a function that
adds 2 to its input argument in an evolutionary way. I don't see that it
helps.

If I needed such a function (which is hard to imagine, since its point
was to be a really simple example), I'd write

def addTwo(x)
  x + 2
end

Very difficult to work your way up to that.

I suppose it could be

def testZero
  assertequal(2, addTwo(0))
end

with an implementation of

def addTwo(x)
  return 2
end

but even I'm not /that/ evolutionary ...

Marc Poulin

Feb 2, 2002, 12:26:49 PM
In article <83B8E0025CE6016A.2D0277CB...@lp.airnews.net>, "Ron Jeffries" <ronje...@removeacm.org> wrote:

This isn't a C++ newsgroup, so I'll spare you the gory details and
hope you can still get the gist of what I'm trying to say.

Well, first we do TheSimplestThing:

int add_two(int i);

But since C++ is strongly typed, the functions quickly multiply:

int add_two(int i);
long int add_two(long int i);
float add_two(float f);
double add_two(double d);

We see that we are writing the same code over and over, so
we refactor using the OnceAndOnlyOnce guideline. We replace all
these functions with a single template function that can be
used for any type:

template <class T>
T add_two(T t)
{
    return t + 2;
}

So now I can write

int main()
{
    int i = add_two(1);      // i equals 3
    double d = add_two(1.0); // d equals 3.0
}

Now comes the clever part. Since the compiler sees all these
constant values, it can pre-compute all the final results.

int i = 3;
double d = 3.0;

The compiler doesn't need to call the functions at run-time
since the answers never change (this only works for CONSTANT
values, of course).

That's the basic idea in a nutshell.

Richard MacDonald

Feb 2, 2002, 3:15:29 PM
"Ron Jeffries" <ronje...@REMOVEacm.org> wrote in message
news:91998156D0F3F636.2A634B19...@lp.airnews.net...

> On Sat, 02 Feb 2002 05:26:18 GMT, "Richard MacDonald"
> <macdo...@worldnet.att.net> wrote:
>
> >I'm afraid I have to propose something counter to the XP test-first.
> >First, I presume you already know the algorithm, so your detailed
> >design can occur during development. I presume you are also
> >going to prototype or spike this algorithm, although it might be a
> >several day spike. But it is a research project, right? You don't
> >really know how you're going to do it.
> >
> >If the latter is true, stuff the tests for now. Write some input data
> >and go at it. Go deep, not wide until you get a result. You can
> >debug and step your way through to the end.
>
> Yes. I do that sometimes. I find it feels slower and it certainly feels
> more stressful and less satisfying. It's like I'm /debugging/ all the
> time.

Well, I don't have enough experience with your alternative, and I have
a lot of experience with mine. I hoped I had made that context clear in
my original response, but if not, here it is.

> >Then write the overall test and validate it. Then start modularizing
> >it and writing the tests to test the modules.
> >
> >I personally think the research project development model is just
> >going to be slowed down by tests until you've got the big picture
> >algorithm well in hand. Else you'll just be rewriting your tests.
>
> I'm just guessing here, but I suspect you're guessing here.

Actually, absolutely not. I've had several experiences where I had a
bear of an algorithm and started writing tests too early. When I
realized how wrong my original design was, I wound up being
slowed down by having to refactor the tests too much. Or simply
abandoning them. I could have done better without the tests.

Please note that I am not claiming this is a general condition. I
expect the original poster to use his own gut.

Please note: When I say "better", I mean only *slightly* better.
In the theoretical sense. The slight difference between this vs having
no tests at all and then falling on my face is so minor as to be ignorable.
I have never been pissed off because I wrote too many tests too early.
Even though I have done so.
Using a test framework is still the single most important thing I
have ever adopted.

> >P.S. I've done a little geometrical programming myself. All strictly
> >research where the algorithm was not even known beforehand.
> >I just believe that if you have to cross the unknown continent,
> >build a narrow path all the way across first then widen it.
>
> Spikes are good. What about test-first spikes? I'm not sure.

Once again, I don't have enough experience in this. One potential
problem is that I can usually quickly abstract the algorithm to a very high
level (which is a good thing in math). This is how I would want to write
my test first. However, that high-level abstraction indicates the
need for a high-level interface or even *gasp* a framework. (*) And
the time when I'm exploring the algorithm is far too early to be
worrying about building that high-level abstraction. If instead I
start working on a low-level implementation, I'm going to get so
much "movement" from refactoring that I believe the extra
baggage of those tests will slow me down.

(*) Think of writing the Circle/Ellipse classes by starting with the
suite of Conic tests.

The truth is that I always start with a test framework and write a
simple test which allows me to walk through the code in a
debugger. Once I confirm that that bit works for that test, I move
on to the next bit. I may either extend the original test to take
me deeper (and set breakpoints at the current position where I
am working), or I may put the first test aside and write another
test to allow me to work on the next bit. But my point is that I
might decide to let some tests slide for a while (get broken and
stay broken). Yes this is dangerous, however, my defense is
that figuring out this stuff is hard enough that it is dangerous. I
can get lost and hit the wall. But that is why I would place it
in the spike category.

Of course, once I have a working "path across the continent" and
I don't see any better "paths", then I build the interstate with
additional tests. About the only thing I'm not doing well enough
is the test-first part.


Ron Jeffries

Feb 2, 2002, 8:48:21 PM
On Sat, 02 Feb 2002 12:26:49 -0500, "Marc Poulin" <mpo...@verinet.com>
wrote:

>In article <83B8E0025CE6016A.2D0277CB...@lp.airnews.net>, "Ron Jeffries" <ronje...@removeacm.org> wrote:
>
>This isn't a C++ newsgroup, so I'll spare you the gory details and
>hope you can still get the gist of what I'm trying to say.

Thanks. I knew how to use templates, wasn't thinking that was the
problem I had set out to solve.

Really good and clear example though!

Thanks again,

Ron Jeffries

Feb 2, 2002, 8:55:09 PM
On Sat, 02 Feb 2002 20:15:29 GMT, "Richard MacDonald"
<macdo...@worldnet.att.net> wrote:

>"Ron Jeffries" <ronje...@REMOVEacm.org> wrote in message
>news:91998156D0F3F636.2A634B19...@lp.airnews.net...
>> On Sat, 02 Feb 2002 05:26:18 GMT, "Richard MacDonald"
>> <macdo...@worldnet.att.net> wrote:
>>
>> >I'm afraid I have to propose something counter to the XP test-first.
>> >First, I presume you already know the algorithm, so your detailed
>> >design can occur during development. I presume you are also
>> >going to prototype or spike this algorithm, although it might be a
>> >several day spike. But it is a research project, right? You don't
>> >really know how you're going to do it.
>> >
>> >If the latter is true, stuff the tests for now. Write some input data
>> >and go at it. Go deep, not wide until you get a result. You can
>> >debug and step your way through to the end.
>>
>> Yes. I do that sometimes. I find it feels slower and it certainly feels
>> more stressful and less satisfying. It's like I'm /debugging/ all the
>> time.
>
>Well, I don't have enough experience with your alternative, and I have
>a lot of experience with mine. I hoped I had made that context clear in
>my original response, but if not, here it is.

Wasn't arguing, I was just reporting what works for me in the faint hope
that it'd be valuable. ;->


>
>> >Then write the overall test and validate it. Then start modularizing
>> >it and writing the tests to test the modules.
>> >
>> >I personally think the research project development model is just
>> >going to be slowed down by tests until you've got the big picture
>> >algorithm well in hand. Else you'll just be rewriting your tests.
>>
>> I'm just guessing here, but I suspect you're guessing here.
>
>Actually, absolutely not. I've had several experiences where I had a
>bear of an algorithm and started writing tests too early. When I
>realized how wrong my original design was, I wound up being
>slowed down by having to refactor the tests too much. Or simply
>abandoning them. I could have done better without the tests.
>
>Please note that I am not claiming this is a general condition. I
>expect the original poster to use his own gut.

Yes. Experienced gut, I'd hope. It does take a while to get good at
working in alternative ways. Of course folks have to decide if they want
to make the investment ...


>
>Please note: When I say "better", I mean only *slightly* better.
>In the theoretical sense. The slight difference between this vs having
>no tests at all and then falling on my face is so minor as to be ignorable.
>I have never been pissed off because I wrote too many tests too early.
>Even though I have done so.
>Using a test framework is still the single most important thing I
>have ever adopted.

Yes. Here and below it sounds like sometimes you write more than one
test at a time. Is that the case? Do you sometimes write one, make it
run, etc? Have you noticed any difference in "performance" between the
two ways of proceeding?


>
>> >P.S. I've done a little geometrical programming myself. All strictly
>> >research where the algorithm was not even known beforehand.
>> >I just believe that if you have to cross the unknown continent,
>> >build a narrow path all the way across first then widen it.
>>
>> Spikes are good. What about test-first spikes? I'm not sure.
>
>Once again, I don't have enough experience in this. One potential
>problem is that I can usually quickly abstract the algorithm to a very high
>level (which is a good thing in math). This is how I would want to write
>my test first. However, that high-level abstraction indicates the
>need for a high-level interface or even *gasp* a framework. (*) And
>the time when I'm exploring the algorithm is far too early to be
>worrying about building that high-level abstraction. If instead I
>start working on a low-level implementation, I'm going to get so
>much "movement" from refactoring that I believe the extra
>baggage of those tests will slow me down.
>
>(*) Think of writing the Circle/Ellipse classes by starting with the
>suite of Conic tests.

Suite, yes, that could be a problem. Even when spiking, I /think/ I'd do
better one test at a time. But I usually do my spikes the old fashioned
way, and I usually get into debugging trouble with them. It's just that
my spikes are usually things like "ftp a file under program control" and
I have /no idea/ how to write an interesting test, compared to just
looking to see if the file is there.

Then I get into a debugging cycle. I'll never learn ...


>
>The truth is that I always start with a test framework and write a
>simple test which allows me to walk through the code in a
>debugger. Once I confirm that that bit works for that test, I move
>on to the next bit. I may either extend the original test to take
>me deeper (and set breakpoints at the current position where I
>am working), or I may put the first test aside and write another
>test to allow me to work on the next bit. But my point is that I
>might decide to let some tests slide for a while (get broken and
>stay broken). Yes this is dangerous, however, my defense is
>that figuring out this stuff is hard enough that it is dangerous. I
>can get lost and hit the wall. But that is why I would place it
>in the spike category.
>
>Of course, once I have a working "path across the continent" and
>I don't see any better "paths", then I build the interstate with
>additional tests. About the only thing I'm not doing well enough
>is the test-first part.

You're working with new ideas, different ideas. You're learning.
Whatever you learn will be useful. And please continue to share it with
the rest of us!

Regards,

Peter Hansen

Feb 3, 2002, 12:43:38 AM
Richard MacDonald wrote:
>
> "Ron Jeffries" <ronje...@REMOVEacm.org> wrote in message
> news:BD88C2C7ED65278A.003E8DD3...@lp.airnews.net...
> > On Fri, 01 Feb 2002 21:56:56 -0500, Peter Hansen <pe...@engcorp.com>
> > wrote:
> >
> > >Well, *some* designs can't be evolved...
> >
> > How do we know that? I'd really like to know ... because I haven't ever
> > encountered one that couldn't, except for trivially /simple/ ones.
> >
> > (I don't know how to evolve in any interesting way to a function that,
> > say, adds 2 to its input argument. ;-> )

[I'm quoting Richard's response only because Ron's response to my
post never made it here. At the risk of misinterpreting the question
because I missed something Richard might have removed, I'll try an answer.]

Any time I think "evolution" I'm thinking in patterns identified
in my amateur dabblings in genetic algorithms.

One of the insights I reached once was that a key factor in determining
when genetic algorithms can be effective is whether the solution
space (better term here: fitness landscape) has gradients. If you
have lots of sloping hills and valleys, you can evolve a solution.
Many interesting problems are like this. (Yes, all this is obvious
to some of the great brains around here, but I remember not having
an intuitive understanding of this.)

If you have a flat landscape, with extremely sharp spikes at various points
where solutions exist, genetic algorithms (and, I suspect, almost anything
but brute force) are ineffective.

The simplest example I've thought of, which first showed me the limits
of genetic algorithms, was cracking encryption. Provided the encryption
algorithm is good, the fitness levels of all "wrong" solutions are equal,
and the one "right" solution has a maximum fitness value. No solution
can be evolved. In more concrete terms, you don't get partial decryption
of the message just because part of your decryption key matches the
correct one.
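
To make the contrast concrete, here is a toy sketch (C++, all names
invented): a GA or hill-climber gets graded feedback from the first
fitness function below and none at all from the second.

#include <cstddef>
#include <vector>

// Graded landscape: fitness rises with every matching bit, so each
// mutation gets "warmer/colder" feedback that evolution can climb.
std::size_t gradedFitness(const std::vector<bool>& guess,
                          const std::vector<bool>& target)
{
    std::size_t matches = 0;
    for (std::size_t i = 0; i < guess.size() && i < target.size(); ++i)
        if (guess[i] == target[i])
            ++matches;
    return matches;
}

// Flat landscape: every wrong key scores the same, like a good cipher.
// There is no slope to climb; only the exact answer scores at all.
std::size_t flatFitness(const std::vector<bool>& guess,
                        const std::vector<bool>& target)
{
    return guess == target ? 1u : 0u;
}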

I suspect there are dozens of other such examples, but I've never
explored that avenue further (I suppose I find any area where
designs cannot be evolved as inherently uninteresting). Perhaps this
actually provides a definition for "trivially /simple/"?

-Peter

Richard MacDonald

Feb 3, 2002, 2:56:10 AM
"Ron Jeffries" <ronje...@REMOVEacm.org> wrote in message
news:76F3CF8C1C3E70FF.7216E289...@lp.airnews.net...

> On Sat, 02 Feb 2002 20:15:29 GMT, "Richard MacDonald"
> <macdo...@worldnet.att.net> wrote:
>
> >"Ron Jeffries" <ronje...@REMOVEacm.org> wrote in message
> >news:91998156D0F3F636.2A634B19...@lp.airnews.net...
> >> On Sat, 02 Feb 2002 05:26:18 GMT, "Richard MacDonald"
> >> <macdo...@worldnet.att.net> wrote:
> >>
> >> >I'm afraid I have to propose something counter to the XP test-first.
> >> >First, I presume you already know the algorithm, so your detailed
> >> >design can occur during development. I presume you are also
> >> >going to prototype or spike this algorithm, although it might be a
> >> >several day spike. But it is a research project, right? You don't
> >> >really know how you're going to do it.
> >> >
> >> >If the latter is true, stuff the tests for now. Write some input data
> >> >and go at it. Go deep, not wide until you get a result. You can
> >> >debug and step your way through to the end.
> >>
> >> Yes. I do that sometimes. I find it feels slower and it certainly feels
> >> more stressful and less satisfying. It's like I'm /debugging/ all the
> >> time.
> >
> >Well I don't have enough experience with your alternative, and I have
> >a lot of experience my way. I hoped I had made that context clear in
> >my original response, but if not here it is.
>
> Wasn't arguing, I was just reporting what works for me in the faint hope
> that it'd be valuable. ;->

Wasn't arguing that you were arguing :-)

Let me first say that I believe in the "magic" of test-first as far as it relates
to good design, but I have yet to do it at a level where that occurs.
So all I can talk about is the technique of developing working algorithms for
which I have only vague ideas, where I know the path to the final
implementation will be a pure adventure.

When developing a difficult algorithm / research project, I write one
test to give me something to work on. I think you need a test to give you
direction and debugging support (stepping through the debugger), but it's
better to go deep rather than wide. There are too many unknowns ahead
that could require you to backtrack, so don't take the risk of "firming up"
(going wide) the parts that are currently complete but may have to be
redone.

I'm not sure I'm answering your question. There may be some times
where it is easy to write a few tests at once, but generally the essence
of my current focus can be handled in one test. Then I keep adding
new tests to the existing ones. FWIW, I've taken "single
test" implementations and -- via combinatorics such as varying the
order of the parameters, working with 1, 2, 3...n parameters, etc. --
been able to create thousands of tests automatically...and found
a really subtle bug in the 1345th test :-) So I do appreciate the need
to eventually "go wide". Just not when I'm still figuring out how something
should be built.
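
A crude sketch of what I mean by generating cases combinatorially (C++
here just for illustration; the function under test and its invariants
are invented, and the real thing generated far more combinations):

#include <cassert>

// Stand-in for the code under test.
int add(int a, int b) { return a + b; }

// Mechanically enumerate combinations of inputs and check invariants
// that must hold for every one, instead of hand-writing each case.
void testAllPairs()
{
    for (int a = -10; a <= 10; ++a)
        for (int b = -10; b <= 10; ++b)
        {
            assert(add(a, b) == add(b, a));   // varying parameter order
            assert(add(a, 0) == a);           // identity element
        }
}

int main() { testAllPairs(); }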

And I have had occasion to find that the 1345th test invalidated my
entire design. One of the hardest things I ever wrote (this stunned me;
I thought it was going to be easy and it turned into a nightmare :-)
was a NumericFormatter
object to take any number (say from 1e-99 to 1e+99) and output
a String showing the maximum precision that could be displayed in
n characters, rounded to m digits, and p precision, with or without
US std commas. It took me 3 days and became a hell I eventually
solved by hideous brute force. FWIW, the Smalltalk code and test
code can be found at
http://home.att.net/~macdonaldrj4/smalltalk/index.htm
I must say I was proud of the result and embarrassed by the implementation.
But that is what you get from a good self-taught Fortran programmer
who doesn't have Knuth on his shelf :-)

Ron, I may have lost the context in which you were asking me to compare
the "two ways of proceeding". If I'm not answering, please rephrase.

> >> >P.S. I've done a little geometrical programming myself. All strictly
> >> >research where the algorithm was not even known beforehand.
> >> >I just believe that if you have to cross the unknown continent,
> >> >build a narrow path all the way across first then widen it.
> >>
> >> Spikes are good. What about test-first spikes? I'm not sure.
> >
> >Once again, I don't have enough experience in this. One potential
> >problem is that I can usually quickly abstract the algorithm to a very high
> >level (which is a good thing in math). This is how I would want to write
> >my test first. However, that high-level abstraction indicates the
> >need for a high-level interface or even *gasp* a framework. (*) And
> >the time when I'm exploring the algorithm is far too early to be
> >worrying about building that high-level abstraction. If instead I
> >start working on a low-level implementation, I'm going to get so
> >much "movement" from refactoring that I believe the extra
> >baggage of those tests will slow me down.
> >
> >(*) Think of writing the Circle/Ellipse classes by starting with the
> >suite of Conic tests.
>
> Suite, yes, that could be a problem. Even when spiking, I /think/ I'd do
> better one test at a time. But I usually do my spikes the old fashioned
> way, and I usually get into debugging trouble with them. It's just that
> my spikes are usually things like "ftp a file under program control" and
> I have /no idea/ how to write an interesting test, compared to just
> looking to see if the file is there.
>
> Then I get into a debugging cycle. I'll never learn ...

No question that I also run into a wall and get lost when I stray too
far from test support. But I often get so involved in exploring the
algorithm that I just keep going on and on writing the code and
adding a bunch of bugs that take me a long time to fix when I
finally get around to catching up with the tests. And this is inefficient.
But I also fear that if I stop to write the tests I'll lose the concept
of my solution that is currently in my mind and needs to get written
before it gets lost :-)

I'll never learn either. . .

But before I tried automated tests I often failed. Nowadays I don't fail.
I just screw up locally :-)

The folks who have posted about how they keep a notebook and
journal...that would really help me a great deal. Instead I just keep
adding single scraps of paper to a messy desk. That is what I really
need to work on.

> >The truth is that I always start with a test framework and write a
> >simple test which allows me to walk through the code in a
> >debugger. Once I confirm that that bit works for that test, I move
> >on to the next bit. I may either extend the original test to take
> >me deeper (and set breakpoints at the current position where I
> >am working), or I may put the first test aside and write another
> >test to allow me to work on the next bit. But my point is that I
> >might decide to let some tests slide for a while (get broken and
> >stay broken). Yes this is dangerous, however, my defense is
> >that figuring out this stuff is hard enough that it is dangerous. I
> >can get lost and hit the wall. But that is why I would place it
> >in the spike category.
> >
> >Of course, once I have a working "path across the continent" and
> >I don't see any better "paths", then I build the interstate with
> >additional tests. About the only thing I'm not doing well enough
> >is the test-first part.
>
> You're working with new ideas, different ideas. You're learning.
> Whatever you learn will be useful. And please continue to share it with
> the rest of us!

Mutual.


Ron Jeffries

Feb 3, 2002, 7:55:30 AM
Good report. I'm just snipping this bit because I think we're close
enough on the rest and I'm wondering about this.

On Sun, 03 Feb 2002 07:56:10 GMT, "Richard MacDonald"
<macdo...@worldnet.att.net> wrote:

>And I have had occasion to find that the 1345th test invalidated my
>entire design. One of the hardest things I ever wrote (this stunned me;
>I thought it was going to be easy and it turned into a nightmare :-)
>was a NumericFormatter
>object to take any number (say from 1e-99 to 1e+99) and output
>a String showing the maximum precision that could be displayed in
>n characters, rounded to m digits, and p precision, with or without
>US std commas. It took me 3 days and became a hell I eventually
>solved by hideous brute force. FWIW, the Smalltalk code and test
>code can be found at
>http://home.att.net/~macdonaldrj4/smalltalk/index.htm
>I must say I was proud of the result and embarrassed by the implementation.
>But that is what you get from a good self-taught Fortran programmer
>who doesn't have Knuth on his shelf :-)

In your copious free time, please tell us a bit more about this case, or
another, where a test way downstream invalidated the entire design.

Let me talk about the smalltalk code, which I've just looked at for a
tiny bit and haven't run at all.

Thing one: it is now handling nil arguments all over. Some of the
methods are nearly doubled in size just from handling nil. That makes
things hard to understand, hard to test, hard to write. So even if I was
planning to handle nils, I'd leave the handling till last. Can't tell
whether you did that or not, as we have only the final result, not the
history.

Second, in order to do it, you put methods all over the numeric
hierarchy. Every time I've done that it has gotten me in trouble. The
reason is that the number hierarchy is part implementation and part
logic. Now I try not to do that, because it's always hard to get right
and the code is all over h*ll. So I might have begun with a
NumericFormatter object. That object would use the given properties of
the various number classes, but would retain formatting control in
itself.

Now I'm not sure this would have been better. I think it would be easier
for me to understand, and I think it's what I'd do. I might be wrong on
all counts. I would hope that when I got it done, there might be
explicit methods that really belong on Integer or Float, and I'd consider
moving them there at that time, after the shape the solution wants to
take had become clear in my mind.

But anyway ... what was it about the Nth test on this that broke your
design, and what part of it broke? What did you have to do about it?


The skill, and we all have to learn it and none of us has it exactly
right, is in the choosing of the next test case. Perhaps you went a bit
too far testing the other 1344 things, and the 1345th test could have
been the 15th test instead. I don't know, of course: I wasn't there.

But I do know that when a test causes me trouble, I always wish I had
written that one sooner, and I usually also find that I was wandering
down a paved road, while putting off exploring the woods. I usually look
at the test that breaks the world, and realize that I had thought of it
and consciously put it off. I try to make part of my test choice
thinking address "what could break this", or "what will I learn most
from next".

Sometimes the purpose of the test is to break the design and get me to
go in another direction. I find, and can't explain why, that my design
changes don't often feel like "breaking" the design, just discovering
it. Maybe I'm working on problems that break down that way. Or maybe I'm
breaking problems down that way. Hard to know.

Drop by Michigan sometime with a problem. Chet and I will pair with you
and we'll see what we learn.

Of course, you're doing just fine on your own!

Ron Jeffries

Feb 3, 2002, 8:02:26 AM
On Sun, 03 Feb 2002 00:43:38 -0500, Peter Hansen <pe...@engcorp.com>
wrote:

>Any time I think "evolution" I'm thinking in patterns identified
>in my amateur dabblings in genetic algorithms.
>
>One of the insights I reached once was that a key factor in determining
>when genetic algorithms can be effective is whether the solution
>space (better term here: fitness landscape) has gradients. If you
>have lots of sloping hills and valleys, you can evolve a solution.
>Many interesting problems are like this. (Yes, all this is obvious
>to some of the great brains around here, but I remember not having
>an intuitive understanding of this.)
>
>If you have a flat landscape, with extremely sharp spikes at various points
>where solutions exist, genetic algorithms (and, I suspect, almost anything
>but brute force) are ineffective.
>
>The simplest example I've thought of, which first showed me the limits
>of genetic algorithms, was cracking encryption. Provided the encryption
>algorithm is good, the fitness levels of all "wrong" solutions are equal,
>and the one "right" solution has a maximum fitness value. No solution
>can be evolved. In more concrete terms, you don't get partial decryption
>of the message just because part of your decryption key matches the
>correct one.

Yes. We might say it this way: if you can't get decent feedback from
your tests (they just keep saying wrong wrong wrong, not better worse
better), evolution doesn't work at all well. We're together on this.

When there's a human involved in the process, things are a bit different
in two important ways:

1. The human can see feedback in many cases where we couldn't write an
algorithm. When /we/ are evolving a design using our mind, we see things
that we couldn't yet program into an evolutionary algorithm. So we can
do design evolution in our heads that we can't as yet do with computer
programs.

2. The human can see other ways of looking at the problem. Many times in
building a program, a strictly incremental way of proceeding might get
trapped in some local pit. But we see more than the local pit, we see
the whole program and how it works. (This is especially true if we are
creating good modularity as we go.) So we turn the program on its side
and often it is now obvious that we're in a pit, and obvious which way
to go.

To me, incremental development isn't without big "insights". It's just
without big "steps". I feel sure that there is something theoretically
important going on in the incremental-with-refactoring design process.
Can't prove it yet.

Regards,

Richard MacDonald

Feb 3, 2002, 1:26:34 PM
"Ron Jeffries" <ronje...@REMOVEacm.org> wrote in message
news:B2F1830C663A397A.E46F9272...@lp.airnews.net...

> Good report. I'm just snipping this bit because I think we're close
> enough on the rest and I'm wondering about this.
>
> On Sun, 03 Feb 2002 07:56:10 GMT, "Richard MacDonald"
> <macdo...@worldnet.att.net> wrote:
>
> >And I have had occasion to find that the 1345th test invalidated my
> >entire design. One of the hardest things I ever wrote (this stunned me;
> >I thought it was going to be easy and it turned into a nightmare :-)
> >was a NumericFormatter
> >object to take any number (say from 1e-99 to 1e+99) and output
> >a String showing the maximum precision that could be displayed in
> >n characters, rounded to m digits, and p precision, with or without
> >US std commas. It took me 3 days and became a hell I eventually
> >solved by hideous brute force. FWIW, the Smalltalk code and test
> >code can be found at
> >http://home.att.net/~macdonaldrj4/smalltalk/index.htm
> >I must say I was proud of the result and embarrassed by the implementation.
> >But that is what you get from a good self-taught Fortran programmer
> >who doesn't have Knuth on his shelf :-)
>
> In your copious free time, please tell us a bit more about this case, or
> another, where a test way downstream invalidated the entire design.

Truthfully, I'm having trouble remembering exactly what the issue was in
my example code. It's two years old. Let me examine the code and see if I
remember. If memory serves, it was in a combination of rounding when
the last digit is '5' and precision. Or something like that. Basically two
separate algorithms that worked fine by themselves but failed
when combined. In the meantime, another example:

I have a system for solving constrained variables. A variable is an unknown
number we are trying to solve for. Call it x. A constraint is a mathematical
relationship between variables that must be satisfied. Call it h. An example
is: x1+x2=x3. The interesting problems are when I have many x and h.
The problem is solvable when the number of x equals the number of h.
I use a solution approach where I instantiate both x and h and link them
in a graph, then use graph algorithms to wander around looking for local
solutions. Say I receive a message from the outside world saying "please
set x1 equals 3". I examine the constraints around x1 to see if this is
allowed. If so, I set the value of x1. If not, I throw an exception. Now,
the constraints connected to x1 are connected to other variables are
connected to other constraints are connected to...so I might have to
wander far and wide in a depth/breadth first search to determine if this
change is legal.

I first implemented this with code in the variable that essentially set the value,
then called a block to propagate this change. This block was recursive
to other variables and their blocks. If all blocks succeeded, the changes
were committed. If any one block failed, an exception was thrown. I
wrote a simple trap around each block to set the new value back to
the old value in case of an exception. It was very simple. I'd even say
it was the "simplest thing possible" :-)

I got down the road into bigger problems and I suddenly started having
stack overflow problems. (This was VSE.) Turned out that each block's exception
trap was adding to the stack, and things blew up at the 83rd stack addition.
I could not fix this.

I had to fix the problem by abandoning the entire approach and creating a new
controller that performed the graph walk without
recursion and maintained a dictionary of variable->value. So the
variables became dumb and all the work was moved to this new object.
It was a completely different approach and required major brain surgery
on most of the code.
It took me 3 days. It was very difficult. Thank God I had thousands of tests,
and the ones that broke showed me some
of the most subtle problems you could imagine with my new solution.
Errors I would never have dreamed of.
I know that without that test code I would *never* have
succeeded. It was simply too hard and the bugs were too subtle. I would
have run into seemingly "random" errors and would never have been able to
find the source of the problem. I would have run out of gas, no question.

The best thing about my tests was that they were at all integration levels of the code,
so I was very quickly able to find the source of the problem.
3 days of sequential problems :-) Note that
this argues against my other argument of writing a single high-level test
and working through the algorithm. But remember, at the time I found my
design flaw, it was months later and I already had a mature system in place.
I had already bullet-proofed the test code.

(Aside: Experiences like this make me dislike the "throw it away if you don't
finish it by the end of the day" or "refactoring should not take 3 days"
ideas. I understand the sentiment -- believe me :-) -- but sometimes
it is not practical.)

Now for the kicker. When I found the stack overflow bug, I was able to guess
the problem in a few minutes. I was able to write a simple test for it in a couple
of minutes. (Just write a deep single chain of constraints and try it out.)
Sure enough, I isolated the problem. One single test would have saved me
all those problems. And it was such an obvious test that I kicked myself
for not thinking about it sooner. Live and learn, but realize you will always
make these mistakes.

> Let me talk about the smalltalk code, which I've just looked at for a
> tiny bit and haven't run at all.

Please note that it probably depends on a whole lot of methods in the base
classes that I added myself. That would require you to download some
other packages. It may not be worthwhile. Best just to scan the code.

> Thing one: it is now handling nil arguments all over. Some of the
> methods are nearly doubled in size just from handling nil. That makes
> things hard to understand, hard to test, hard to write. So even if I was
> planning to handle nils, I'd leave the handling till last. Can't tell
> whether you did that or not, as we have only the final result, not the
> history.

I think I did leave that till last. Basically, I wrote the various possible settings
(max length, precision, rounding, etc.) independently and
wrote client access code independently. Then I made the algorithms
work in combination -- this was the hard and subtle part. Then I
wrote a single method with multiple parameters for all the settings,
i.e., basically a "switching" method. This method allowed nils as
arguments because it was coming from gui/controller code that
came from user preferences. It was just cleaner for this code to
assume the "worst case method" and send nils when it didn't care.
Yes it is ugly, but it was the easiest part of the problem.

> Second, in order to do it, you put methods all over the numeric
> hierarchy. Every time I've done that it has gotten me in trouble. The
> reason is that the number hierarchy is part implementation and part
> logic. Now I try not to do that, because it's always hard to get right
> and the code is all over h*ll. So I might have begun with a
> NumericFormatter object. That object would use the given properties of
> the various number classes, but would retain formatting control in
> itself.

That objection is very valid. Certainly all this formatting logic in the Number
hierarchy is highly suspicious. OTOH, the logic mirrored the Number hierarchy.
I would have to try the alternative and see. If I ever do it again in Java I will
let you know how it works :-)

Btw, I switched to Smalltalk precisely because it *did* let me hack directly
into the base classes. I am aware of the dangers and concerns and have
allowed myself to get bitten when I get sloppy (I once overrode a Number
method that crashed my VM and was a necessary part of the VM startup.
*That* was fun trying to recover.) However, after a lot of long and hard
thinking, I came to believe that the Number class was the appropriate
place for me to extend my mathematical needs. I have experienced the struggle
when I could not do this. When I was finally exposed to Smalltalk and
saw what it let me do, the lightbulbs went off and I *knew* I had arrived
at the appropriate language. It worked for me, but I am fully aware
that it would be very difficult for another programmer to coexist
with me. I would have to find some compromise if I was doing this
within a team environment.

> Now I'm not sure this would have been better. I think it would be easier
> for me to understand, and I think it's what I'd do. I might be wrong on
> all counts. I would hope that when I got it done, there might be
> explicit methods that really belong on Integer or Float and I'd consider
> moving them there at that time, after the shape the solution wants to
> have had come clear in my mind.

Unless you are willing to live with ugly ifInteger, ifFloat case statements
in client code, you will absolutely have methods that naturally migrate
to these base classes. Classic case statement/polymorphism tradeoff.
But I acknowledge that many of these methods should probably move
to class methods or a separate class. Remember, I said I was proud
of the power of the result and embarrassed about the implementation :-)
In fact, I'm not sure I really wanted to post the code...no, it's ok.

> But anyway ... what was it about the Nth test on this that broke your
> design, and what part of it broke? What did you have to do about it?

It was an unsolvable conflict between two algorithms that worked ok
by themselves but failed in combination. Hopefully my other example
above provides a good illustration as well.

> The skill, and we all have to learn it and none of us has it exactly
> right, is in the choosing of the next test case. Perhaps you went a bit
> too far testing the other 1344 things, and the 1345th test could have
> been the 15th test instead. I don't know, of course: I wasn't there.

In this case, I could only have found that bug by exhaustive testing.
That is what, in fact, happened. In my other example, your suspicion
is exactly correct. I missed a crucial and easy test early on.

> But I do know that when a test causes me trouble, I always wish I had
> written that one sooner, and I usually also find that I was wandering
> down a paved road, while putting off exploring the woods. I usually look
> at the test that breaks the world, and realize that I had thought of it
> and consciously put it off. I try to make part of my test choice
> thinking address "what could break this", or "what will I learn most
> from next".

Hindsight is a bitch because we realize we were dumb the first time.
Not that we can prevent it. I suspect you are correct, and I will experience
this more as I move towards a test-first mentality. It's easy to "slap on"
a bunch of tests and gain false confidence from them when you really were
just testing the same thing over and over. Taking the time to really think
about "what could break this" is better.

> Sometimes the purpose of the test is to break the design and get me to
> go in another direction. I find, and can't explain why, that my design
> changes don't often feel like "breaking" the design, just discovering
> it. Maybe I'm working on problems that break down that way. Or maybe I'm
> breaking problems down that way. Hard to know.

Oh, I agree with that. In fact I feel I spend most of my time "discovering"
the design. I may use the word "break" too easily. But I also acknowledge
that a missed test or a down the road discovery can cause havoc with
something that was implemented long before.

> Drop by Michigan sometime with a problem. Chet and I will pair with you
> and we'll see what we learn.

Sounds great.
Actually, my pairing has not been good. I did it once.
I paired with a cranky Polish guy who wanted to kill me because I was learning
Java simultaneously and was therefore incompetent. In particular I did not
understand the repository so I blew his code away a couple of times :-)
But we realized that this was temporary and we could just get over that hump.
Nevertheless, after 3 days, we simply sat alongside each other and both typed
while hollering at each other. I wrote the domain. He wrote the GUI.
Worked well.

OTOH, our chief scientist and best programmer just finished a pair project
and have told us that we all need to switch to this style. So I'll have some
more experiences this year.

> Of course, you're doing just fine on your own!
>

Could be doing better. True always.


Richard MacDonald

Feb 3, 2002, 1:42:10 PM
"Richard MacDonald" <macdo...@worldnet.att.net> wrote in message news:uhf78.8772$zT.7...@bgtnsc06-news.ops.worldnet.att.net...

> Actually, my pairing has not been good. I did it once.
> I paired with a cranky Polish guy who wanted to kill me because I was learning
> Java simultaneously and was therefore incompetent. In particular I did not
> understand the repository so I blew his code away a couple of times :-)
> But we realized that this was temporary and we could just get over that hump.
> Nevertheless, after 3 days, we simply sat alongside each other and both typed
> while hollering at each other. I wrote the domain. He wrote the GUI.
> Worked well.
>

Correction. This was not, of course, proper pair programming.
I only blew away the code when he went to his own computer
and left me to work alone. All I should have been talking about
was that our personalities were different enough that we could
not stand sitting alongside each other at the same terminal.
Other than that, we made a great team. And I am confident
that we could have made it work if we had continued. But we
had so much work to do that we just felt it would be more
productive to type separately. That may or may not have been
a mistake.


Ron Jeffries

Feb 3, 2002, 3:39:00 PM

Good example. I've had exactly the same thing happen, as it turns out,
in some part of a tax program if I recall. But the outcome was different
...


>
>I had to fix the problem by abandoning the entire approach and creating a new
>controller that performed the graph walk without
>recursion and maintained a dictionary of variable->value. So the
>variables became dumb and all the work was moved to this new object.
>It was a completely different approach and required major brain surgery
>on most of the code.
>It took me 3 days. It was very difficult. Thank God I had thousands of tests,
>and the ones that broke showed me some
>of the most subtle problems you could imagine with my new solution.
>Errors I would never have dreamed of.

Now of course three days may not be what people are in fear of when they
worry that evolutionary design won't work. But it certainly did screw up
your design in a major way.

It turns out that in my case, I solved it a different way: I implemented
the recursion in the controller object. Briefly it goes like this:

Whenever you ask an object about its current situation, it either knows
the answer is yes or no, or it must ask a neighbor. I let the object
retain the state regarding the next neighbor it would like to ask, but I
allow three returns, not two (yes/no, plus "I want to recur").

Upon the latter return, the controller puts the object wanting recursion
in a stack of its own invention (OrderedCollection), then asks the
object who he would like to recur to, then calls that object.

So the controller keeps a long list of people to be asked, but the
recursion is always only one level deep.
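
In sketch form (C++ with invented names; mine was Smalltalk and the
stack was an OrderedCollection), the controller's loop looks roughly
like this -- a real version would also track visited nodes to avoid
cycles:

#include <vector>

enum class Answer { Yes, No, WantsToRecur };

struct Node
{
    bool satisfied = false;      // set once a neighbour has vouched for us
    Node* neighbour = nullptr;   // who we'd consult if asked

    Answer ask() const
    {
        if (satisfied || neighbour == nullptr)
            return Answer::Yes;  // (a real node could also answer No)
        return Answer::WantsToRecur;
    }
    Node* nextNeighbour() const { return neighbour; }
    void neighbourSaidYes() { satisfied = true; }
};

bool resolve(Node* start)
{
    std::vector<Node*> pending;              // the controller's own stack
    pending.push_back(start);
    while (!pending.empty())
    {
        Node* current = pending.back();
        switch (current->ask())
        {
        case Answer::No:
            return false;                    // definite rejection
        case Answer::Yes:
            pending.pop_back();              // resolved; unwind one entry
            if (!pending.empty())
                pending.back()->neighbourSaidYes();   // report back
            break;
        case Answer::WantsToRecur:
            pending.push_back(current->nextNeighbour());  // one hop deeper
            break;
        }
    }
    return true;    // everyone said yes; call-stack depth stayed constant
}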

Of course, if you don't think of or can't use that trick, you're still
screwed. I have a vague suspicion that in very clean code it is usable
but if the recursion is sufficiently weird and widespread you cannot.

When it happened to me, I think I wasted some time trying to figure out
how to lengthen Smalltalk's call stack, but that was in the virtual
machine. I was also able to double the size of the net I could handle by
folding a couple of methods up, saving a method send at the cost of ugly
code. But the "final" solution was not to recur.

Lesson learned for me? Probably none. There's something in there about
checking size of net early, but I can't say I learned it. Maybe I'd
learn it if I did that sort of thing very often ...

>I know that without that test code I would *never* have
>succeeded. It was simply too hard and the bugs were too subtle. I would
>have run into seemingly "random" errors and would never have been able to
>find the source of the problem. I would have run out of gas, no question.
>
>The best thing about my tests was that they were at all integration levels of the code,
>so I was very quickly able to find the source of the problem.
>3 days of sequential problems :-) Note that
>this argues against my other argument of writing a single high-level test
>and working through the algorithm. But remember, at the time I found my
>design flaw, it was months later and I already had a mature system in place.
>I had already bullet-proofed the test code.

Yep. Good story. Decent outcome. Would more design up front have shown
us the error of our ways in assuming recursion would hold up? I'm not
sure. It's more like "if we had thought of it we wouldn't have done it
that way".

The tests always help. Getting the right idea early enough on always
helps. For me, I continue to do test-first with impunity. But I'm not
sure if it works because it works or it works because I have lots of
experience, or it works because I'm just lucky so far. I think it works
because it works. But I could be wrong. So far ... it's working.

Good report, thanks!

Richard MacDonald

Feb 3, 2002, 11:07:38 PM
"Ron Jeffries" <ronje...@REMOVEacm.org> wrote in message
news:C49E2543558ED292.193EF7D1...@lp.airnews.net...

In essence, that is what I did. I did it "behind" the variable object, though,
not in "front", i.e., the client still dealt with the variable and the variable
delegated.

> Whenever you ask an object about its current situation, it either knows
> the answer is yes or no, or it must ask a neighbor. I let the object
> retain the state regarding the next neighbor it would like to ask, but I
> allow three returns, not two (yes/no, plus "I want to recur").

I did not use that 3rd option, but I like it. I delegated this know-how to
a couple of methods in the object and the controller. Basically my
controller was responsible for figuring out the next neighbor using some
helper methods in the variable. Yours might be cleaner.

> Upon the latter return, the controller puts the object wanting recursion
> in a stack of its own invention (OrderedCollection), then asks the
> object who he would like to recur to, then calls that object.
>
> So the controller keeps a long list of people to be asked, but the
> recursion is always only one level deep.
>
> Of course, if you don't think of or can't use that trick, you're still
> screwed. I have a vague suspicion that in very clean code it is usable
> but if the recursion is sufficiently weird and widespread you cannot.

It is clean. I wrote fairly complex graph traversal code plus a Dictionary.
A Stack would have been simpler and might have worked well enough.
I've been doing some parsing implementation lately (a SAXListener)
and the Stack is working very nicely. I'm pretty sure my math solution
was too elegant and hence too complex. I know it was too complex.

[snip]


Tom Plunket

Feb 4, 2002, 4:20:31 AM
First of all, thanks for all of the responses. It felt like I
was on the right track anyway, but the responses help steer me a
bit in the areas where I feel like I have two wheels hanging off
the shoulder.


Ron Jeffries wrote:

> I'd start with a mesh that was already reduced, get the test
> running with a rather simple reduce() function. ;->
>
> Then I'd do a mesh that was one step away from reduced, and see
> what I could do with that. And so on.

Yeah- that's how I've been approaching it thus far.
Unfortunately a lot of my geometry is rusty, so I'm just sort of
guessing at some of the solutions (e.g. to make sure that a
"window" doesn't get optimized out of a plane).

> And of course, since there's clearly an algorithm for this, I'd
> look it up to get a sense of how it's supposed to work. I'd try
> to resist just typing it in willy-nilly.

Yep- there's been a lot of research, and there are a lot of
papers that say "we did this." Unfortunately I have had little
luck actually turning up "this is how we went from these ideas to
code", so I'm test-firsting from the ideas in the papers into
compilable code.

> Knowing nothing, that's what I'd do. It's what I always do,
> actually ...

Yep- sounds like a road that I'm going to get really used to. ;)


Peter Hansen wrote:

> Well, *some* designs can't be evolved...
>

> If that were my requirement, I'd certainly start with
> research. Is it really the case that no one else has
> developed an algorithm for this? If that is true, you
> should be looking at a Spike Solution to learn the
> true nature of the problem, before you try actually
> (test-first) developing it as production code.

Thanks- I had forgotten about the applicability of Spike
Solutions.

I have not found the difficulty of refactoring my tests (as
Richard MacDonald, I believe, was talking about) to be too much of a
burden. A few times I've wanted to rearrange some code or classes,
move some methods, that sort of thing, but the tests support it
pretty well- just moving the test method and updating the classes
that are used seems to work ok for now. Of course, I'm still
battling with test coverage; I often find myself writing an if
(boundaryCondition) statement before I test that boundary
condition. In some cases, I am frustrated because I don't know
how to code "intended failures" in CppUnit yet- I spent some time
looking around in the code and I have an idea how to do it, but
it doesn't appear directly feasible using the macros (which is as
close as my hands have gotten thus far). Anyway- I'm
tangentializing... ;)
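
For the record, the idea I have is to hand-roll the check around the
existing macros, something like the sketch below; the premise that
Reduce() throws on a degenerate mesh is just an assumption for
illustration, not how the class actually behaves today:

void testReduceRejectsDegenerateMesh()
{
    Mesh m;  // left empty on purpose: no triangles at all

    bool threw = false;
    try {
        m.Reduce();    // assumption: Reduce() throws on degenerate input
    } catch (...) {
        threw = true;  // the "intended failure" happened
    }
    CPPUNIT_ASSERT(threw);
}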

Perhaps I won't worry too much about it then; just focus on
getting the code "working." ;) I do like the fact that I'm not
spending time in the debugger though; it's nice to whip something
up and see that it just works.


Richard MacDonald wrote:

> [After the spike solution works] write the overall test and
> validate it. Then start modularizing it and writing the tests to
> test the modules.

Ahh- good point to remember. I should think about applying this
to some of my existing code that filled a similar need and just
sort of came about due to hackery. Thanks. I do like the tests
coming with me at this point, though now I won't fear setting
them aside for at least "mini-spikes".

> ...[T]he time when I'm exploring the algorithm is far too early
> to be worrying about building that high-level abstraction. If
> instead I start working on a low-level implementation...

Sage advice again.

Thanks a bunch, guys.


-tom!

Rolv Inge Seehuus

Feb 4, 2002, 7:36:54 AM
In article <cucm5uk0lrh6sprlk...@4ax.com>, Tom Plunket wrote:
>Hey all-
>
>In my continuing pursuit of effective test coverage, I've come up
>against something interesting.
>
>A "feature" was handed to me- "Implement polygon reduction on
>arbitrary polygonal meshes."

[Znipped some..]

>Now this is a very high-level test, and it's taken several days
>of work to implement everything in between (trying to think of
>the simplest thing each time, then testing the sub-function-
>ality). However, I get the feeling that this is the "wrong" way
>to test since I had an outstanding failure for several days, and
>it feels like the feature "should" have been broken down into
>smaller tasks. However, from the point of view of the users and
>even the other programmers, this is standalone functionality;
>others will just call "Reduce()" and expect that it works.
>
>As I thought of a step in the process, I wrote tests and
>implemented functionality, sometimes needing other functionality
>for which I implemented tests as well. But coupled with the idea
>that I'm supposed to throw everything away at the end of the day
>that isn't checked in, I feel somewhat dirty for writing my first
>test as this high-level thing (and it seemed just wrong to simply
>set m0 to m in the Reduce() function).

When doing mesh reduction, you probably have some special cases that
you would start with:

- Collapse a triangle into a vertex
- Collapse an edge into a vertex

I would partition the feature/story at hand (which is, as you mention,
a big one) into several smaller steps, starting with the ones on the
list.

Testing that a collapse is correct (e.g. the correct connections are
removed and added to the mesh graph) can be made trivial if you choose
your test data carefully; see the sketch below. It is also trivial to
check that the collapse happens at the right time (some threshold
according to the 'flatness' or 'levelness' of the patch in analysis).
Then you need some mechanism for traversing the mesh graph to choose
the right elements for collapse. This is two tasks (at first glance):

- Traverse the graph
- Evaluate a graph neighborhood for collapse.

This can probably be further partitioned to become 'one day' tasks (if
you aren't able to write the complete graph-search with tests in one
day.. )
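
Here is the kind of carefully-chosen test data I mean, sketched in
C++; Mesh and all of its methods here (addVertex, addTriangle,
collapseEdge, the counters) are invented names for the sketch, not
anything from Tom's real class:

void testEdgeCollapse()
{
    Mesh m;
    // Two triangles sharing the edge b-c.
    int a = m.addVertex(0, 0, 0);
    int b = m.addVertex(1, 0, 0);
    int c = m.addVertex(0, 1, 0);
    int d = m.addVertex(1, 1, 0);
    m.addTriangle(a, b, c);
    m.addTriangle(b, d, c);

    // Collapsing b-c merges the two endpoints into one vertex; both
    // triangles contained the whole edge, so both become degenerate
    // and must be removed from the mesh graph.
    m.collapseEdge(b, c);

    CPPUNIT_ASSERT(m.triangleCount() == 0);
    CPPUNIT_ASSERT(m.vertexCount() == 3);
}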

When you have these, you can expand your mesh data-structure further to
do incremental level of detail. (Just tell the mesh how far it is from
the eye-point, and it adjusts cheaply up or down the list of
collapse/expand transforms you've made.. uh.. you noticed I mentioned
that earlier? ... and then adjusts its complexity for the best
detail/distance ratio trimmed according to hardware, resolution and
the like.... Assuming something like a game or another realtime
system, of course.. :-)

>
>Any guidance to steer my feelings? :)
>

The conclusion should be: Follow the practice of breaking up features
into manageable pieces that can be completed in a day's worth of work.
:-) Throwing away the big test you wrote the first day should not be
any problem once you've figured out how to decompose the task into more
manageable units (that give you feedback on progress, and keep you
happy and motivated..). Writing it later, when you are actually
creating the big-bang-reduction feature (and know much, much more about
the whole issue), isn't so much work anyway...

reg.
Rolv

Kent Beck

Feb 4, 2002, 4:56:14 PM

Tom Plunket <to...@fancy.org> wrote in message
news:cucm5uk0lrh6sprlk...@4ax.com...
> Since I have little idea on where to start (although I am the
> resident geometry expert, unfortunately), I just started with the
> easiest thing that I could think of:
>
> void testReduction()
> {
> Mesh m0, m;
>
> // create the geometry that I want to reduce into m0.
>
> // create the geometry that m0 should become into m
>
> m0.Reduce();
> CPPUNIT_ASSERT(m == m0);
> }
>
> Now this is a very high-level test, and it's taken several days
> of work to implement everything in between (trying to think of
> the simplest thing each time, then testing the sub-function-
> ality). However, I get the feeling that this is the "wrong" way
> to test since I had an outstanding failure for several days, and
> it feels like the feature "should" have been broken down into
> smaller tasks. However, from the point of view of the users and
> even the other programmers, this is standalone functionality;
> others will just call "Reduce()" and expect that it works.

Fake It Til You Make It, a pattern I came up with for a test-driven
development book I'm flailing away at. Credit for the idea goes to Martin
Fowler. I think I may have done it before, but he convinced me to talk about
it in public.

Start with m0 == m, and implement Reduce() as a no-op. Test passes and you
can check in.
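
In code, the first check-in might be as small as this (the
mesh-building helper is imaginary, just to keep the sketch complete):

// Fake It Til You Make It: the first test case uses a mesh that is
// already fully reduced, so doing nothing makes the test pass.
void Mesh::Reduce()
{
    // Intentionally empty, for now.
}

void testReduceLeavesReducedMeshAlone()
{
    Mesh m0, m;
    buildAlreadyReducedMesh(m0);  // imaginary helper
    buildAlreadyReducedMesh(m);
    m0.Reduce();
    CPPUNIT_ASSERT(m == m0);
}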

I always start such a session with a list of known interesting test
cases--two polygons that turn into one, three that turn into two, three that
could turn into two in two different ways, etc. At each step, pick the next
test case that will teach you something but that you are confident you can
get running.
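
The "two polygons that turn into one" case might then look something
like this; again, Point and the addTriangle() call are invented for
the sketch:

void testReduceMergesTwoCoplanarTriangles()
{
    Mesh m0, m;
    // m0: triangle ABC split in two at D, the midpoint of edge AB.
    m0.addTriangle(Point(0,0,0), Point(1,0,0), Point(0,2,0));  // A-D-C
    m0.addTriangle(Point(1,0,0), Point(2,0,0), Point(0,2,0));  // D-B-C
    // m: the single triangle ABC that the reduction should produce.
    m.addTriangle(Point(0,0,0), Point(2,0,0), Point(0,2,0));   // A-B-C

    m0.Reduce();
    CPPUNIT_ASSERT(m == m0);
}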

As for some designs not being evolvable, I would certainly like an example.

Kent


Peter Hansen

Feb 4, 2002, 9:12:56 PM
Kent Beck wrote:
> [...]

> As for some designs not being evolvable, I would certainly like an example.

An example was posted in another branch of the thread: solving problems
"evolutionarily" with genetic algorithms fails when the search space is flat
with spikes (e.g. cracking a secure code) instead of rolling hills.

I don't know if all such examples are degenerate/trivial cases of "design",
but offer it merely as the case that was in my mind as I wrote the phrase
"not all designs can be evolved".

-Peter

Ron Jeffries

Feb 4, 2002, 10:12:32 PM
On Mon, 04 Feb 2002 21:12:56 -0500, Peter Hansen <pe...@engcorp.com>
wrote:

>Kent Beck wrote:

Trouble with these examples is that humans don't evolve code in the same
way that genetic algorithms do. (And I'm not sure that even genetic
algorithms get caught in local minima. Doesn't it depend on how they
mutate?)

Consider an arbitrary 3-d projection of, say, a 6-d space. Suppose it
has hills and valleys, and you are stuck in some local minimum. Now
replace any one of the dimensions in the projection with any dimension
not in the projection. You're probably not stuck any more. So evolve on
those dimensions a while.

I think what we do as humans evolving code is kind of like that: when
we feel stuck in one direction, we look along another dimension.

I could be wrong.

What would be interesting to see would be a program that was

a) simple enough to understand
b) well-enough factored that we all agreed it was modular and good
c) unable to change in some reasonable direction without major change.

I keep not running into that case when I work incrementally. But maybe
I'm just lucky ...

Peter Hansen

Feb 5, 2002, 2:29:22 AM
Ron Jeffries wrote:
>
> On Mon, 04 Feb 2002 21:12:56 -0500, Peter Hansen <pe...@engcorp.com>
> wrote:
>
> >An example was posted in another branch of the thread: solving problems
> >"evolutionarily" with genetic algorithms fails when the search space is flat
> >with spikes (e.g. cracking a secure code) instead of rolling hills.
>
> Trouble with these examples is that humans don't evolve code in the same
> way that genetic algorithms do. (And I'm not sure that even genetic
> algorithms get caught in local minima. Doesn't it depend how they
> mutate?)

I may have explained myself poorly, but the possibly-degenerate
examples I mentioned don't actually have local minima. They have,
in fact, only a single large flat surface with one-dimensional
vertical spikes where the solutions lie. Since there is no
gradient from any of the non-solutions towards any of the solutions
(that is, you can't effectively compare fitness levels in order
to select portions of genomes to combine to form new genomes with
possibly higher fitness levels), evolution fails. I strongly doubt
any interesting solution spaces have such a landscape.

<ascii-art mode="primitive" degree="intensely">

* <-- can't evolve to this solution
|
______________|_____________


* <-- easy to evolve to this point
___/ \__
__ / \___ ___
\___/ \__/

</ascii-art>
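
In code terms, the difference is just the fitness function; this toy
C++ illustration is mine, not anything from a real genetic-algorithm
library:

#include <string>

// Flat with spikes: zero everywhere except at the secret, so
// comparing two wrong candidates tells the breeder nothing.
double spikeFitness(const std::string& candidate,
                    const std::string& secret)
{
    return candidate == secret ? 1.0 : 0.0;
}

// Rolling hills: partial credit for each matching character gives a
// gradient that selection can actually climb.
double hillFitness(const std::string& candidate,
                   const std::string& secret)
{
    double score = 0.0;
    for (std::string::size_type i = 0;
         i < candidate.size() && i < secret.size(); ++i)
        if (candidate[i] == secret[i])
            score += 1.0;
    return score;
}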

Whatever... I'm rambling. :)

-Peter

Ron Jeffries

Feb 5, 2002, 6:42:26 AM
On Tue, 05 Feb 2002 02:29:22 -0500, Peter Hansen <pe...@engcorp.com>
wrote:

>I may have explained myself poorly, but the possibly-degenerate
>examples I mentioned don't actually have local minima. They have,
>in fact, only a single large flat surface with one-dimensional
>vertical spikes where the solutions lie. Since there is no
>gradient from any of the non-solutions towards any of the solutions
>(that is, you can't effectively compare fitness levels in order
>to select portions of genomes to combine to form new genomes with
>possibly higher fitness levels), evolution fails. I strongly doubt
>any interesting solution spaces have such a landscape.

Yes, OK. I understand and agree.

Phlip

Feb 5, 2002, 6:55:43 AM
Peter Hansen wrote:

> I may have explained myself poorly, but the possibly-degenerate
> examples I mentioned don't actually have local minima. They have,
> in fact, only a single large flat surface with one-dimensional
> vertical spikes where the solutions lie. Since there is no
> gradient from any of the non-solutions towards any of the solutions
> (that is, you can't effectively compare fitness levels in order
> to select portions of genomes to combine to form new genomes with
> possibly higher fitness levels), evolution fails. I strongly doubt
> any interesting solution spaces have such a landscape.
>
> <ascii-art mode="primitive" degree="intensely">
>
> * <-- can't evolve to this solution
> |
> ______________|_____________
>
>
> * <-- easy to evolve to this point
> ___/ \__
> __ / \___ ___
> \___/ \__/
>
> </ascii-art>

I understand and don't agree.

KB asked for a design that can't emerge.

This is a space that resists >automated< Evolutionary Programming.

A program design emerges via artificial selection. It uses Intelligent
Design up close and personal. Humans will know the flat area is
sub-optimal, and will actively and aggressively seek that spike in it,
using criteria with far more dimensionality than the automated evolver
would.

--
Phlip
http://www.greencheese.org/LucidScheming
-- This machine last rebooted during the Second Millenium --

Rolv Inge Seehuus

Feb 5, 2002, 7:18:04 AM

What if "evolving the design" wasn't such a good analogy after all?
Especially considering all the emotional baggage that exists
concerning the whole idea of evolution.

I can pull the historical rabbit out of the hat here to
illustrate. Alongside (or alongtime, perhaps) Darwin, there was this
scientist[1] called Lamarck, who had a different view of the whole
process of evolution. He believed that things an organism learned
during its lifetime were passed on to its offspring. E.g. giraffes
grew long necks because the trees with the leaves they wanted were
tall, cheetahs evolved into fast-running animals because of the empirics
showing that they became better hunters if they ran faster, humans
evolved into software engineers because they.. uhm.. erh.. had bad crops
or something... whatever. In view of what seems to be the
correct theory concerning evolution[2], the Lamarckian theory is a
major mix-up of cause vs effect. (It also leaves a lot of room for a
God intervening in the evolution, making the theory more palatable to
the European church at that time.) I also /believe/, do correct me if
I'm wrong, that this is the basis for the creationists, who see God's
hands controlling the evolution in the place of the learning theory
Lamarck proposed.

So to wrap up, XP programmers are Gods in their universe(s), and the
whole "XP is religion" debate has come full circle. We can say we are
doing Lamarckian/creationistic evolution. When we set up a pattern as a
target for evolution, we perform a divine intervention. Thus, there is
no design that can't be "evolved" or "bred" using the Lamarckian
"rules".

Well.. I'm only rambling too, and I see (in retro-read) that I play
very loud on the religious strings.. I hope I don't step on any toes
here[3]...

reg.
Rolv


[1] I should probably have "quoted" that... :-)
[2] Not the time, nor the place for that discussion... :-)
[3] ...from experiencing too many times how mentioning God in any
discussion in any way at any place creates a war zone...

Phlip

Feb 5, 2002, 10:31:19 AM
Rolv Inge Seehuus wrote:

> What if "evolving the design" wasn't such a good analogy after all?
> Especially considering all the emotional baggage that exists
> concerning the whole idea of evolution.

Ah, the old USENET chestnut, Young Earth Creationism vs. Scientific
Nihilism. No room for any middle ground where this "God" character is
any smarter than the average self-hating immature newsgroup
personality.

> I can pull the historical rabbit out of the hat here to
> illustrate. Alongside (or alongtime, perhaps) Darwin, there was this
> scientist[1] called Lamarck, who had a different view of the whole
> process of evolution. He believed that things an organism learned
> during its lifetime were passed on to its offspring. E.g. giraffes
> grew long necks because the trees with the leaves they wanted were
> tall, cheetahs evolved into fast-running animals because of the empirics
> showing that they became better hunters if they ran faster, humans
> evolved into software engineers because they.. uhm.. erh.. had bad crops
> or something... whatever. In view of what seems to be the
> correct theory concerning evolution[2], the Lamarckian theory is a
> major mix-up of cause vs effect. (It also leaves a lot of room for a
> God intervening in the evolution, making the theory more palatable to
> the European church at that time.) I also /believe/, do correct me if
> I'm wrong, that this is the basis for the creationists, who see God's
> hands controlling the evolution in the place of the learning theory
> Lamarck proposed.

The word you seek is "Emergent Design". This can be seen as an
umbrella term for natural selection, Lamarckian selection, /and/
artificial selection (where "artificial" means "directed by a mind
more puny than God's").

> So to wrap up, XP programmers are Gods in their universe(s),

Ayup.

> and the
> whole "XP is religion" debate has come full circle.

Noope.

XP is science. You propose a testable hypothesis, construct an
experiment, run the test, record the results, get them peer reviewed,
attempt to reproduce them, and add them to a thesis from which you can
derive new testable hypotheses.

Sound familiar?

The "XP is religion" camp (really a mini-camp) have a lot in common
with those luddites who don't understand reproducible results, and
consider the popularizations and thesis that attend to Science an
attack on their flimsy morals.

> Well.. I'm only rambling too, and I see (in retro-read) that I play
> very loud on the religious strings.. I hope I don't step on any toes
> here[3]...

There are those who will try to claim Professional Victim status after
going out of their way to stick their toes under your wheels.

Tell them this:

These (bogus) USENET threads illustrate the schism between "scientific
nihilism" on the one hand and any kind of spiritualism on the other.
"Solipsism" says (generally) that the Universe is deterministic, and
that its behavior is indistinguishable from a situation where yours is
the only real consciousness and everything else is just a wind-up toy.

"Panpsism" says that everything has a consciousness or awareness of
some kind.

Suppose you are a panpsychist, but not a Christian, and you don't happen
to believe that any deity created all the animals in one fixed shape,
or that any external deity created templates that the animals evolved
into.

Suppose then that all time were simultaneous, and that our universe
was just one of an infinite number of alternate probable universes,
each distinguished by the merest whim. From this perspective, the
chains of causality we see going forward in time are really just the
tips of icebergs whose whole existences are independent of time.

Evolution does not march from the past into the future. Instead,
precognitively each species is aware of those changes it wants to
make, and reaching back from the "future" it alters the "present"
state of the chromosomes and genes to bring about in probable futures
the specific changes it desires. Both above and below your usual
conscious focus, then, time is experienced in an entirely different
fashion, and is constantly manipulated, as physically as you
manipulate matter.

From this point of view, the theory of evolution is as beautiful a
fairy tale as the theory of Biblical creation. Both are quite handy &
useful; both are methods of telling stories, and both might seem to
agree within their own systems, and yet, in larger respects they
cannot be realities.

It's not nice to fool Mother Nature!

--
Phlip
http://flea.sourceforge.net
-- Proud victim of the dreaded boomerang effect --

Rolv Inge Seehuus

Feb 5, 2002, 12:45:16 PM
In article <63604d2.02020...@posting.google.com>, Phlip wrote:
>Rolv Inge Seehuus wrote:
>
>> What if "evolving the design" wasn't such a good analogy after all?
>> Especially considering all the emotional baggage that exists
>> concerning the whole idea of evolution.
>
>Ah, the old USENET chestnut, Young Earth Creationism vs. Scientific
>Nihilism. No room for any middle ground where this "God" character is
>any smarter than the average self-hating immature newsgroup
>personality.

Weeeelll.. Actually.. From one of the sides in this "usenet chestnut",
the room for any middle ground at all is getting infinitesimally
small.. and smaller every day.. Guess which side... :-)

>The word you seek is "Emergent Design". This can be seen as an
>umbrella term for natural selection, lamarkian selection, /and/
>artificial selection (where "artificial" means "directed by a mind
>more puny than God's").

That's a good description, indeed. But unfortunately, I have the habit
of drawing far-fetched analogies and elaborating... Not so very
news-reader friendly, though.. :-)

The whole reason for the post, btw, was to show that even though
genetic algorithms in special cases fail to evolve a particular
feature (design) due to a lack of gradients to use as guidance, this
doesn't imply that XP can't evolve all designs. (Because XP isn't
'evolving' the design, per se, so the argument doesn't apply.)

I probably should have mentioned that in my first post, shouldn't
I.. :-)

>
>> So to wrap up, XP programmers are Gods in their universe(s),
>
>Ayup.
>
>> and the
>> whole "XP is religion" debate has come full circle.
>
>Noope.

It was meant to be read as a joke... :-)

>
>XP is science. You propose a testable hypothesis, construct an
>experiement, run the test, record the results, get them peer reviewed,
>attempt to reproduce them, and add them to a thesis from which you can
>derive new testable hypotheses.
>
>Sound familiar?

Somewhere here I posted an analogy describing XP as setting up a
science lab with the purpose of conducting controlled experiments with
respect to breeding a particularly fit software species, so... I guess?
:-)

>
>The "XP is religion" camp (really a mini-camp) have a lot in common
>with those luddites who don't understand reproducible results, and
>consider the popularizations and thesis that attend to Science an
>attack on their flimsy morals.
>

I actually thought that "XP is religion" was a joke, stemming from the
way XP is marketed. Sometimes, the marketing is not far from
preachin', you know... :-)

[Znipped the rest, as it became somewhat off topic, but hey, thanx. ]

> It's not nice to fool Mother Nature!

There is this thing with cause and effect again. Who's fooling who you
say? :-)

reg.
Rolv

Kent Beck

Feb 5, 2002, 1:52:10 PM
I'm coming to prefer "organic" as the adjective. "Emergent" creates no
picture in my mind (although it is probably perfectly accurate), and
"evolutionary" requires that individuals die, which is tough to sell to the
suits.

The pieces of the program swell, divide, and swell again, differentiating as
they grow in different micro-environments, just as the cells of an organism
swell and divide. A sapling is a tree, but one which you know will grow
deeper roots, a stronger trunk, wider branches, and more leaves. A one-line
program is still a program (the no-op implementation of Reduce() from
another thread, for example).

A Program is Like a Tree, to paraphrase Christopher Alexander.

Kent


William Johnson

Feb 5, 2002, 3:20:23 PM

Peter Hansen drew:

> <ascii-art mode="primitive" degree="intensely">
>
> * <-- can't evolve to this solution
> |
> ______________|_____________
>
> * <-- easy to evolve to this point
> ___/ \__
> __ / \___ ___
> \___/ \__/
>
> </ascii-art>

Hello Peter,

Let's try a different view. The solution space is often
imagined "upside down" to your description and your "art".

If I might make a small change...

(I apologize to those who are "reading" this in a
proportional font).


> ___ __
> __/ \ ___/ \___
> \___ __/
> \ /


> * <-- easy to evolve to this point
>
>

> ____________________________
> |
> |
> * <-- can't (?) evolve to this solution
> (only less likely)


If you imagine your "solution" as a marble rolling
around over the solution space, then gravity seems an
intuitive way to visualize the seeking of local minima.

It would appear much less likely, without artificial help,
that our marble would fall into the single well that
represents the best solution. Sometimes our solution needs
the same kind of help to get out of our local best solution
and into a better (deeper) one just over that hill there.

Peter Hansen

Feb 5, 2002, 8:12:19 PM
William Johnson wrote:
>
> (I apologize to those who are "reading" this in a
> proportional font).
>
> > ___ __
> > __/ \ ___/ \___
> > \___ __/
> > \ /
> > * <-- easy to evolve to this point
> >
> > ____________________________
> > |
> > |
> > * <-- can't (?) evolve to this solution
> > (only less likely)
>
> If you imagine your "solution" as a marble rolling
> around over the solution space then gravity seems an
> intuitive way to visualize the seeking of local minimums.
>
> It would appear much less likely, without artificial help,
> that our marble would fall into the single well that
> represents the best solution. Sometime our solution needs
> the same help to get out of our local best solution to
> a better (deeper) solution just over that hill there.

My (?) and your (only less likely) probably don't apply,
since the nature of the evolutionary process (as it is
currently understood to occur) is that it involves combining
portions of genomes from relatively "fit" solutions
because that tends to increase the probability of stumbling
across an "even more fit" solution than if one picked randomly.
Since none of the non-solutions in that picture give any
hint as to where the solution actually lies, evolution _per se_
cannot work (so my theory goes).

Also, regarding your suggestion that an evolutionary
search might get caught by local minima...
Since by their very nature these searches work on a
whole population of potential solutions (ranging
across the whole search space) and because they make
use of mutations to add a randomizing factor, they
cannot, in theory, get stuck in local minima. (In
practice, we produce rather brittle genetic algorithms,
but nature does a much more sophisticated job.)

How much of all this applies to "designs" is another
question yet to be answered. Since we tend not to
work on many solutions in parallel, nor do we try
applying random mutations to the solution (except
rarely, using what we call "bugs" :-), I'm not sure
"evolution" often applies accurately in the world of
engineering or software design.

I think I only brought this up in relation to the use
of the term "evolution" with respect to designing.
Perhaps Kent's "organic" is more appropriate for
describing the designs themselves, or the processes
that are often behind them.

-Peter

Jeff Grigg

Feb 5, 2002, 9:57:49 PM
"Kent Beck" <kent...@my-deja.com> wrote:
> I'm coming to prefer "organic" as the adjective. "Emergent" creates no
> picture in my mind (although it is probably perfectly accurate), and
> "evolutionary" requires that individuals die, which is tough to sell to the
> suits.
>
> [...]

>
> A Program is Like a Tree, to paraphrase Christopher Alexander.


"Emergent" is accurate, but the suits don't understand chaos theory.
(They just create it. ;-)

"Organic" is nice; sometimes the tree needs a little TRIMMING:
Cutting off dead branches improves the health of the tree by reducing
the chance of infection or insect attack from there.

William Johnson

Feb 5, 2002, 11:08:02 PM
Peter Hansen wrote a bunch of smart stuff:

...

> How much of all this applies to "designs" is another
> question yet to be answered. Since we tend not to
> work on many solutions in parallel, nor do we try
> applying random mutations to the solution (except
> rarely, using what we call "bugs" :-), I'm not sure
> "evolution" often applies accurately in the world of
> engineering or software design.
>
> I think I only brought this up in relation to the use
> of the term "evolution" with respect to designing.
> Perhaps Kent's "organic" is more appropriate for
> describing the designs themselves, or the processes
> that are often behind them.
>
> -Peter

I suspect, as I think you do, that this has little to
do with software design as we know it.

regards,
William Johnson

Ron Jeffries

Feb 6, 2002, 6:49:19 AM
On Tue, 05 Feb 2002 20:08:02 -0800, William Johnson <w...@acm.org> wrote:

>Peter Hansen wrote a bunch of smart stuff:
>

>> How much of all this applies to "designs" is another
>> question yet to be answered. Since we tend not to
>> work on many solutions in parallel, nor do we try
>> applying random mutations to the solution (except
>> rarely, using what we call "bugs" :-), I'm not sure
>> "evolution" often applies accurately in the world of
>> engineering or software design.
>>
>> I think I only brought this up in relation to the use
>> of the term "evolution" with respect to designing.
>> Perhaps Kent's "organic" is more appropriate for
>> describing the designs themselves, or the processes
>> that are often behind them.
>

>I suspect, as I think you do, that this has little to
>do with software design as we know it.

"As we know it"

It may have. Software design as I /now/ know it has a very organic feel,
and one that I'm willing to describe as "evolutionary", not in the
strict genetic sense that Peter has described so well but in the sense
that the design seems to be changing and growing in response to forces
around it. Organic is better.

In the olden days, like most fledgling programmers, I didn't know much
about design. I got interested, and interested in being good, and so I
worked hard to understand design. Dijkstra, Brinch-Hansen, Constantine,
Knuth ... I read everyone, got enthusiastic about their ideas, tried to
build them into my work.

I got to be pretty darn good at design, and I got to believing that if I
injected enough good design into a project at the beginning, then the
project might come out at the end still having a pretty good design. My
teams and I shipped software that created half a billion dollars in
revenue that way. It wasn't bad.

Beck, C3, and XP taught me something new.

1. When a program is new and small, it doesn't take much design to make
it perfectly fine: after all, it doesn't do much; how much design could
it need?

2. As a program grows, if we just "change" it, its design tends to
worsen just a bit every time we change it.

3. If every time we change the program, we bring the design back up to
snuff, the program stays in a form that we'd call, looking at it,
"well-designed".

When I work that way, I don't envision the design as a target that I'm
shooting for. It feels like the program is growing naturally, in
response to what's going on around it.

Software design /as I know it now/ is very organic. I find that I
can deliver valuable /features/ rather than infrastructure, from day
one, and keep the program well-designed all the time.

I like that.

Kay Pentecost

Feb 6, 2002, 11:50:54 AM
So, the /old/ way was like going to school, learning a buncha stuff,
then going out to work... and never learning anything new...

We all know people like that...

The eXquisite Programming way is like learning /every minute/ --
really growing in knowledge from everything that's in the world to
teach us, everything that's out here to learn from.

I know lotsa people like that, too. They're much more fun.


So XP doesn't neglect design at all... on the contrary, it's
*continually* designing -- and the design being done by the people who
are implementing brings in reality....

Oh, it's all awesome.

I have to remember not to get too carried away....

Nawwww, what the heck.

Kay

Ron Jeffries <ronje...@REMOVEacm.org> wrote in message news:<83DB1B2E4BD501A1.61EF01D5...@lp.airnews.net>...