TDD in A Philosophy of Software Design

Kowal

Apr 22, 2024, 3:41:18 PM
to software-d...@googlegroups.com
Hello Sir,
thank you for the superb book on software design.

For the entire book there was only one thing I disagreed with: the point that TDD doesn't encourage engineers to pay attention to design. If you take Dave Farley's newest book (Modern Software Engineering), you can see how much good design is emphasized in the "red-green-REFACTOR" cycle.

I would even argue that the refactor step is the only method that forces people to think about the design, and TDD lets them build a safety net around the code that makes the details easy to refactor (but only if you follow the Chicago school and don't use mocks inside the module).
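
To make that cycle concrete, here is a minimal sketch of one pass through red-green-refactor (a toy Python example with a made-up price_with_discount function, not something taken from Farley's book):

import pytest

# Red: write the failing test first (price_with_discount does not exist yet).
def test_discount_applied_over_threshold():
    assert price_with_discount(120.0) == pytest.approx(108.0)  # 10% off over 100
    assert price_with_discount(80.0) == pytest.approx(80.0)    # below the threshold

# Green: the simplest implementation that makes the test pass.
def price_with_discount(total):
    if total > 100.0:
        return total * 0.9
    return total

# Refactor: with the green test as a safety net, name the policy so it can grow
# without touching callers. (Shown as a second definition only for illustration;
# in practice you edit the function in place and re-run the test.)
DISCOUNT_THRESHOLD = 100.0
DISCOUNT_RATE = 0.10

def price_with_discount(total):
    rate = DISCOUNT_RATE if total > DISCOUNT_THRESHOLD else 0.0
    return total * (1.0 - rate)

The last step is where the design thinking happens, and the already-green test is what makes that step cheap and safe.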

Kind regards,
Kowal

Milan Vydareny

Apr 22, 2024, 4:16:43 PM
to software-design-book
Now I'm really curious. I've never heard of "Chicago school" as it relates to software. Where can I learn more about it?
Thanks in advance,
MV

Venkat Dinavahi

Apr 23, 2024, 12:14:48 PM
to software-design-book
Hi Kowal,

I don't think there is anything wrong with TDD. But having practiced it in the past, I feel that it can lead to habits that make you end up in "tactical programming" land.

You get caught in these tight cycles that incrementally improve the design (doing just the minimum to get the tests passing and some surface-level clean-up). You don't spend enough time taking a step back and thinking strategically about the overall design.

I still use TDD on occasion but I don't think it's always the right tool to reach for. You can test this in your own work. Try using TDD for some components. Try doing it without TDD. See what results in a better design for you.

Cheers,
Venkat

reuven...@gmail.com

Apr 23, 2024, 5:02:24 PM
to software-design-book

Jason

May 21, 2025, 11:46:18 AM
to software-design-book
Thanks for posting https://github.com/testdouble/contributing-tests/wiki/Detroit-school-TDD 

It's a really interesting comparison of TDD schools. I didn't realize there were both London and Detroit camps.

In section 19.4 of A Philosophy of Software Design, 2nd Edition, I was surprised to read:

“Test-driven development is too incremental: at any point in time, it’s tempting to just hack in the next feature to make the next test pass. There’s no obvious time to do design, so it’s easy to end up with a mess.”

I thought, "I'm not sure John actually learned how to do TDD in the first place"... maybe
this is just an accidental straw man, born of a lack of good information.

In my practice, which perhaps veers more towards Behavior Driven than Test Driven,
the design happens primarily when first writing the test. While classic Detroit TDD
says to do design only during the refactor step, I find starting that myopically
suboptimal. To elaborate on my variation on TDD, arrived at through years of practice:

I think of tests as the "clients" of the implementation "server". My mantra is, "write the client first". This means
I am designing the external client interface before writing the internal server
implementation for it.  This has two advantages:

1. The need to refactor in the red-green-refactor cycle is minimized. This is
nice because refactoring has no (Skinnerian operant-conditioning) behavioral
reinforcer, and so we as humans are conditioned to omit it.

2. The superior operant conditioning of writing tests (clients) first is preserved.
The major revelation/value I derive from TDD/BDD is self-managing my own
development behavior. And, design has happened.

 Consider:

If you write the test (client) first, you are proposing a design. But you have
an actual implementation of that design in the form of the test, which is
runnable and red at first, which means you have validated the design partially
but immediately--if you can't translate your idea into client (test) code, it
really has no chance of working.

Now you write the implementation (server) that is driven by that test client.  
You are free to experiment, even make guesses, about what might satisfy
both the new test and the full set of existing tests. This freedom allows
discovery and evaluation of (server implementation) approaches that work;
and rules out those that cannot work under the existing set of rules (the test suite).
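
A minimal sketch of what I mean (a made-up report_builder module and summarize
function, invented purely for this illustration):

# --- test_report_builder.py: the client, written first ---
# In the real test file this would begin with:
#     from report_builder import summarize
# which is "red" until the server module below exists.
def test_summarize_groups_totals_by_customer():
    orders = [
        {"customer": "acme", "amount": 30},
        {"customer": "acme", "amount": 20},
        {"customer": "zenith", "amount": 5},
    ]
    # The shape of this call is the design decision: plain data in,
    # a plain dict of totals out, no hidden state, no mocks.
    assert summarize(orders) == {"acme": 50, "zenith": 5}

# --- report_builder.py: the server, written second ---
# Any implementation that turns the suite green is fair game; the test rules
# out approaches that cannot satisfy the agreed interface.
from collections import defaultdict

def summarize(orders):
    totals = defaultdict(int)
    for order in orders:
        totals[order["customer"]] += order["amount"]
    return dict(totals)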

I cannot begin to describe how delightful it is to "take a stab", "a wild guess", 
a "leap of faith", or "this is so crazy, but maybe it will work?" and actually have
it turn out to make the test suite all green. You want to shout from the
mountain top "Hurrah!!"

Here was my other big insight: the major advantage over the reversed procedure
(server then client) is based on operant conditioning. The implementation making
the test go green is a major reinforcer. I've just reinforced the chain of
test-then-implementation, and now I have a one-test-bigger test suite, along with
a server that has the new capability.

In contrast, doing server first then client (traditional unit testing) results in
writing tests being operantly punished. You "think" you are done when you've got
the first server implementation compiling. It is very "off-putting" -- it is "a drag" --
to realize you now have an additional step! Drat, now I've got to write the test.
Grrr. In TDD, you actually _are_ done(!).

As we know from Skinner, punishment, by definition, decreases the frequency of the
behavior. Hence nobody tends to write tests when doing server/implementation first.
It requires a lot of discipline to fight our operant-conditionable natures.

It is much better to leverage rather than to fight human nature.

The other big insight is how best to teach TDD. As I observed above, TDD easily
attracts straw-man critiques when it is not understood.

When I took a TDD class to learn the approach, the instructors were very good at
again leveraging operant conditioning principles. They "back-chained" the behavioral
sequence that is practiced in "red-green-REFACTOR".  

That is, THEY TAUGHT REFACTORING FIRST. This is the most efficient way we know in
operant conditioning to teach longish behavior chains. Start from the end. Get
the end (refactoring/design) fluent, then add the next-to-the-last step. And repeat.

Refactoring is the design step, obviously. So in "properly taught" TDD, or at
least how I teach and practice it, design is both taught first, and practiced first
when following my mantra, "write the client first".

Best wishes,
Jason

Reuven Yagel

May 21, 2025, 12:20:56 PM
to Jason, software-design-book
You probably want to go over this conversation: https://github.com/johnousterhout/aposd-vs-clean-code



Jason

May 21, 2025, 1:55:18 PM
to software-design-book
Thanks Reuven. That was interesting. It is a massive transcript; for anyone else searching for the
part on TDD, it is linked in the table of contents, and here:
 https://github.com/johnousterhout/aposd-vs-clean-code?tab=readme-ov-file#test-driven-development

The Ousterhout alternative to TDD is defined as "bundling" in this transcript, quoting it:

> JOHN: The approach I prefer is one where the developer works in somewhat larger units than in TDD, perhaps a few methods or a class. The developer first writes some code (anywhere from a few tens of lines to a few hundred lines), then writes unit tests for that code. As with TDD, the code isn't considered to be "working" until it has comprehensive unit tests.
>  UB: How about if we call this technique "bundling" for purposes of this document? This is the term I use in Clean Code 2d ed.
> JOHN: Fine by me.
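
To make the contrast with test-first concrete, here is how I picture bundling from
that definition (a toy Python Stack class and tests that I am inventing for
illustration; none of this code is from the transcript):

# Bundling, step 1: design and write the whole (small) unit first.
class Stack:
    """A minimal LIFO stack -- the 'bundle' under test."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty Stack")
        return self._items.pop()

    def __len__(self):
        return len(self._items)

# Bundling, step 2: only now write comprehensive unit tests for the finished
# unit; the code is not considered "working" until these exist and pass.
import pytest

def test_push_then_pop_is_lifo():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2
    assert s.pop() == 1
    assert len(s) == 0

def test_pop_on_empty_stack_raises():
    with pytest.raises(IndexError):
        Stack().pop()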

Perhaps my practice could be described as BDD, in homage to Behavior Driven Development, where BDD
is redefined to mean Bundling + the operant-conditioning advantages accrued by writing the design down in tests first.

I almost never write a "unit" test per server function; only if that function is doing something very hard to get right. My tests are 90% about capturing larger system behaviors.

In other words, BDD can also mean,

Bundling-made-better Due-to-being-test Driven

:)

Jason

p.s. I'm always curious to learn about any testing practices, whatever their names.

p.p.s. If anyone has links to video or detailed textual descriptions of actual testing
practices that go by the name "unit testing" for instance, I'd be happy to read them.
The term is poorly defined, and too commonly assumed to be understood. 
I really appreciated, for example, John's description, in the transcript linked above,
of putting the new server function on the left half of the screen, and going line by
line through it while writing the "unit" test on the right side of the screen,
with the aim of getting every line on the left (in the new function) exercised by
the new unit test on the right.
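
In code terms, I picture it something like this (my own reconstruction in Python with
an invented parse_port function; the coverage command at the end assumes the widely
used pytest-cov plugin):

# The new "server" function -- imagine it on the left half of the screen.
def parse_port(value, default=8080):
    """Parse a TCP port from a string, falling back to a default."""
    if value is None or value.strip() == "":
        return default
    port = int(value)
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# The matching "unit" test -- the right half of the screen -- written so that
# every line above gets exercised: the default branch, the happy path, and
# the range check.
import pytest

def test_parse_port_exercises_every_line():
    assert parse_port(None) == 8080      # default branch (None input)
    assert parse_port("  ") == 8080      # default branch (blank string)
    assert parse_port("443") == 443      # parse-and-return path
    with pytest.raises(ValueError):
        parse_port("70000")              # out-of-range branch

# To check the "every line exercised" goal mechanically (pytest-cov assumed):
#   pytest --cov=<module_under_test> --cov-report=term-missing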

On Wednesday, May 21, 2025 at 5:20:56 PM UTC+1 reuven wrote:
> You probably want to go over this conversation: https://github.com/johnousterhout/aposd-vs-clean-code


John Ousterhout

May 22, 2025, 12:25:27 AM
to Jason, software-design-book
I'd like to follow up on one of Jason's comments:

> Here was my other big insight: the major advantage over the reversed procedure
> (server then client) is based on operant conditioning. The implementation making
> the test go green is a major reinforcer. I've just reinforced the chain of
> test-then-implementation, and now I have a one-test-bigger test suite, along with
> a server that has the new capability.
>
> In contrast, doing server first then client (traditional unit testing) results in
> writing tests being operantly punished. You "think" you are done when you've got
> the first server implementation compiling. It is very "off-putting" -- it is "a drag" --
> to realize you now have an additional step! Drat, now I've got to write the test.
> Grrr. In TDD, you actually _are_ done(!).
>
> As we know from Skinner, punishment, by definition, decreases the frequency of the
> behavior. Hence nobody tends to write tests when doing server/implementation first.
> It requires a lot of discipline to fight our operant-conditionable natures.

That's a fair argument, but the exact same argument applies to TDD, in a way that I think is even more dangerous. You write the first test, then write the code that makes it pass. At this point you naturally think you are done with that code. But then you write the second test and start working on the code for it, and you discover that the code you wrote for the first test gets in the way of the second test. As you said, "Grrr". But, you suck it up and rewrite that original code, plus add new code to make the second test pass. Now, you think, surely the code for the first test is done. But, you didn't think about tests 3 and 4, so when you go to write them, all the original code breaks again. This keeps happening over and over, because you never take time to think ahead and actually design; you just write enough code to make the next test pass. If you're really conscientious you will continue to suck it up, rewriting over and over, and you can end up in a good place. But a lot of people will start thinking "I'm sick and tired of continually rewriting the same code... if I just make this tiny little kludge over here, I won't have to do massive rewriting." And they do this every time a new test is introduced, so they end up with a pile of spaghetti. The problem is that TDD discourages doing good design and encourages kludgy hacking to make the next test work.
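
Here is a toy sketch of that failure mode (invented shipping-cost rules in Python,
purely for illustration; successive versions of the function are shown as separate
definitions):

# After test 1 ("domestic orders ship for 5"):
def shipping_cost(order):
    return 5

# After test 2 ("international orders ship for 20") -- the minimal patch:
def shipping_cost(order):
    if order["country"] != "US":
        return 20
    return 5

# After tests 3 and 4 (express shipping, free domestic shipping over 100) --
# more minimal patches, each just enough to turn the newest test green:
def shipping_cost(order):
    if order["country"] == "US" and order["total"] > 100 and not order["express"]:
        return 0
    if order["express"]:
        return 40 if order["country"] != "US" else 15
    if order["country"] != "US":
        return 20
    return 5

# Every test passes, but the shipping policy is now a tangle of special cases;
# no step ever forced an explicit design (e.g., a rate table plus a set of rules).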

You are worried that without TDD people won't write enough tests. That's a fair concern. I'm worried that with TDD people won't do enough design; I think that's much more damaging. If there aren't enough tests you can always go back and add more. If a system hasn't been carefully designed, it's almost impossible to go back and fix it. 

-John-


Jaime Pillora

May 22, 2025, 2:52:05 AM
to John Ousterhout, Jason, software-design-book
I’m with John.

Moving in the right direction matters more than perfecting each individual step.

A focus on rapid refactoring grants you rapid directional change and correction, whereas a focus on testing tends to solidify your direction, which is fine if you’ve designed perfectly - but this is rarely the case.