here is the post: http://parlezuml.com/blog/?postid=987
The two "schools" of TDD are really addressing different aspects of a codebase; neither is sufficient on its own. For example, in the book we have a message translator that I think is pretty much driven by triangulation, even if the results are detected by expectations.
"Our goal here is reliable code and good internal design, which means
that both schools of TDD are at times necessary - the Classic school
when we're focused on algorithms and the London school when we're
focused on interactions. Logic dictates that software of any
appreciable complexity cannot be either all algorithms or all
interactions (although, to be fair, I have actually seen applications
like these - and wish to never see such horrors ever again).
This is why my TDD training courses have one day of "Classic TDD" and
another day of "London school" TDD. It's not an either/or decision."
Sometimes it helps to actually read the post I guess :)
I agree with you. What he calls "London School" is more "TDD used to
model your domain" and "classic TDD" is more "TDD used to solve
algorithmic problems".
Still, I read the article several times but I didn't get the email example.
First, there are regexes to validate email addresses.
Second, without any code it's difficult for me to understand how he
thought the London School would have solved it without triangulation.
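To make the regex point concrete, here is a minimal sketch (my own, hypothetical — the post shows no code): even a simple validation pattern is easy to get wrong, which is why knowing the email-address domain matters more than any testing school.

```python
import re

# A deliberately simple pattern: a non-empty local part, one "@",
# and at least one dot in the domain. The real RFC 5322 grammar is
# far more permissive, which is exactly why "duff" test cases are
# easy to get wrong without knowing the domain.
SIMPLE_EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address: str) -> bool:
    return SIMPLE_EMAIL.match(address) is not None

assert is_valid_email("jason@parlezuml.com")
assert not is_valid_email("not-an-address")
```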
That's a very entertaining, well-written piece and I bet it's a good
way of drumming up customers for his training course, but
unfortunately it is full of factual and logical errors. I don't know why
he wrote what he did, since Jason has known us for years and we've
frequently spoken at XTC and many conferences.
His "history" of where his "London School" of TDD comes from is
incorrect -- it didn't come from banking projects in the City of
London. The mock objects technique was invented by the guys at
Connextra who were writing a contextual advertising system for the
web. I learned of it at XTC and applied it to the Ruby back-end of a
(what would now be called) AJAX mobile web-app I was writing to
display F1 race results on hand-held computers at race-tracks. I
ported my Ruby code to Java to support my development of an F1 race
strategy simulation, and shared the code with the folks at Connextra
-- Tim Mackinnon and Steve in particular -- and we extended it and
eventually open-sourced it. That evolved into jMock as more and more
people asked for features. The emphasis on driving from acceptance
tests and functional tests we picked up from the XP white book,
emphasised by our own experience of the limitations of unit-test-only
approaches.
By picking two different examples he's actually *not* showing the
difference between his two Schools. How does his "Classic School" of
TDD address the kind of larger-scale integration concerns he talks
about in his "London School" example? He doesn't actually say.
The Roman numerals example is a small, pure function and, as we say in
GOOS (despite it apparently being the key text of his other school of
TDD), naturally lends itself to being tested by making assertions on
return values.
The distinction he seems really to be drawing is not between
"algorithms" and "interaction". It's between pure functional (and
maybe procedural) and event-driven. There is, after all, an entire
field of study of distributed algorithms that are entirely
event-driven.
The idea that using mock objects means you do not do test
triangulation is also nonsense. But his example doesn't even support
that statement. The test of the sign-up controller would use a mock
object and would (it seems -- it's hard to tell without code) entirely
cover its behaviour. The problem he describes with the "London
School", where it does not perform the triangulation that comes from
his "Classic School" approach, is in the test of the EmailValidator.
This object takes an email address and returns true or false to
indicate its validity. That's a *pure* function -- it does not
involve mocking and would be tested with asserts, in the
style of his Classic School. So his example actually illustrates a
problem with the way he has applied "Classic School" TDD! Assuming by
"London School" he means GOOS, we certainly do not suggest skimping on
unit-testing individual classes so that their behaviour is not fully
covered.
(However, some of his "duff" email addresses are actually perfectly
correct -- test triangulation won't help you if you don't actually
know the domain!)
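The split described above can be sketched in a few lines. This is hypothetical code (the post and this thread show none, and `EmailValidator`/`SignupController` are names I'm inventing for illustration): the pure function gets plain assertions, the collaborating object gets a mock.

```python
from unittest.mock import Mock

class EmailValidator:
    """A pure function in object's clothing: address in, True/False out."""
    def is_valid(self, address: str) -> bool:
        # Toy rule: one "@" and a dot in the domain part.
        return "@" in address and "." in address.split("@")[-1]

class SignupController:
    """Interaction-heavy object: collaborates with a validator."""
    def __init__(self, validator):
        self.validator = validator

    def sign_up(self, address: str) -> str:
        return "welcome" if self.validator.is_valid(address) else "rejected"

# "Classic" style: triangulate the pure function with plain assertions.
validator = EmailValidator()
assert validator.is_valid("a@b.com")
assert not validator.is_valid("a@b")

# "London" style: assert on the controller's *interaction* with its
# collaborator, using a mock in place of the real validator.
mock_validator = Mock()
mock_validator.is_valid.return_value = True
controller = SignupController(mock_validator)
assert controller.sign_up("a@b.com") == "welcome"
mock_validator.is_valid.assert_called_once_with("a@b.com")
```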
Again, I see the problem with this article to be a focus on finely
detailed TDD *processes* that are to be blindly followed. His
conclusion is reasonable -- as long as you believe that the choice is
between following one detailed process vs. another.
In my opinion it's better to focus on the benefits of different design
styles in different contexts (there are usually many in the same
system) and what that implies for modularisation and inter-module
interfaces. Different design styles have different techniques that are
most applicable for test-driving code written in those styles, and
there are different tools that help you with those techniques. Those
tools should give useful feedback about the external and *internal*
quality of the system so that programmers can "listen to the tests".
That's what we -- with the help of many vocal users over many years
-- designed jMock to do for "Tell, Don't Ask" object-oriented design.
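A toy sketch of what "listening to the tests" means for "Tell, Don't Ask" (hypothetical names and Python's `unittest.mock` standing in for jMock):

```python
from unittest.mock import Mock

class TransferService:
    """Tells its collaborator what happened rather than exposing
    state for other code to query and branch on."""
    def __init__(self, audit_log):
        self.audit_log = audit_log

    def transfer(self, amount: int) -> None:
        # Tell, Don't Ask: push the event to the collaborator.
        self.audit_log.record("transfer", amount)

# The test listens to the conversation between the objects -- the
# kind of feedback a mock-object library is designed to give.
log = Mock()
TransferService(log).transfer(100)
log.record.assert_called_once_with("transfer", 100)
```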
> On 16 January 2011 18:43, Nat Pryce <nat....@gmail.com> wrote:
>> That's a very entertaining, well-written piece and I bet ...
> This really ought to be a blog post.
jMock is a nice library to do what KB preaches on mocking in
"Implementation Patterns"... still I wouldn't call it a "school" for
that.
I also consider GOOS the book offering the best whole view of
software development. Other books are more focused on specific topics.
So I think there is a "TDD done right school" and a "misunderstood TDD
school". But the first is not London specific... ;)
Like any technique, there is a huge "misunderstood" school. The "London" tag is an accident of history, but we did have a strong local group at the time that led to a number of innovations.
Of course you're much more qualified than me on this. ;)
But what I wanted to say is that (IMHO of course) the "GOOS way" is
not a different technique from "traditional TDD", rather a kind of
evolution. So presenting them as opposite schools on how to do TDD is
off the mark.
I had a discussion on this with a coworker so I rechecked in some
other books I have on OO design and TDD (Test-Driven Development,
Implementation Patterns, Refactoring to Patterns, Working Effectively
with Legacy Code, Clean Code).
In the end, my conclusion was that GOOS contains several innovations,
but it's on the same "stream". But as I said, maybe I'm too dense to
spot the differences.
Specifically on interfaces, my old position was to avoid any interface
with just one implementation. After reading GOOS I've started to use
them more freely, but I still feel a bit pedantic writing both an
interface and a class when I cannot find different meaningful names
for each of them.
GOOS examples are very clear; still, when writing our own software it's
harder to find those names.
> In the video, Jason shows interfaces called Boxoffice, Performance, Seat
> and implementation classes BoxofficeImpl etc.
> This is frowned upon in GOOS, but in cases like above, I usually can't
> find any real better name to give the implementation classes. What would
> you do in this case?
Quick answer: try a structural name of the form
<Technology>Based<Interface>.
Examples: JmsBasedPostOffice, HibernateBasedCustomerRepository.
If you can't see what about an implementation makes it different from the
underlying interface, then the interface might belong elsewhere. Try
moving up one level of abstraction.
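A small sketch of the idea (hypothetical names; the thread's examples are Java, this is the same shape in Python): the implementation class is named for what is structurally different about it, not `CustomerRepositoryImpl`.

```python
from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    """The role: something that can find customers by id."""
    @abstractmethod
    def find(self, customer_id: int): ...

class InMemoryCustomerRepository(CustomerRepository):
    """Named for its structure (in-memory storage), so the name
    tells you how this implementation differs from the interface."""
    def __init__(self):
        self._customers = {}

    def save(self, customer_id, customer):
        self._customers[customer_id] = customer

    def find(self, customer_id):
        return self._customers.get(customer_id)

repo = InMemoryCustomerRepository()
repo.save(1, "Alice")
assert repo.find(1) == "Alice"
```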
J. B. Rainsberger :: http://www.jbrains.ca ::
Thinking some more about this, I think the line that really bothers me
is "we move on to the next easiest failing test."
Maybe it's a poor choice of words but, taken at face value, I think it
illustrates a deep misunderstanding of TDD.
Firstly, TDD is not about writing a bunch of failing tests that we
make pass one at a time. TDD stipulates that we write one test at a
time, implement just enough to make it pass, make the implementation
as simple as possible (note: simple, not easy), and continue by
writing another test.
Secondly, we don't order tests by what is easy. We test what we need
to do next and we MAKE that easy to test. Making it easy to test might
actually be difficult to do: it might bring to light a shortcoming in
our design and force us to address it.
That, to me, is the essence of the TDD process: the experience of
writing a test gives us feedback about the quality of our design. And
I want that feedback to be met head on, not avoided by always
taking the easy route.
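The one-test-at-a-time rhythm described above might look like this on the post's Roman numerals example (a toy implementation of my own, not code from the post): each assertion was written first, then just enough code added to make it pass before generalising.

```python
def to_roman(n: int) -> str:
    # The table of subtractive pairs emerged from the tests below,
    # one value at a time, rather than being designed up front.
    numerals = [(10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in numerals:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

assert to_roman(1) == "I"    # first test: could be satisfied by hard-coding
assert to_roman(2) == "II"   # second test forces repetition
assert to_roman(4) == "IV"   # forces the subtractive pairs
assert to_roman(9) == "IX"
```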
I was trying to reply to his video, but it seems that YouTube
does not accept long texts! So I am gonna put it here! :)
I guess the "London school" is the natural way when you develop a
really object-oriented code. When your code grows, you start breaking
up responsibilities into different classes. So, as we have dependencies
between objects, we need to mock them in order to test the behavior of
each one in isolation.
I don't get it when you say that the "classic school" is focused on
algorithms. Even using the "classic school", when you are testing a system
that has grown into a big OO system, you will test the relations between
dependencies and you would end up using mock objects the same way.
As I have read both books, the main difference for me is that GOOS
is focused on objects and their relationships from the very
beginning. It tries to solve problems, but always thinking in objects.
That doesn't happen in Beck's book, which is more focused on sorting
the problem out in the simplest way, with the least possible code, one step
at a time.
Also, in the last e-mail, Nat talked about simple, not easy. I blogged
about it on my Brazilian Portuguese blog, but I translated it to
English a couple of weeks ago. In this post I talk about how "simple
changes may not lead you to simple solutions".
Any opinions would be appreciated.