The underlying issue, which I only really understood while writing the book, is that there are different approaches to OO. Ours comes under what Ralph Johnson calls the "mystical" school of OO, which is that it's all about messages. If you don't follow this, then much of what we talk about doesn't work well.
That said, I would also expect the balance of practices to be different in a dynamic language where we can't rely on the language to keep interfaces in sync--although the original ideas came from a(nother) bunch of Smalltalkers.
My approach is, I hope, much more akin to what you'd pick up from reading GOOS. I'd say a significant part of Brian's scepticism about mocks comes from never having worked with anyone who knows how to use them properly. He certainly seemed intrigued, from watching my session, by the idea that you didn't need to keep scurrying off down a rabbit warren each time you needed a collaborator class.
If you watch the videos, you'll also see we ended up with quite different designs, even for such a simple problem.
Looking at the presentation, it reminds me of a classic dialectical
technique: make your opponent's ideas look idiotic and then have fun
knocking them down. It's used by politicians everywhere; I hope this
is not the start of a new trend in tech conferences.
I think it's completely OK to dislike mocks, and it's also perfectly
OK to ignore them, but if you present a session about mocks and you
misrepresent them in this way, either you did it on purpose or you
didn't do any preparation for the session.
That said, I agree that it's sometimes difficult (at least for me) to
choose how much mocking to use and where to cut the layers... but you
can always ask:
1. Where exactly are stubs and spies better than mocks, and why?
2. (More important) Everything can be misused, so do you see something
wrong with the "correct" way to use mocks (let's take GOOS as the
reference)?
> style much of the mock advice won't work well. What I find in practice
> is that many teams don't recognise this and use mocks inappropriately.
In my experience, most teams don't use TDD at all. Most of the
misuses of mocks I've seen were on teams who wrote the tests
afterwards, or on teams who were forced to use overcomplex architectures.
That said, taking extreme positions is fine by me, but then what else
could you expect if not extreme reactions? ;)
"Hammers suck. They strip the threads from my screws."
Yes, a lack of skill will lead to brittle code and tests. And there are plenty of broken TDD codebases that don't use mocks. And even more broken codebases that don't use TDD. No technique can survive inadequately trained developers.
I only had the slides to go on, so I don't know if the words altered the position. Personally, I'm very, very tired of "mocks are bad because X", "Yes, that's why we never do X" arguments. There's a lot of misunderstanding in the world, much of it perpetrated by Big Names, which leads to exactly the sort of mess you've been seeing. If the teams that you were working with had been Listening to the Tests, rather than just typing furiously, then they wouldn't have gone down that hole.
It might have been more productive (if less noticeable) to have done a symptoms and cures talk. To me, taking an extreme anti-mocks position is getting a bit old.
personally, I don't mind that much whether interactions are specified before or after. I prefer before because that's what I'm used to and it looks a bit like a poor man's Design by Contract.
What does concern me is that all the interactions should be verified unless I turn them off explicitly. If the behaviour of the object under test changes, then I want my test to have changed too (preferably beforehand). If I need to default or ignore a significant proportion of the interactions then that's a clue that something's too big.
I would guess that there's an interesting project in the dynamic languages to write a tool that reads the expectations and cross-checks the signatures with candidate classes. In Smalltalk these could be defined as protocols. Ruby's less regular, so it would be harder to be consistent.
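As it happens, Python's stdlib `unittest.mock` already does something close to this cross-checking with autospeccing: the double is generated from a real class and rejects calls whose signatures don't match. A small sketch (the `Auction` class is invented for illustration):

```python
from unittest.mock import create_autospec

class Auction:
    def bid(self, amount, bidder):
        """Place a bid on behalf of a bidder."""

# An autospecced double cross-checks every call against the real
# class's method signatures, much like the tool described above.
double = create_autospec(Auction, instance=True)
double.bid(100, "sniper")        # matches the real signature: accepted

try:
    double.bid(100)              # wrong arity: rejected
except TypeError:
    signature_mismatch = True
```

This catches the interface-drift problem (expectations going stale when the collaborator's signature changes) that statically typed languages catch at compile time.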
I'd be happy to discuss why I dislike using mocks and prefer to use
stubs and spies in the cases where I want to verify that specific
messages were sent to collaborating objects.
I've been doing TDD for over 10 years and have seen numerous
situations where overusing mocks leads to brittle, hard-to-read
tests. Having moved from Java to Ruby over the last couple of years,
I'm seeing a lot of the problems I saw in Java code bases reappearing
in Ruby code bases. The goal of the presentation was to highlight
these problems, albeit by taking an extreme position.
I don't believe I particularly misrepresented mockist TDD in the
presentation but certainly emphasised the aspects that I dislike.
I completely agree with Steve's comment above that there are different
styles of OO (and TDD) and that if you don't follow the GOOS OO design
style much of the mock advice won't work well. What I find in practice
is that many teams don't recognise this and use mocks inappropriately.
By taking an extreme position in my presentation I hope that people
will discuss both views and as a result refine the advice on when to
use (or not use) mocks.
That's what we set out to do in the book.
When writing the book we were very careful to avoid any absolute
statements and to describe *our experience* of using a variety of TDD
tools (including mocks), and the context in which we found the tools
useful. We deliberately did not write in absolute terms ("mocks are
great" or "mocks suck", for example), because any tool is applicable
in some context. Outside that context any tool becomes less helpful
and, far enough away from the sweet spot, a hindrance. We wanted to
let the readers understand enough about the contexts we have
experienced that they could hopefully map them to their own contexts
and make informed decisions as to which tools are appropriate for
them. Anything else is, I think, a disservice to the audience and just
continues the parlous state of debate and education in the industry, in
which technologies are described almost like fundamentalist religious
beliefs instead of in terms of engineering trade-offs. No technology
is bad, it's all context dependent -- even Spring might be useful*
* only kidding, Spring fans!
That, I think, is the key difference in thinking. You're designing in
terms of inputs and outputs, mock objects (and jMock in particular)
aid designing in terms of state machines that communicate by message
passing protocols. In that model, a state transition is triggered by
an event (incoming message) and causes some observable effects
(outgoing messages). The transition rule -- trigger, effects, new
state -- is a single "chunk" of behaviour.
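That "single chunk" of trigger, effects, and new state can be sketched in a few lines. This is loosely modelled on the book's auction example, rendered in Python with stdlib `unittest.mock` so it is self-contained; the class and method names are illustrative, not the book's actual code.

```python
from unittest.mock import Mock

class Sniper:
    """One transition rule as a single chunk: trigger (incoming
    message) -> effects (outgoing messages) -> new state."""
    def __init__(self, auction, listener):
        self.auction, self.listener = auction, listener
        self.state = "joining"                  # housekeeping only

    def price_changed(self, price, increment):  # trigger: incoming message
        self.auction.bid(price + increment)     # effect: outgoing message
        self.listener.sniper_bidding()          # effect: outgoing message
        self.state = "bidding"                  # new state

auction, listener = Mock(), Mock()
sniper = Sniper(auction, listener)
sniper.price_changed(100, 5)

# The test specifies the message-passing protocol, not internal state.
auction.bid.assert_called_once_with(105)
listener.sniper_bidding.assert_called_once_with()
```

Note that the test never inspects `sniper.state`; the state is there only to coordinate future transitions, which is exactly the "housekeeping" point made below.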
> (although I notice that JMock has added 'allowing' to add more distinction).
JMock has always had that feature, although it was called "stubs" in jMock 1.
Sorry to repeat myself again (and again (and again ...))
THERE IS NO SUCH THING AS MOCKIST TDD!
There are different ways of designing how a system is organised into
modules and the interfaces between those modules. In one design, you
might split the system into modules that communicate by leaving data
in shared data structures. In another, you might split the system
into modules that communicate by synchronous message-passing. In
another, modules might communicate by lazily calculated streams, or by
async messaging between concurrent actors, or by content-based pub/sub.
Depending on how you design inter-module interfaces, you'll need
different tools to unit-test modules (whether TDD or not).
Mock Objects are designed for test-driving code that is modularised
into objects that communicate by "tell, don't ask" style message
passing. In that style, there is little visible state to assert about
because state is an implementation detail -- housekeeping used to
coordinate the message-passing protocols.
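A tiny sketch of what "tell, don't ask" leaves a test to work with: the object exposes no getters, so the only observable behaviour is the messages it sends. Again this uses Python's `unittest.mock` as a stand-in; the `Turnstile` example is invented for illustration.

```python
from unittest.mock import Mock

class Turnstile:
    """'Tell, don't ask': no getters expose state; behaviour is
    visible only through messages sent to collaborators."""
    def __init__(self, gate, counter):
        self._gate, self._counter = gate, counter
        self._locked = True                # private housekeeping

    def coin_inserted(self):
        self._locked = False
        self._gate.unlock()                # observable outgoing message
        self._counter.fare_collected()     # observable outgoing message

gate, counter = Mock(), Mock()
Turnstile(gate, counter).coin_inserted()

# With no visible state to assert on, the test checks outgoing messages.
gate.unlock.assert_called_once_with()
counter.fare_collected.assert_called_once_with()
```

In this style a state-based assertion would have to break encapsulation to reach `_locked`, which is precisely why mock objects fit this kind of design and not others.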
In other styles of code, procedural or functional, assertions on
queried state work fine. (In asynchronous code, they do not, because
of race conditions.)
Even in a system that uses Mock Objects appropriately in its
unit-tests, a significant part is purely functional in my experience,
and so will not be tested with mock objects.
Talking about Mockist TDD vs Statist TDD leads people to think in
terms of adopting a "TDD style" and rigidly following a process,
rather than understanding how to select from a wide array of tools
those that are appropriate for the job at hand.
Rant over (this time!).
I find that where the expectations are set affects the readability of the test, and that asserting on a spy after the action is more natural and readable.
In addition, a mock will fail if the implementation invokes a method on the collaborator that isn't specified in the test, whether or not I care about checking it.
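The two points above can be shown side by side. A minimal sketch in Python's stdlib `unittest.mock` (the `checkout` function is invented for illustration):

```python
from unittest.mock import Mock

def checkout(cart, logger):
    logger.info("checking out")
    return sum(cart)

# Spy style: the test reads top-to-bottom as setup -> action -> asserts.
logger = Mock()
total = checkout([3, 4], logger)
logger.info.assert_called_with("checking out")

# Strict-mock style inverts that: every interaction must be declared
# up front, so even a log call the test doesn't care about makes the
# test fail unless it appears in the setup.
strict_logger = Mock(spec=[])        # declares: no messages expected
try:
    checkout([3, 4], strict_logger)
except AttributeError:
    failed_on_unspecified_call = True
```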
This is an aspect of the use of mocks that I genuinely don't "get": the influence on the design that comes from using them.
What I'd like to find is the qualified advice on when to use mocks, which lies somewhere between always and never.
I have found that using verifying test doubles (spies and mocks both) as opposed to non verifying test doubles (stubs) creates positive pressure on your design by highlighting unhealthy dependencies.
Then that might be a symptom of unbalanced relationships between the object and its collaborators, or a sign that you should be using real objects in the test. One clue is if you're mocking out setters and getters, which suggests there isn't really an action in the collaboration.
>> If I need to default or ignore a significant proportion of the interactions then that's a clue that something's too big.
> I agree that people don't "listen to the tests" enough. In response to
> problem tests the options to consider also include changing the
> testing style to a more stateful style.
That might be an answer, or perhaps you're looking at the wrong interactions. I often find that tests that involve mocks are actually exercising a small cluster of objects: a small container and some helper and value objects. The focus should be on interactions that trigger changes in the environment.
Winner of the Agile Alliance Gordon Pask award 2006
That's a reasonable position. I prefer to think in terms of protocols between objects, so I like to keep the interactions together. Not a show stopper either way.
> (although I notice that JMock has added 'allowing' to add more distinction).
allowing has been there since the beginning. It was called stubbing in jMock 1, but Nat changed the terminology. I'm not sure where he took it from, but I know that it's used in Syntropy in event specifications. We've been banging on about "Stub queries, mock actions" for some years.
I think this is also why EasyMock has had such a pernicious effect, since it makes that distinction at the end of the line, where it's hard to see.
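A minimal illustration of "Stub queries, mock actions", using Python's stdlib `unittest.mock` in place of jMock so it runs standalone (the `settle` function is invented):

```python
from unittest.mock import Mock

def settle(account, ledger):
    balance = account.balance()    # query: no side effects
    ledger.record(balance)         # action: changes the world

account, ledger = Mock(), Mock()
account.balance.return_value = 42  # stub the query...
settle(account, ledger)

# ...and verify the action. The query's call count is deliberately
# not asserted: how often the code asks is an implementation detail.
ledger.record.assert_called_once_with(42)
```

Keeping queries as unverified stubs is what stops tests breaking when the implementation merely asks a question one extra time.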
Still not sure what you mean by "disadvantages of mocks".
Also, there are differences between (for example) Mockito and jMock in this respect.
> I also prefer that spies allow me to put asserts after the invocation
> of the method being tested, I consider it to be much more readable and
> again makes a clearer distinction between input and output.
I need to see the code here to understand what you mean.
Generally speaking I'm not very concerned by the "distinction between
input and output", I'm interested in testing the behaviour as a whole.
> I don't see anything wrong with the GOOS 'correct' way of using mocks
> but I think what people generally don't appreciate is the relationship
> between using mocks and the GOOS design style. What I find is that if
> your design style differs, or is constrained by using a framework,
> then mocking can lead to the brittle, unreadable test problems that I
> have often seen.
As Steve put it: "No technique can survive inadequately trained developers."
Anyway, my goal is improving my team's skills so that we get closer
to the GOOS design style.
So I want them to learn to use mocks in the correct way, nothing less. ;)
Technically speaking yes, but given its name and all its method names
I thought it was included in the "mocks suck" category.
So in the end it's all about jMock vs. Mockito? How boring...
>> I need to see the code here to understand what you mean.
>> Generally speaking I'm not very concerned by the "distinction between
>> input and output", I'm interested in testing the behaviour as a whole.
> Here is an example:
I don't see the point. Why would using jMock instead of Mockito in
that class result in less brittle tests and a better design?
You didn't. The discussion has been going on for over ten years. You
could have just joined in.
I'm really struggling to see how a team that doesn't have the skills to write mocks properly will do better with spies. I can understand that they will get away with inadequate design for longer, but is that really a good thing?
Is it really? I'm pretty sure that 90% of programmers cannot tell the difference.
I saw your examples. In the first case of the first example I think
you're also testing RoR, which could be a good thing or a bad thing
depending on the context.
In the second example I really don't understand why the second case is
less brittle than the first one.
>> So in the end it's all about jMock vs. Mockito? How boring...
> Not at all, it's about the misuse of mocks and making people aware of
> test spies as an alternative which may be more suitable depending on
> your design style.
More suitable => yes. More solid => no.
I can provide you with lots of tests using mockito in the wrong way,
but what's the point?
This one has a singleton and it's expecting on a query, rather than stubbing, which we don't recommend. Also, I might factor out the book params to show that a value is being passed through.
> Perhaps instead of input and output I should have said setup and
> asserts. So when I read a test I like to be able to clearly see what
> is setup code and what are asserts and I find that spies make that
> clearer. Here is another example that I think demonstrates that.
Here the expectation is buried in the setup; it should come afterwards. One of the advantages of the jMock initialiser hack is that it makes the protocol stand out. And the problem with the spy version is that I haven't made it explicit that I'm ignoring the other calls, rather than having forgotten about them.
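The "ignored vs forgotten" point can be sketched as follows, using Python's stdlib `unittest.mock` as a stand-in (the `publish` function is invented; the spec list plays roughly the role of jMock's explicit `allowing`):

```python
from unittest.mock import Mock

def publish(event, bus, metrics):
    bus.send(event)
    metrics.increment("published")

# Implicitly permissive: every call on `metrics` silently passes, so a
# reader can't tell whether it was ignored deliberately or forgotten.
bus, metrics = Mock(), Mock()
publish("order", bus, metrics)
bus.send.assert_called_once_with("order")

# Explicit, in the spirit of jMock's `allowing`: spell out which
# messages are permitted, so ignoring becomes a visible decision.
allowed_metrics = Mock(spec=["increment"])
publish("order", bus, allowed_metrics)
allowed_metrics.increment.assert_called_with("published")
```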
That's because a bunch of well-known names mischaracterised the technique from the very beginning, and we were too busy to write it up properly (and too unimportant to get heard). Dave Astels was the only author who took the trouble to understand. The damage continues to propagate, especially in the .Net world. That's why I'm so tired of these arguments.
> Would you say that using mock objects leads to
> the design style you describe or that you favour that design style and
> use mocks because they enable testing that design style?
There's a postscript in the book, copied on the mockobject.com website, that describes how we got there. In the spirit of Nat's recent rant, mocks arise naturally from doing responsibility-driven OO. All these mock/not-mock arguments are completely missing the point. If you're not writing that kind of code, people, please don't give me a hard time.
In some frameworks, you can specify never(), which is the same as not specifying but makes the intention clear.
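For example, `unittest.mock` spells the same intention as an assertion rather than an up-front expectation; the `refund` function here is invented for illustration:

```python
from unittest.mock import Mock

def refund(order, gateway):
    if order["paid"]:
        gateway.credit(order["amount"])

# The equivalent of jMock's never(): state the non-interaction
# explicitly instead of leaving it silently unspecified.
gateway = Mock()
refund({"paid": False, "amount": 10}, gateway)
gateway.credit.assert_not_called()
```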
I think it's reasonable for a set of tests to break because an expectation fails, especially if you can have it pop up the debugger at the right time. Controlling for single failures at this granularity seems a bit too precise for me.
Again, what's missing from this is making explicit which unspecified calls can be ignored and which have just been forgotten about.