Mocks Suck


Sageniuz

Dec 20, 2010, 12:33:25 PM
to Growing Object-Oriented Software
Hi everybody,

Because I'm a big fanboy of the book and I've learned so much from
reading it, I was a little shocked when I saw the title of Brian
Swan's presentation, "Mocks Suck".

Here's the link to the presentation:
http://www.testingtv.com/2010/12/20/mocks-suck-and-what-to-do-about-it/

I'd really recommend watching this presentation and would love to
discuss all of Brian's criticisms of "Mockist TDD".

Has somebody already seen the presentation? What are your thoughts?

Steve Freeman

Dec 20, 2010, 12:44:04 PM
to growing-object-o...@googlegroups.com
I worked through the slides a while ago and was as impressed as you'd expect me to be. As usual with these cases, he appears to be attacking us for stuff we don't do.

The underlying issue, which I only really understood while writing the book, is that there are different approaches to OO. Ours comes under what Ralph Johnson calls the "mystical" school of OO, which is that it's all about messages. If you don't follow this, then much of what we talk about doesn't work well.

That said, I would also expect the balance of practices to be different in a dynamic language where we can't rely on the language to keep interfaces in sync--although the original ideas came from a(nother) bunch of Smalltalkers.

S.

Matt Wynne

Dec 20, 2010, 1:12:42 PM
to growing-object-o...@googlegroups.com
I know Brian, and in fact I recently did a 'head to head' session at ScotRUG where we solved the same problem in our different styles. Both recitals took about half an hour; links here: http://blog.mattwynne.net/2010/08/31/outside-in-vs-inside-out-comparing-tdd-approaches/

My approach is, I hope, much more akin to what you'd pick up from reading GOOS. I'd say a significant part of Brian's scepticism about mocks comes from never having worked with anyone who knows how to use them properly. Watching my session, he certainly seemed intrigued by the idea that you don't need to keep scurrying off down a rabbit warren each time you need a collaborator class.

If you watch the videos, you'll also see we ended up with quite different designs, even for such a simple problem.

cheers,
Matt

ma...@mattwynne.net
07974 430184

Uberto Barbini

Dec 20, 2010, 1:43:49 PM
to growing-object-o...@googlegroups.com

Looking at the presentation, it reminds me of a classic dialectic
technique: make your opponent's ideas look idiotic and then have fun
demolishing them.
It's used by politicians everywhere; I hope this is not the start of a
new trend at tech conferences.

I think it's completely OK to dislike mocks, and it's also perfectly
OK to ignore them, but if you present a session about mocks and you
misrepresent them in this way, either you do it on purpose or you
didn't do any preparation for the session.

That said, I agree that sometimes it's difficult (at least for me) to
choose how much mocking to use and where to cut the layers... but you
can always refactor later.

cheers

Uberto

Brian Swan

Dec 29, 2010, 12:39:28 PM
to Growing Object-Oriented Software
Hi all,

I'd be happy to discuss why I dislike using mocks and prefer to use
stubs and spies in the cases where I want to verify that specific
messages were sent to collaborating objects.

I've been doing TDD for over 10 years and have seen numerous
situations where overusing mocks leads to brittle, hard to read test
cases. Having moved from Java to Ruby over the last couple of years
I'm seeing a lot of the problems I saw in Java code bases reappearing
in Ruby code bases. The goal of the presentation was to highlight
these problems, albeit by taking an extreme position.

I don't believe I particularly misrepresented mockist TDD in the
presentation but certainly emphasised the aspects that I dislike.

I completely agree with Steve's comment above that there are different
styles of OO (and TDD) and that if you don't follow the GOOS OO design
style much of the mock advice won't work well. What I find in practice
is that many teams don't recognise this and use mocks inappropriately.

By taking an extreme position in my presentation I hope that people
will discuss both views and as a result refine the advice on when to
use (or not use) mocks.

Regards

Brian

Uberto Barbini

Dec 29, 2010, 5:50:58 PM
to growing-object-o...@googlegroups.com
Hi, Brian
I agree with pretty much everything you wrote here, but I think you
should try to explain two things:

1. Where exactly are stubs and spies better than mocks, and why?
2. (More important) Everything can be misused, so do you see anything
wrong with the "correct" way to use mocks (let's take GOOS as the
example)?

> style much of the mock advice won't work well. What I find in practice
> is that many teams don't recognise this and use mocks inappropriately.

In my experience, most teams don't use TDD at all. Most of the
misuses of mocks I've seen were in teams who wrote the tests later or
in teams who were forced to use overcomplex architectures.

That said, taking extreme positions is fine by me, but then what else
could you expect if not extreme reactions? ;)

cheers

Uberto

Steve Freeman

Dec 30, 2010, 4:23:01 AM
to growing-object-o...@googlegroups.com
Thanks for engaging with the list. So...

"Hammers suck. They strip the threads from my screws."

Yes, a lack of skill will lead to brittle code and tests. And there are plenty of broken TDD codebases that don't use mocks. And even more broken codebases that don't use TDD. No technique can survive inadequately trained developers.

I only had the slides to go on, so I don't know if the words altered the position. Personally, I'm very, very tired of "mocks are bad because X", "Yes, that's why we never do X" arguments. There's a lot of misunderstanding in the world, much of it perpetrated by Big Names, which leads to exactly the sort of mess you've been seeing. If the teams that you were working with had been Listening to the Tests, rather than just typing furiously, then they wouldn't have gone down that hole.

It might have been more productive (if less noticeable) to have done a symptoms and cures talk. To me, taking an extreme anti-mocks position is getting a bit old.

S.

Steve Freeman

Dec 30, 2010, 4:32:18 AM
to growing-object-o...@googlegroups.com
On 29 Dec 2010, at 18:39, Brian Swan wrote:
> I'd be happy to discuss why I dislike using mocks and prefer to use
> stubs and spies in the cases where I want to verify that specific
> messages were sent to collaborating objects.

personally, I don't mind that much whether interactions are specified before or after. I prefer before because that's what I'm used to and it looks a bit like a poor man's Design by Contract.

What does concern me is that all the interactions should be verified unless I turn them off explicitly. If the behaviour of the object under test changes, then I want my test to have changed too (preferably beforehand). If I need to default or ignore a significant proportion of the interactions then that's a clue that something's too big.

I would guess that there's an interesting project in the dynamic languages to write a tool that reads the expectations and cross-checks the signatures with candidate classes. In Smalltalk these could be defined as protocols. Ruby's less regular, so it would be harder to be consistent.
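(Editor's note: a tool along these lines exists in at least one dynamic language. Python's standard-library unittest.mock can build a double whose methods are signature-checked against a real class, via create_autospec. A minimal sketch, with a made-up AuctionHouse class purely for illustration:)

```python
from unittest.mock import create_autospec

# A hypothetical "real" collaborator class for the example.
class AuctionHouse:
    def bid(self, amount, bidder):
        ...

# create_autospec copies the real method signatures onto the double,
# so an expectation that drifts out of sync fails loudly instead of
# passing silently.
mock_house = create_autospec(AuctionHouse, instance=True)

mock_house.bid(100, "alice")  # matches the real signature: accepted

try:
    mock_house.bid(100, "alice", "extra")  # too many arguments
    signature_checked = False
except TypeError:
    signature_checked = True

try:
    mock_house.withdraw()  # no such method on the real class
    attribute_checked = False
except AttributeError:
    attribute_checked = True
```

The same cross-checking idea is what Steve describes: the tool reads the expectations and rejects any that no candidate class could satisfy.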

S.

Brian Swan

Dec 30, 2010, 12:31:26 PM
to Growing Object-Oriented Software
Hi Uberto,

> 1. where exactly stubs and spies are better than mocks and why.

Better is obviously subjective but for me stubs and spies let me do
the same things you can do with mocks but without what I consider to
be the disadvantages of mocks.

I prefer that stubs distinctly provide input and that spies
distinctly check indirect outputs. With mocks I find the distinction
between input and output less obvious (although I notice that jMock
has added 'allowing' to add more distinction).

I also prefer that spies allow me to put asserts after the invocation
of the method being tested; I consider that much more readable, and
again it makes a clearer distinction between input and output.
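(Editor's note: the ordering difference Brian describes can be sketched with hand-rolled doubles. Python is used here only for illustration; OrderProcessor and the mailer names are invented:)

```python
# A hypothetical object under test that sends a message to a collaborator.
class OrderProcessor:
    def __init__(self, mailer):
        self.mailer = mailer

    def place(self, order):
        self.mailer.send(f"confirmed: {order}")

# Spy style: the double just records what was sent to it, making no
# demands up front. The asserts come after the action, in execution order.
class SpyMailer:
    def __init__(self):
        self.sent = []

    def send(self, message):
        self.sent.append(message)

spy = SpyMailer()
OrderProcessor(spy).place("order-42")
assert spy.sent == ["confirmed: order-42"]  # assert after the invocation

# Mock style: the expectation is declared before the action, then verified.
class MockMailer:
    def __init__(self, expected):
        self.expected, self.received = expected, []

    def send(self, message):
        self.received.append(message)

    def verify(self):
        assert self.received == self.expected

mock = MockMailer(expected=["confirmed: order-42"])
OrderProcessor(mock).place("order-42")
mock.verify()
```

Both tests check the same interaction; they differ only in whether the specification reads before or after the call to the method under test.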

> 2. (more important) everything can be misused, so do you see something
> wrong with the "correct" way to use mocks (let's take GOOS as
> example).

I don't see anything wrong with the GOOS 'correct' way of using mocks
but I think what people generally don't appreciate is the relationship
between using mocks and the GOOS design style. What I find is that if
your design style differs, or is constrained by using a framework,
then mocking can lead to the brittle, unreadable test problems that I
have often seen.

Brian

Brian Swan

Dec 30, 2010, 1:01:03 PM
to Growing Object-Oriented Software
> If the behaviour of the object under test changes, then I want my test to have changed too (preferably beforehand).

I too would expect to change the test first if the behaviour of the
object under test changes. However the failure mode that most concerns
me with using mocks is when the behaviour doesn't change but a
refactoring of the object under test results in test failures because
the mock is effectively duplicating the implementation.

> If I need to default or ignore a significant proportion of the interactions then that's a clue that something's too big.

I agree that people don't "listen to the tests" enough. In response
to problem tests, the options to consider also include changing to a
more stateful testing style.

Brian

J. B. Rainsberger

Dec 30, 2010, 1:58:38 PM
to growing-object-o...@googlegroups.com
On Wed, Dec 29, 2010 at 12:39, Brian Swan <bgs...@gmail.com> wrote:

> I'd be happy to discuss why I dislike using mocks and prefer to use
> stubs and spies in the cases where I want to verify that specific
> messages were sent to collaborating objects.

Spies and mocks do the same thing, so I don't understand this statement. Spies only differ from mocks in when you set the expectations: spies after the action, mocks before.

> I've been doing TDD for over 10 years and have seen numerous
> situations where overusing mocks leads to brittle, hard to read test
> cases.

Overusing spies results in the same problems.
 
> Having moved from Java to Ruby over the last couple of years
> I'm seeing a lot of the problems I saw in Java code bases reappearing
> in Ruby code bases. The goal of the presentation was to highlight
> these problems, albeit by taking an extreme position.
>
> I don't believe I particularly misrepresented mockist TDD in the
> presentation but certainly emphasised the aspects that I dislike.

> I completely agree with Steve's comment above that there are different
> styles of OO (and TDD) and that if you don't follow the GOOS OO design
> style much of the mock advice won't work well. What I find in practice
> is that many teams don't recognise this and use mocks inappropriately.

I have found that using verifying test doubles (spies and mocks both) as opposed to non-verifying test doubles (stubs) creates positive pressure on your design by highlighting unhealthy dependencies. Verifying test doubles encourage me to do something about the unhealthy dependencies, whereas non-verifying test doubles help me tolerate them. When designing new elements, I prefer my tools to encourage me to fix design problems early, so that I don't let them linger and grow; when changing legacy code, I prefer my tools to let me decide which of the many existing design problems I want to address at the moment.

As a result, I posit two central inappropriate uses of verifying test doubles:

1. When trying to change code with existing, deep design problems,
2. When modeling queries, rather than commands

> By taking an extreme position in my presentation I hope that people
> will discuss both views and as a result refine the advice on when to
> use (or not use) mocks.

How clear have you made that intent in your presentation? I haven't read it, so don't interpret that as a passive aggressive question, because I genuinely don't know.
-- 
J. B. (Joe) Rainsberger :: http://www.jbrains.ca :: http://blog.thecodewhisperer.com
Diaspar Software Services :: http://www.diasparsoftware.com
Author, JUnit Recipes
2005 Gordon Pask Award for contribution to Agile practice :: Agile 2010: Learn. Practice. Explore.

Brian Swan

Dec 30, 2010, 4:48:47 PM
to Growing Object-Oriented Software
>
> Spies and mocks do the same thing, so I don't understand this statement.
> Spies only differ from mocks in when you set the expectations: spies after
> the action, mocks before.

Spies and mocks do the same thing but in different ways. I find that
where the expectations are set affects the readability of the test,
and that asserting on a spy after the action is more natural and
readable. In addition, a mock will fail if the implementation invokes
a method on the collaborator that isn't specified in the test, whether
or not I care about checking it. With a spy I can choose what I want
to assert in the test, making the test less brittle in the face of
refactoring the implementation.
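(Editor's note: the difference Brian is pointing at can be made concrete with hand-rolled doubles, sketched here in Python with a hypothetical Reporter under test. The spy allows any call and lets the test pick what to assert; a strict mock rejects any interaction that wasn't specified:)

```python
# The object under test makes one call we care about and one
# incidental housekeeping call.
class Reporter:
    def __init__(self, log):
        self.log = log

    def report(self, event):
        self.log.info(event)
        self.log.flush()  # incidental housekeeping call

# Spy: records every call but allows anything; the test chooses
# afterwards which interactions to assert on.
class SpyLog:
    def __init__(self):
        self.calls = []

    def __getattr__(self, name):
        return lambda *args: self.calls.append((name, args))

spy = SpyLog()
Reporter(spy).report("deploy")
# Only assert on the interaction we care about; flush() is ignored.
assert ("info", ("deploy",)) in spy.calls

# Strict mock: any call not specified up front is an error.
class StrictMockLog:
    def __init__(self, allowed):
        self.allowed, self.calls = allowed, []

    def __getattr__(self, name):
        if name not in self.allowed:
            raise AssertionError(f"unexpected call: {name}")
        return lambda *args: self.calls.append((name, args))

# A strict mock that only expects info() fails when the implementation
# also calls flush() -- the brittleness under refactoring Brian means.
strict = StrictMockLog(allowed={"info"})
try:
    Reporter(strict).report("deploy")
    strict_failed = False
except AssertionError:
    strict_failed = True
assert strict_failed
```

As Dale notes later in the thread, real mocking frameworks differ here: some are strict by default, others only in a "strict" mode.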

>
> Overusing spies results in the same problems.
>

I find less so, because the asserts are limited to those methods on
the collaborator that I care about checking, rather than having to
specify (or explicitly ignore) all the methods the implementation
invokes on the mock.

> I have found that using verifying test doubles (spies and mocks both) as
> opposed to non-verifying test doubles (stubs) creates positive pressure on
> your design by highlighting unhealthy dependencies.

This is an aspect of the use of mocks that I genuinely don't "get":
the influence on the design that comes from using mocks. I've always
had an interest in different techniques for software design and would
like to understand this aspect more. What I'm not sure about is how
much mocks influence the design, versus how much mocks are simply
useful when testing code written in a certain style.

>
> As a result, I posit two central inappropriate uses of verifying test
> doubles:
>
> 1. When trying to change code with existing, deep design problems,
> 2. When modeling queries, rather than commands
>

I like this advice. I remember something from Bertrand Meyer's Object-
Oriented Software Construction, his 'Advice to the advisors' about
giving 'always' or 'never' or qualified 'always' or 'never' advice.

What I'd like to find is the qualified advice on when to use mocks,
which lies somewhere between always and never.

> By taking an extreme position in my presentation I hope that people
>
> > will discuss both views and as a result refine the advice on when to
> > use (or not use) mocks.
>
> How clear have you made that intent in your presentation? I haven't read it,
> so don't interpret that as a passive aggressive question, because I
> genuinely don't know.
> --

It's not made clear in the presentation.

Brian

Nat Pryce

Dec 30, 2010, 7:58:10 PM
to growing-object-o...@googlegroups.com
On 30 December 2010 21:48, Brian Swan <bgs...@gmail.com> wrote:
> I like this advice. I remember something from Bertrand Meyers Object
> Oriented Software Construction, his 'Advice to the advisors' about
> giving 'always' or 'never' or qualified 'always' or 'never' advice.
>
> What I'd like to find is the qualified advice on when to use mocks,
> which lies somewhere between always and never.

That's what we set out to do in the book.

When writing the book we were very careful to avoid any absolute
statements and to describe *our experience* of using a variety of TDD
tools (including mocks), and the context in which we found the tools
useful. We deliberately did not write in absolute terms ("mocks are
great" or "mocks suck", for example), because any tool is applicable
in some context. Outside that context any tool becomes less helpful
and, far enough away from the sweet spot, a hindrance. We wanted to
let the readers understand enough about the contexts we have
experienced that they could hopefully map them to their own contexts
and make informed decisions as to which tools are appropriate for
them. Anything else is, I think, a disservice to the audience and just
continues the parlous state of debate and education in the industry,
in which technologies are described almost like fundamentalist religious
beliefs instead of in terms of engineering trade-offs. No technology
is bad, it's all context dependent -- even Spring might be useful
somewhere, maybe*...

--Nat

* only kidding, Spring fans!

--
http://www.natpryce.com

Nat Pryce

Dec 30, 2010, 8:07:08 PM
to growing-object-o...@googlegroups.com
On 30 December 2010 17:31, Brian Swan <bgs...@gmail.com> wrote:
> I prefer that stubs are distinctly providing input and that spies are
> checking indirect outputs. With mocks I find the distinction between
> input and output less obvious

That, I think, is the key difference in thinking. You're designing in
terms of inputs and outputs, mock objects (and jMock in particular)
aid designing in terms of state machines that communicate by message
passing protocols. In that model, a state transition is triggered by
an event (incoming message) and causes some observable effects
(outgoing messages). The transition rule -- trigger, effects, new
state -- is a single "chunk" of behaviour.

> (although I notice that JMock has added 'allowing' to add more distinction).

JMock has always had that feature, although it was called "stubs" in jMock 1.

--Nat

--
http://www.natpryce.com

Nat Pryce

Dec 30, 2010, 8:22:48 PM
to growing-object-o...@googlegroups.com
On 29 December 2010 17:39, Brian Swan <bgs...@gmail.com> wrote:
> I don't believe I particularly misrepresented mockist TDD in the
> presentation but certainly emphasised the aspects that I dislike.

Sorry to repeat myself again (and again (and again ...))

THERE IS NO SUCH THING AS MOCKIST TDD!

There are different ways of designing how a system is organised into
modules and the interfaces between those modules. In one design, you
might split the system into modules that communicate by leaving data
in shared data structures. In another, you might split the system
into modules that communicate by synchronous message-passing. In
another, modules might communicate by lazily calculated streams, or by
async messaging between concurrent actors, or by content-based pub/sub
events, or...

Depending on how you design inter-module interfaces, you'll need
different tools to unit-test modules (whether TDD or not).

Mock Objects are designed for test-driving code that is modularised
into objects that communicate by "tell, don't ask" style message
passing. In that style, there is little visible state to assert about
because state is an implementation detail -- housekeeping used to
coordinate the message-passing protocols.
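(Editor's note: Nat's point about state being private housekeeping can be sketched with the classic turnstile example, hypothetical here, in Python. The object exposes no getters; the test can only observe its outgoing messages:)

```python
# Tell-don't-ask: the turnstile keeps its state private and
# communicates by sending messages to a collaborator.
class Turnstile:
    def __init__(self, gate):
        self._gate = gate
        self._paid = False  # housekeeping, not exposed via a getter

    def insert_coin(self):
        self._paid = True

    def push(self):
        if self._paid:
            self._gate.open()
            self._paid = False
        else:
            self._gate.alarm()

# With no visible state to assert on, the test observes the
# outgoing messages instead.
class RecordingGate:
    def __init__(self):
        self.messages = []

    def open(self):
        self.messages.append("open")

    def alarm(self):
        self.messages.append("alarm")

gate = RecordingGate()
turnstile = Turnstile(gate)
turnstile.push()         # not paid: expect an alarm message
turnstile.insert_coin()
turnstile.push()         # paid: expect an open message
assert gate.messages == ["alarm", "open"]
```

In this style the interesting specification is the message-passing protocol (push triggers open or alarm), not the value of any queried state.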

In other styles of code, procedural or functional, assertions on
queried state work fine. (In asynchronous code they do not, because
of race conditions.)

Even in a system that uses Mock Objects appropriately in its
unit-tests, a significant part is purely functional in my experience,
and so will not be tested with mock objects.

Talking about Mockist TDD vs Statist TDD leads people to think in
terms of adopting a "TDD style" and rigidly following a process,
rather than understanding how to select from a wide array of tools
those that are appropriate for the job at hand.

Rant over (this time!).

--Nat
--
http://www.natpryce.com

Dale Emery

Dec 31, 2010, 12:28:32 AM
to growing-object-o...@googlegroups.com
Hi Brian,

> I find that where the expectations are set affects the readability of the test, and that asserting on a spy after the action is more natural and readable.

I too prefer to describe the desired results after the action that leads to the results. It nicely fits the way I think about responsibilities: some context exists, some stimulus or event occurs in that context, and some object has the obligation to produce one or more planned results in response.

Describing responsibilities in other orders always feels awkward to me, as in the old joke: If I had some bread I could make a ham sandwich, if I had some ham.

> In addition a mock will fail if the implementation invokes a method on the collaborator that isn't specified in the test, whether or not I care about checking it.

I don't think there's anything about mocks that says they must by default fail when other-than-specified methods are called. I've used mocking frameworks that allowed other calls by default; in order to convince them to reject other calls, you had to put them into "strict" mode. So perhaps your complaint is about a particular mocking framework, and not about mocks per se.

> This is an aspect in the use of mocks that I genuinely don't "get", the influence on the design that comes from using mocks.

I tend to think of the influence as being in the other direction: My design choices often create collaborations in which mocks are useful.

It's possible that the influence also sometimes flows the other way, and I'm not aware of it.

> What I'd like to find is the qualified advice on when to use mocks, which lies somewhere between always and never.

My personal guideline is this: When I give an object the responsibility to send a message to another object, mocks offer a nice way to describe that responsibility.

Dale

--
Dale Emery
Consultant to software teams and leaders
Web: http://dhemery.com

Dale Emery

Dec 31, 2010, 12:30:30 AM
to growing-object-o...@googlegroups.com
Hi Joe,

> I have found that using verifying test doubles (spies and mocks both) as opposed to non-verifying test doubles (stubs) creates positive pressure on your design by highlighting unhealthy dependencies.

Can you give an example or two?

Dale

Steve Freeman

Dec 31, 2010, 4:54:09 AM
to growing-object-o...@googlegroups.com
On 30 Dec 2010, at 19:01, Brian Swan wrote:
>> If the behaviour of the object under test changes, then I want my test to have changed too (preferably beforehand).
>
> I too would expect to change the test first if the behaviour of the
> object under test changes. However the failure mode that most concerns
> me with using mocks is when the behaviour doesn't change but a
> refactoring of the object under test results in test failures because
> the mock is effectively duplicating the implementation.

Then that might be a symptom of unbalanced relationships between the object and its collaborators, or a sign that you should be using real objects in the test. One clue is if you're mocking out setters and getters, which suggests there isn't really an action in the collaboration.

>> If I need to default or ignore a significant proportion of the interactions then that's a clue that something's too big.
>
> I agree that people don't "listen to the tests" enough. In response to
> problem tests the options to consider also include changing the
> testing style to a more stateful style.

That might be an answer, or perhaps you're looking at the wrong interactions. I often find that tests that involve mocks are actually exercising a small cluster of objects: a small container and some helper and value objects. The focus should be on interactions that trigger changes in the environment.

S.

Steve Freeman

Winner of the Agile Alliance Gordon Pask award 2006
Book: http://www.growing-object-oriented-software.com

+44 (0) 797 179 4105
M3P Limited. http://www.m3p.co.uk
Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
Company registered in England & Wales. Number 03689627

Steve Freeman

Dec 31, 2010, 5:01:37 AM
to growing-object-o...@googlegroups.com
On 30 Dec 2010, at 18:31, Brian Swan wrote:
> I prefer that stubs are distinctly providing input and that spies are
> checking indirect outputs. With mocks I find the distinction between
> input and output less obvious

That's a reasonable position. I prefer to think in terms of protocols between objects, so I like to keep the interactions together. Not a show stopper either way.

> (although I notice that JMock has added 'allowing' to add more distinction).

allowing has been there since the beginning. It was called stubbing in jMock 1, but Nat changed the terminology. I'm not sure where he took it from, but I know that it's used in Syntropy in event specifications. We've been banging on about "Stub queries, mock actions" for some years.
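(Editor's note: "Stub queries, mock actions" can be sketched with Python's unittest.mock; Checkout, find_price, and charge are invented names. The query gets a canned answer and is merely allowed, while the outgoing command is the interaction the test verifies:)

```python
from unittest.mock import Mock

# A hypothetical object under test: buy() asks the catalogue a
# question (query) and tells the till to do something (action).
class Checkout:
    def __init__(self, catalogue, till):
        self.catalogue = catalogue
        self.till = till

    def buy(self, item):
        price = self.catalogue.find_price(item)  # query: stub it
        self.till.charge(price)                  # action: mock it

catalogue = Mock()
catalogue.find_price.return_value = 42  # the 'allowing' side: canned answer

till = Mock()
Checkout(catalogue, till).buy("book")

# The action is the point of the behaviour, so verify it explicitly.
till.charge.assert_called_once_with(42)
```

The query stub carries no expectation of its own; only the command is asserted on, which keeps the test focused on the interaction that matters.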

I think this is also why EasyMock has had such a pernicious effect, since it makes that distinction at the end of the line, where it's hard to see.

S.

Uberto Barbini

Dec 31, 2010, 5:32:06 AM
to growing-object-o...@googlegroups.com
>> 1. where exactly stubs and spies are better than mocks and why.
>
> Better is obviously subjective but for me stubs and spies let me do
> the same things you can do with mocks but without what I consider to
> be the disadvantages of mocks.

I'm still not sure what you mean by "disadvantages of mocks".
Also there are differences between (for example) Mockito and jMock in
practical use.

> I also prefer that spies allow me to put asserts after the invocation
> of the method being tested, I consider it to be much more readable and
> again makes a clearer distinction between input and output.

I need to see the code here to understand what you mean.
Generally speaking I'm not very concerned by the "distinction between
input and output", I'm interested in testing the behaviour as a whole.

> I don't see anything wrong with the GOOS 'correct' way of using mocks
> but I think what people generally don't appreciate is the relationship
> between using mocks and the GOOS design style. What I find is that if
> your design style differs, or is constrained by using a framework,
> then mocking can lead to the brittle, unreadable test problems that I
> have often seen.

As Steve put it: "No technique can survive inadequately trained developers."
Anyway, my target is improving my team's skills to get close to the
GOOS design-style model.
So I want them to learn to use mocks in the correct way, nothing less. ;)

Cheers

Uberto

Esko Luontola

Dec 31, 2010, 6:56:13 AM
to Growing Object-Oriented Software
On Dec 31, 12:32 pm, Uberto Barbini <ube...@ubiland.net> wrote:
> > Better is obviously subjective but for me stubs and spies let me do
> > the same things you can do with mocks but without what I consider to
> > be the disadvantages of mocks.
>
> still not sure what you mean with "disadvantages of mocks".
> Also there are differences between (for example) mockito and JMock in
> practical use.

Mockito creates spies, not mocks. That is what makes it different from
JMock.

http://code.google.com/p/mockito/wiki/FAQ

> > I also prefer that spies allow me to put asserts after the invocation
> > of the method being tested, I consider it to be much more readable and
> > again makes a clearer distinction between input and output.
>
> I need to see the code here to understand what you mean.
> Generally speaking I'm not very concerned by the "distinction between
> input and output", I'm interested in testing the behaviour as a whole.

Here is an example:
https://github.com/orfjackal/tdd-tetris-tutorial/blob/beyond/src/test/java/tetris/RemovingFullRowsTest.java

Uberto Barbini

Dec 31, 2010, 9:11:03 AM
to growing-object-o...@googlegroups.com
On Fri, Dec 31, 2010 at 12:56 PM, Esko Luontola <esko.l...@gmail.com> wrote:
> On Dec 31, 12:32 pm, Uberto Barbini <ube...@ubiland.net> wrote:
>> > Better is obviously subjective but for me stubs and spies let me do
>> > the same things you can do with mocks but without what I consider to
>> > be the disadvantages of mocks.
>>
>> still not sure what you mean with "disadvantages of mocks".
>> Also there are differences between (for example) mockito and JMock in
>> practical use.
>
> Mockito creates spies, not mocks. That is what makes it different from
> JMock.
>
> http://code.google.com/p/mockito/wiki/FAQ

Technically speaking yes, but given its name and all its method names
I thought it was included in "mocks suck".

So in the end it's all about jMock vs. Mockito? How boring...

>> I need to see the code here to understand what you mean.
>> Generally speaking I'm not very concerned by the "distinction between
>> input and output", I'm interested in testing the behaviour as a whole.
>
> Here is an example:
> https://github.com/orfjackal/tdd-tetris-tutorial/blob/beyond/src/test/java/tetris/RemovingFullRowsTest.java

I don't see the point. Why would using jMock instead of Mockito in
that class result in less brittle tests and a better design?

cheers

Uberto

Brian Swan

Dec 31, 2010, 9:40:08 AM
to Growing Object-Oriented Software

On Dec 31, 12:58 am, Nat Pryce <nat.pr...@gmail.com> wrote:
> Outside that context any tool becomes less helpful
> and, far enough away from the sweet spot, a hindrance. We wanted to
> let the readers understand enough about the contexts we have
> experienced that they could hopefully map them to their own contexts
> and make informed decisions as to which tools are appropriate for
> them.

Unfortunately what I've seen in many of the teams I've worked with is
a much less nuanced approach to using mock objects than that presented
in the GOOS book, similar to the pattern explosion that is often seen
after reading the GoF book.

> Anything else is, I think, a disservice to the audience and just
> continues the parlous state of debate and education the industry in
> which technologies are described almost like fundamentalist religious
> beliefs instead of in terms of engineering trade-offs.

I agree. It's unfortunate that I had to use such a blunt instrument to
get people to notice and have this discussion. I'd really like to
see more tools and techniques being upfront about their sweet spots/
trade-offs; it's what first attracted me to both XP and Ruby on Rails.

Brian

Brian Swan

Dec 31, 2010, 10:09:56 AM
to Growing Object-Oriented Software
On Dec 31, 1:07 am, Nat Pryce <nat.pr...@gmail.com> wrote:
>
> That, I think, is the key difference in thinking.  You're designing in
> terms of inputs and outputs, mock objects (and jMock in particular)
> aid designing in terms of state machines that communicate by message
> passing protocols.  In that model, a state transition is triggered by
> an event (incoming message) and causes some observable effects
> (outgoing messages).  The transition rule -- trigger, effects, new
> state  -- is a single "chunk" of behaviour.
>

I think you've hit the nail on the head here. I tend to think of
objects using design by contract, so my tests follow a pattern of:
invoke a command and assert on queries. I'm rarely interested in
verifying the interactions between collaborators, and when I need to,
I find spies a better fit.

I'm not sure that the relationship between mock objects and design
style is well known. Would you say that using mock objects leads to
the design style you describe or that you favour that design style and
use mocks because they enable testing that design style?

Brian

Nat Pryce

Dec 31, 2010, 11:24:08 AM
to growing-object-o...@googlegroups.com
On 31 December 2010 14:40, Brian Swan <bgs...@gmail.com> wrote:
> it's unfortunate that I had to use such a blunt instrument to
> get people to notice and to have this discussion.

You didn't. The discussion has been going on for over ten years. You
could have just joined in.

--Nat

--
http://www.natpryce.com

Brian Swan

Dec 31, 2010, 11:24:25 AM
to Growing Object-Oriented Software


On Dec 31, 10:32 am, Uberto Barbini <ube...@ubiland.net> wrote:

>
> I need to see the code here to understand what you mean.
> Generally speaking I'm not very concerned by the "distinction between
> input and output", I'm interested in testing the behaviour as a whole.

Here is an example of a common use (abuse) of mocks from a Rails app
demonstrating what I see as the disadvantages of poor readability and
brittleness.

https://gist.github.com/761102

Perhaps instead of input and output I should have said setup and
asserts. So when I read a test I like to be able to clearly see what
is setup code and what are asserts and I find that spies make that
clearer. Here is another example that I think demonstrates that.

https://gist.github.com/761110

>
> Anyway my target is improving my team skills to go near the GOOS
> design style model.

Where I have seen problems is applying mocks without using the GOOS
design style.

Brian

Brian Swan

unread,
Dec 31, 2010, 11:32:09 AM12/31/10
to Growing Object-Oriented Software

On Dec 31, 2:11 pm, Uberto Barbini <ube...@ubiland.net> wrote:
>
> technically speaking yes, but given its name, all its method names I
> thought it was included in the "mocks suck".

I actually used Mockito in the presentation as an example of a test
spy framework as an alternative to mocking. The terminology of the
Mockito API is unfortunate.

>
> So at the end it's all about JMocks vs. Mockito? how boring...
>

Not at all, it's about the misuse of mocks and making people aware of
test spies as an alternative which may be more suitable depending on
your design style.

Brian

Steve Freeman

unread,
Dec 31, 2010, 12:03:31 PM12/31/10
to growing-object-o...@googlegroups.com
On 31 Dec 2010, at 17:32, Brian Swan wrote:
>> So at the end it's all about JMocks vs. Mockito? how boring...
>
> Not at all, it's about the misuse of mocks and making people aware of
> test spies as an alternative which may be more suitable depending on
> your design style.

I'm really struggling to see how a team that doesn't have the skills to write mocks properly will do better with spies. I can understand that they will get away with inadequate design for longer, but is that really a good thing?

S.

Uberto Barbini

unread,
Dec 31, 2010, 12:20:43 PM12/31/10
to growing-object-o...@googlegroups.com
On Fri, Dec 31, 2010 at 5:32 PM, Brian Swan <bgs...@gmail.com> wrote:
>
> On Dec 31, 2:11 pm, Uberto Barbini <ube...@ubiland.net> wrote:
>>
>> technically speaking yes, but given its name, all its method names I
>> thought it was included in the "mocks suck".
>
> I actually used Mockito in the presentation as an example of a test
> spy framework as an alternative to mocking. The terminology of the
> Mockito API is unfortunate.

Is it really? I'm pretty sure that 90% of programmers cannot tell the
difference.

I saw your examples. In the first case of the first example I think
you're also testing RoR; that could be a good thing or a bad thing
depending on the context.
In the second example I really don't understand why the second case is
less brittle than the first one.

>
>>
>> So at the end it's all about JMocks vs. Mockito? how boring...
>>
>
> Not at all, it's about the misuse of mocks and making people aware of
> test spies as an alternative which may be more suitable depending on
> your design style.

More suitable => yes. More solid => no.
I can provide you with lots of tests using mockito in the wrong way,
but what's the point?


cheers

Uberto

Steve Freeman

unread,
Dec 31, 2010, 1:07:10 PM12/31/10
to growing-object-o...@googlegroups.com
On 31 Dec 2010, at 17:24, Brian Swan wrote:
> Here is an example of a common use (abuse) of mocks from a Rails app
> demonstrating what I see as the disadvantages of poor readability and
> brittleness.
>
> https://gist.github.com/761102

This one has a singleton and it's expecting on a query, rather than stubbing, which we don't recommend. Also, I might factor out the book params to show that a value is being passed through.
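
The "stub queries, expect commands" split Steve is pointing at can be sketched in plain Ruby with hand-rolled doubles (hypothetical names such as BookUpdater, not the code from the gist):

```ruby
# The object under test: finds via a query, changes things via a command.
class BookUpdater
  def initialize(repository)
    @repository = repository
  end

  def update(id, attributes)
    book = @repository.find(id)        # query: stubbed, never expected
    book.update_attributes(attributes) # command: the thing we verify
  end
end

# Stub for the query: returns a canned answer and is never asserted on.
class RepositoryStub
  def initialize(book)
    @book = book
  end

  def find(_id)
    @book
  end
end

# Hand-rolled double for the command: captures what it was told to do.
class BookDouble
  attr_reader :updated_with

  def update_attributes(attributes)
    @updated_with = attributes
  end
end

book = BookDouble.new
BookUpdater.new(RepositoryStub.new(book)).update("37", :title => "Mocks rock!")
```

If the query's return value changes, only the stub's canned answer changes; the test's single assertion stays on the command.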

> Perhaps instead of input and output I should have said setup and
> asserts. So when I read a test I like to be able to clearly see what
> is setup code and what are asserts and I find that spies make that
> clearer. Here is another example that I think demonstrates that.
>
> https://gist.github.com/761110

Here the expectation is buried in the setup; it should come afterwards. One of the advantages of the jMock initialiser hack is that it makes the protocol stand out. And the problem with the spy version is that I haven't made explicit whether I'm ignoring other calls or have simply forgotten about them.

S.

Steve Freeman

unread,
Dec 31, 2010, 1:15:00 PM12/31/10
to growing-object-o...@googlegroups.com
> I'm not sure that the relationship between mock objects and design
> style is well known.

That's because a bunch of well-known names mischaracterised the technique from the very beginning, and we were too busy to write it up properly (and too unimportant to get heard). Dave Astels was the only author who took the trouble to understand. The damage continues to propagate, especially in the .Net world. That's why I'm so tired of these arguments.

> Would you say that using mock objects leads to
> the design style you describe or that you favour that design style and
> use mocks because they enable testing that design style?

There's a postscript in the book, copied on the mockobjects.com website, that describes how we got there. In the spirit of Nat's recent rant, mocks arise naturally from doing responsibility-driven OO. All these mock/not-mock arguments are completely missing the point. If you're not writing that kind of code, people, please don't give me a hard time.

S.

Brian Swan

unread,
Dec 31, 2010, 1:39:55 PM12/31/10
to Growing Object-Oriented Software

On Dec 31, 5:03 pm, Steve Freeman <st...@m3p.co.uk> wrote:
> I'm really struggling to see how a team that doesn't have the skills to write mocks properly will do better with spies.

My understanding is that using mocks "properly" is strongly related to
designing in the GOOS style. What I'm seeing is teams trying to use
mocks when the design style differs and having the problems I
presented. These teams are not necessarily aware of spies as an
alternative.

> I can understand that they will get away with inadequate design for longer, but is that really a good thing?

There are many ways to design a system; a team misusing mocks does
not mean that their design is inadequate. As Matt Wynne
posted earlier, he and I did an exercise based on this paper
http://blogs.agilefaqs.com/wp-content/uploads/2007/12/avatars-of-tdd.pdf
at ScotRUG to build the same app test first but using different
approaches. Matt used mocks, I didn't but I don't think my design was
"inadequate". The code is here for my solution https://github.com/bgswan/tddavatar2
and here for Matt's https://github.com/mattwynne/tddavatar/

What I think I'm getting at is that there is a relationship between
design style and use of mocks that most people aren't aware of. When
people try to apply mocks to a different type of design they have
problems. Basically what you and Nat have said earlier about using a
tool outside of its sweet spot, just that people don't know what the
sweet spot is.

Brian

Brian Swan

unread,
Dec 31, 2010, 2:04:08 PM12/31/10
to Growing Object-Oriented Software

On Dec 31, 6:15 pm, Steve Freeman <st...@m3p.co.uk> wrote:
> The damage continues to propagate, especially in the .Net world. That's why I'm so tired of these arguments.

Also in the Ruby world.

> If you're not writing that kind of code, people, please don't give me a hard time.

I think we're actually in total agreement here (except for me giving
you a hard time, sorry). I think it would be fair to say that my
presentation could be retitled 'Mocks suck, when used out of context',
without changing the content.

It seems the message of what the correct context is continues to be
missed.

Happy new year all.

Brian

Esko Luontola

unread,
Jan 1, 2011, 4:37:31 AM1/1/11
to Growing Object-Oriented Software
On Dec 31 2010, 4:11 pm, Uberto Barbini <ube...@ubiland.net> wrote:
> I don't see the point. Why would using jMock instead of Mockito in that
> class result in less brittle tests and better design?

Compare this spy version https://github.com/orfjackal/tdd-tetris-tutorial/blob/beyond/src/test/java/tetris/RemovingFullRowsTest.java
with this mock version https://github.com/orfjackal/tdd-tetris-tutorial/blob/beyond/src/test/java/tetris/easymock/EasyMockExample_RemovingFullRowsTest.java

When using this style, setting the expectations before the action
confuses the tests: the expectations have to be set in the setup
instead of each feature having its own test. And in the last test
there is no easy way to say that the method was not called.

In addition to the style breakage, if one of those expectations fails,
then the tests for unrelated features break. This makes the tests
highly coupled because there is more than one reason why a test could
fail. As J.B. Rainsberger would say, they depend on more than one
piece of interesting behavior - they are integrated tests.

Steve Freeman

unread,
Jan 1, 2011, 5:12:39 AM1/1/11
to growing-object-o...@googlegroups.com
Agreed, in this format, there isn't a good place to put expectations. Using setup just hides the relationship.

In some frameworks, you can specify never(), which is the same as not specifying but makes the intention clear.

I think it's reasonable for a set of tests to break because an expectation fails, especially if you can have it pop up the debugger at the right time. Controlling for single failures at this granularity seems a bit too precise for me.

Again, what's missing from this is making explicit which unspecified calls can be ignored and which have just been forgotten about.
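
A hand-rolled sketch of that distinction in plain Ruby (illustrative only, not any framework's API): the spy records everything it receives, and the test then states the negative explicitly instead of leaving the reader to guess whether an unchecked call was ignored or forgotten.

```ruby
# A spy that records every message sent to it.
class RecordingSpy
  attr_reader :received

  def initialize
    @received = []
  end

  # Accept any message and log its name.
  def method_missing(name, *_args)
    @received << name
    nil
  end

  def respond_to_missing?(_name, _include_private = false)
    true
  end
end

auditor = RecordingSpy.new
auditor.log_in("alice")

was_called = auditor.received.include?(:log_in)
# The explicit negative, the spy-world analogue of a never() expectation:
never_called = !auditor.received.include?(:lock_account)
```

Without the second assertion, a reader cannot tell whether lock_account being unchecked was a decision or an oversight.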

S.

Steve Freeman

unread,
Jan 1, 2011, 5:13:47 AM1/1/11
to growing-object-o...@googlegroups.com
On 31 Dec 2010, at 20:04, Brian Swan wrote:
>> If you're not writing that kind of code, people, please don't give me a hard time.
> I think we're actually in total agreement here (except for me giving
> you a hard time, sorry). I think it would be fair to say that my
> presentation could be retitled 'Mocks suck, when used out of context',
> without changing the content.

Perhaps saying that explicitly at the time would have been more helpful...

Here's to the New Year.

S.


Steve Freeman

unread,
Jan 1, 2011, 5:19:01 AM1/1/11
to growing-object-o...@googlegroups.com
On 31 Dec 2010, at 19:39, Brian Swan wrote:
>> I can understand that they will get away with inadequate design for longer, but is that really a good thing?
> There are many ways to design a system, because a team is misusing
> mocks does not mean that their design is inadequate. As Matt Wynne
> posted earlier, he and I did an exercise based on this paper
> http://blogs.agilefaqs.com/wp-content/uploads/2007/12/avatars-of-tdd.pdf
> at ScotRUG to build the same app test first but using different
> approaches. Matt used mocks, I didn't but I don't think my design was
> "inadequate". The code is here for my solution https://github.com/bgswan/tddavatar2
> and here for Matt's https://github.com/mattwynne/tddavatar/

If the unit tests are getting large enough that they have to rely on defaults and implicit ignores, then that seems wrong to me. Not least because I can't see the relationship between all the inputs and outputs. One of the frustrations is that Mockito was written to help with a codebase that was going sour; it has many fine qualities, but it was fixing the wrong problem.

> What I think I'm getting at is that there is a relationship between
> design style and use of mocks that most people aren't aware of. When
> people try to apply mocks to a different type of design they have
> problems. Basically what you and Nat have said earlier about using a
> tool outside of it's sweet spot, just that people don't know what the
> sweet spot is.

So, perhaps you could help to spread the word? None of our design practices are new.

S

Kim Gräsman

unread,
Jan 1, 2011, 5:31:06 AM1/1/11
to growing-object-o...@googlegroups.com
Brian and all,

On Fri, Dec 31, 2010 at 19:39, Brian Swan <bgs...@gmail.com> wrote:
>
> What I think I'm getting at is that there is a relationship between
> design style and use of mocks that most people aren't aware of. When
> people try to apply mocks to a different type of design they have
> problems. Basically what you and Nat have said earlier about using a
> tool outside of it's sweet spot, just that people don't know what the
> sweet spot is.

Also, I think there's a relationship between the type of system and
suitable design style -- I think Nat commented at some point that the
GOOS style works well for behavior-heavy reactive systems and not so
well for more data-oriented batch systems. I'd love to hear Steve and
Nat expand more on that, especially if I'm mischaracterizing...

I was really enthusiastic about the book, but I've found it hard to
use some of the ideas. I wonder if that's because I work in a
different family of systems or because I haven't tried hard enough.

Thanks,
- Kim

Steve Freeman

unread,
Jan 1, 2011, 12:13:08 PM1/1/11
to growing-object-o...@googlegroups.com
I know of at least one batch system it's been used for. And, after all, a batch can be viewed as one really large event :)

What got in the way of things working for you?

S.

Kim Gräsman

unread,
Jan 1, 2011, 2:00:27 PM1/1/11
to growing-object-o...@googlegroups.com
Hi Steve,

On Sat, Jan 1, 2011 at 18:13, Steve Freeman <st...@m3p.co.uk> wrote:
> I know of at least one batch system it's been used for. And, after all, a batch can be viewed as one really large event :)
>
> What got in the way of things working for you?

Nothing specific. Old habits, I think, and the relative lo-fi-ness of
C++, both wrt test- and mock frameworks and the constraints the
language puts on design that make things like value objects, role
interfaces, etc, less idiomatic and more costly.

Thanks,
- Kim

Steve Freeman

unread,
Jan 2, 2011, 4:54:12 AM1/2/11
to growing-object-o...@googlegroups.com
Ah, C++. Then you have my sympathies :)

S.

David Chelimsky

unread,
Jan 2, 2011, 11:53:23 AM1/2/11
to growing-object-o...@googlegroups.com
On Dec 31 2010, 12:07 pm, Steve Freeman <st...@m3p.co.uk> wrote:
> On 31 Dec 2010, at 17:24, Brian Swan wrote:
>
> > Here is an example of a common use (abuse) of mocks from a Rails app
> > demonstrating what I see as the disadvantages of poor readability and
> > brittleness.
>
> >https://gist.github.com/761102
>
> This one has a singleton and it's expecting on a query, rather than stubbing, which we don't recommend. Also, I might factor out the book params to show that a value is being passed through.

Ugh!

Well, folks. I have to claim full responsibility for this
transgression. I wrote the rspec-rails generator that creates rspec
code examples (ok, some people like to call them tests) that are built
around code generated by Rails. I recently changed the generator so
that instead of this:

mock_book = mock(:book)
Book.should_receive(:find).with("37") { mock_book }
mock_book.should_receive(:update_attributes).with({:these => 'params'})
put :update, :id => "37", :book => {:these => 'params'}

it now does this:

mock_book = mock(:book)
Book.stub(:find).with("37") { mock_book }
mock_book.should_receive(:update_attributes).with({:these => 'params'})
put :update, :id => "37", :book => {:these => 'params'}

The difference is that the query against the singleton becomes a stub,
but there is still a message expectation that the instance receives a
command. I think this is closer to "goosher" (pronounced GO-sher to
rhyme with closer, and means kosher in a GOOS world. With any luck,
this won't stick), but I still have a question:

The stub still constrains the arguments ("37") on the query, so it
will fail if the implementation fails to call Book.find("37"). The
resulting binding between the test and the implementation is no
different, except now a failure will not point the developer to the
right problem. If the implementation uses a different argument, the
message expectation will fail but there won't be a direct indication
as to why. Similarly, if the implementation changes the method it uses
to find the instance, this will fail, but again, the developer won't
know exactly why. Leaving the message expectation on the query makes
it such that either of these changes will result in a test failure
pointing directly to the change that was just made, and that strikes
me as a good thing. Two questions come from that:

1. do you agree that is a good thing?
2. if so, how much weight does it carry considering all the other
context here?

re: the binding to the implementation, if we loosen the stub so that
it doesn't constrain the argument(s):

Book.stub(:find) { mock_book }

then we're in the same boat. A failure will not tell the developer
what changed to cause the failure.
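
The diagnostic gap can be sketched with a hand-rolled stand-in for the constrained stub (plain Ruby, hypothetical names):

```ruby
# Behaves like a stub constrained with `.with("37")`: the canned answer
# comes back only for the expected argument, and anything else gets nil.
class ConstrainedFindStub
  def initialize(expected_id, result)
    @expected_id = expected_id
    @result = result
  end

  def find(id)
    id == @expected_id ? @result : nil
  end
end

book = Object.new
finder = ConstrainedFindStub.new("37", book)

matched = finder.find("37")  # the canned book comes back
drifted = finder.find("42")  # implementation drift: the constraint silently misses
# `drifted` is nil, so a later expectation on `book` would fail with a
# message about update_attributes never being received, with nothing
# pointing at the real cause: the argument mismatch on the query.
```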

Now a completely different approach would be to not use stubs or
message expectations at all:

book = Book.create!(:title => "Mocks suck!")
put :update, :id => book.id, :book => {:title => "Mocks rock!"}
book.reload.title.should eq("Mocks rock!")

That's all fine and dandy, but presents some interesting problems for
a code generator because it has to know something about the
attributes. It also presents new problems because now it's bound
tightly to the validation rules (and other behavior) of the Book,
whereas using a test double for the book isolates you from all of
that. Now this is all solvable, but it starts to get quite convoluted.
Imagine:

book = Book.create!(valid_attributes)
put :update, :id => book.id, :book => valid_attributes.merge(valid_attributes.first => variant_of(valid_attributes.first))
book.reload.title.should eq(variant_of(valid_attributes.first))

Eeeek, I say! This has the benefit of being very black-boxy re:
implementation, but it also has the drawback of being black-boxy re:
the test itself.

Another common solution is to use any_instance:

book = Book.create!(valid_attributes)
Book.any_instance.expects(:update_attributes).with({'these' => 'params'})
put :update, :id => "37", :book => {'these' => 'params'}

The thing that strikes me as odd about this one is that we create an
instance of book, and then say "any instance" but we _really mean this
instance_. Unfortunately we can't put that expectation on the instance
in the test, because ActiveRecord does not use an identity map, so the
object returned by Book.find("37") is not the same object as the one
in the test.

I could go on and on here, but I'll stop at this point. I welcome any
thoughts, suggestions, criticisms, praise, or money, if that suits
you. My goal here is for the generator to generate code that does a
few different things:

* demonstrates "tests as documentation"
* demonstrates isolation between layers
* demonstrates the difference between stubs and message expectations
* completely covers the code generated by Rails
* is readable
* points developers in the right direction when the wheels fall off
* makes Nat and Steve smile

Not necessarily in that order.

Thanks for listening, and I look forward to ensuing lambasting.

David Chelimsky

unread,
Jan 2, 2011, 1:59:54 PM1/2/11
to Growing Object-Oriented Software
FYI - my intent was to spawn a separate thread about this, but
apparently google groups had a different idea in mind and changed the
name of this discussion instead. Apologies for the unintended
hijacking.

Steve Freeman

unread,
Jan 2, 2011, 4:11:09 PM1/2/11
to growing-object-o...@googlegroups.com
Phew! Thanks for joining in.

I'm struggling a little with this because I'm not sure what this is testing. It looks like it's exercising persistence, in which case it might be best to write integration tests that persist and retrieve objects for real.

Alternatively, if 'book' is really just a struct with a bit of updating, then perhaps it would be better to just use a real one and check its contents afterwards.

Further, I'm not sure that I understand the point of code-genning unit tests. What are they trying to prove? Wouldn't it be better to test the hell out of some examples (Nat calls these Guinea Pig objects) and then assume that the framework works? Also, there's no opportunity for the tests to influence the design since that's fixed already.

So, what failures are common in this world?

S.

Kim Gräsman

unread,
Jan 2, 2011, 4:50:56 PM1/2/11
to growing-object-o...@googlegroups.com
And you have mine for being stuck in Java. To each their own pain ;-)

- Kim

David Chelimsky

unread,
Jan 2, 2011, 5:05:22 PM1/2/11
to Growing Object-Oriented Software
On Jan 2, 3:11 pm, Steve Freeman <st...@m3p.co.uk> wrote:
> Phew! Thanks for joining in.

Thanks for the reply, Steve. First, a bit of background:

When you use a Rails generator, it generates code _in your app_ for
you. It is boilerplate code, but it is not encapsulated behind an
interface. You get stuff like this:

def update
  @gadget = Gadget.find(params[:id])

  respond_to do |format|
    if @gadget.update_attributes(params[:gadget])
      format.html { redirect_to(@gadget, :notice => 'Gadget was successfully updated.') }
      format.xml  { head :ok }
    else
      format.html { render :action => "edit" }
      format.xml  { render :xml => @gadget.errors, :status => :unprocessable_entity }
    end
  end
end

That's one of seven methods generated in the controller. All of that
code is exposed to the developer and is subject to change by the
developer, and therefore subject to breakage by the developer.

If you're not using rspec, the generator also generates a functional
test that covers only the happy paths, so the conditional logic in the
generated controller is not really covered.

When you plug in rspec, rails delegates to rspec to generate the
functional test, which rspec calls a "controller spec." The idea is
that it is an object level, or unit test, of the responsibilities of
the controller, and is intended to be isolated from the model and view
layers.

With that context, let me answer some of your questions:

> I'm struggling a little with this because I'm not sure what this is testing. It looks like it's exercising persistence, in which case it might be best to write integration tests that persist and retrieve objects for real.

The intent is to focus on the controller object itself. There should
be end-to-end tests elsewhere in the test suite. Perhaps we should
just stick with those and bag these entirely, as long as the
complementary end-to-end tests get generated. Even if they do, there
are still some tricky problems to solve vis-a-vis generating unhappy
path scenarios when we have no way of knowing what the validation
rules (which lead to the unhappy paths) will be.

> Alternatively, if 'book' is really just a struct with a bit of updating, then perhaps it would be better to just use a real one and check its contents afterwards.

That could work, but we still need to coerce the model class to return
the object we want to inspect.

> Further, I'm not sure that I understand the point of code-genning unit tests. What are they trying to prove? Wouldn't it be better to test the hell out of some examples (Nat calls these Guinea Pig objects) and then assume that the framework works?

The code being specified is not framework code - it's generated code
that uses framework code.

> Also, there's no opportunity for the tests to influence the design since that's fixed already.

Agreed.

> So, what failures are common in this world?

The main problem is that there is a fair amount of code that gets
generated. Without thorough coverage, refactoring can lead to
functional failures that are not exposed by tests. So I think the
generator has a responsibility to generate tests sufficient to cover
the generated code.

Beyond that responsibility, we want the generated tests to demonstrate
how to use the tools. And this is where it gets sticky, because it's
very likely that if we were using the tools to write actual code, that
the resulting code would look nothing like the code that is actually
generated :)

OK - maybe that's enough for the 2nd day of the year.

Thoughts so far?

Cheers,
David

J. B. Rainsberger

unread,
Jan 3, 2011, 1:55:23 PM1/3/11
to growing-object-o...@googlegroups.com
On Fri, Dec 31, 2010 at 00:30, Dale Emery <da...@dhemery.com> wrote:
Hi Joe,

I have found that using verifying test doubles (spies and mocks both), as opposed to non-verifying test doubles (stubs), creates positive pressure on your design by highlighting unhealthy dependencies.

Can you give an example or two?

I admit that that's hard to do, because all the serious examples from my past are in proprietary code bases to which I no longer have access. I can only describe one general case in general terms.

I've had the experience of refactoring code tested with mocks and having to change tests just because of the mocks they use, even though the intent of the test really hasn't changed. This usually means that I've uncovered an unnecessary dependency on a too-fine-grained interface, which encourages me to introduce a more abstractly-designed interface that better expresses intent with less detail. I tend to find these problems sooner with mocks than without them.
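
One toy version of that pressure, in plain Ruby with hypothetical names: a caller that pokes three fine-grained methods forces every double to script all three in order, while collapsing them into one intent-revealing message shrinks both the interface and the test.

```ruby
# Before: the client drives a too-fine-grained interface, so any test
# double for `output` has to script open/write/close in lockstep.
class ChattyReporter
  def initialize(output)
    @output = output
  end

  def report(message)
    @output.open
    @output.write(message)
    @output.close
  end
end

# After: one intent-revealing message; a double only needs `publish`,
# and the test no longer breaks when the open/write/close detail moves.
class QuietReporter
  def initialize(output)
    @output = output
  end

  def report(message)
    @output.publish(message)
  end
end

class PublishSpy
  attr_reader :published

  def publish(message)
    @published = message
  end
end

spy = PublishSpy.new
QuietReporter.new(spy).report("build failed")
```

The brittle three-call double is the smell; the refactoring it prompts is the design benefit.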
-- 
J. B. (Joe) Rainsberger :: http://www.jbrains.ca :: http://blog.thecodewhisperer.com
Diaspar Software Services :: http://www.diasparsoftware.com
Author, JUnit Recipes
2005 Gordon Pask Award for contribution to Agile practice :: Agile 2010: Learn. Practice. Explore.

J. B. Rainsberger

unread,
Jan 3, 2011, 2:12:24 PM1/3/11
to growing-object-o...@googlegroups.com
Indeed. This is one time when I think some patronising is in order. I apologise for that.

Brian, one sign of maturity is knowing the difference among "X sucks", "I don't like X" and "I sometimes find X gets in my way". Another sign is routinely taking into account the difference when articulating oneself.

David Peterson

unread,
Jan 3, 2011, 2:19:20 PM1/3/11
to growing-object-o...@googlegroups.com
Yeah, Brian, maybe if you'd have called it "Mocks are a scam" it would have been better... :)

Brian Swan

unread,
Jan 3, 2011, 3:09:47 PM1/3/11
to Growing Object-Oriented Software
Joe,

I'm sure you can appreciate the value of a provocative title to a
presentation. I don't think there was anything in the presentation
itself that mis-represented mock objects. I was essentially presenting
the poor use of mocks that I've seen many times in practice.

Brian

J. B. Rainsberger

unread,
Jan 3, 2011, 4:49:36 PM1/3/11
to growing-object-o...@googlegroups.com
On Mon, Jan 3, 2011 at 14:19, David Peterson <da...@crowdsoft.com> wrote:

Yeah, Brian, maybe if you'd have called it "Mocks are a scam" it would have been better... :)

Oh… you want me to follow my own advice? Dammit. Point taken, and objection withdrawn.
-- 

Lance Walton

unread,
Jan 3, 2011, 5:31:22 PM1/3/11
to growing-object-o...@googlegroups.com
The problem that I have with provocative titles is that I have to deal with swathes of incompetent dolts who read the title and maybe the first paragraph and then think they understand. Worse still, they can refer to 'an authority on the internet' who wrote a great blog / did a great presentation about mocks sucking / integration tests being a scam.

How about we all agree to make the title say what we mean?

Dale Emery

unread,
Jan 3, 2011, 6:14:06 PM1/3/11
to growing-object-o...@googlegroups.com
Hi Lance,


How about we all agree to make the title say what we mean?

Or append each deliberately provocative title with "ha ha j/k"

Dale ha ha j/k

--
Dale Emery ha ha j/k
Consultant to software teams and leaders ha ha j/k
Web: http://dhemery.com ha ha j/k

David Chelimsky

unread,
Jan 3, 2011, 6:55:21 PM1/3/11
to growing-object-o...@googlegroups.com
On Jan 3, 2011, at 5:14 PM, Dale Emery wrote:

Hi Lance,

How about we all agree to make the title say what we mean?

Or append each deliberately provocative title with "ha ha j/k"

Dale ha ha j/k

Do you find your name deliberately provocative?

David ha ha in all seriousness.

J. B. Rainsberger

unread,
Jan 3, 2011, 2:02:25 PM1/3/11
to growing-object-o...@googlegroups.com
On Thu, Dec 30, 2010 at 16:48, Brian Swan <bgs...@gmail.com> wrote:
>
> Spies and mocks do the same thing, so I don't understand this statement.
> Spies only differ from mocks in when you set the expectations: spies after
> the action, mocks before.

Spies and mocks do the same thing but in different ways. I find where
the expectations are set to affect the readability of the test and
that asserting on a spy after the action to be more natural and
readable. In addition a mock will fail if the implementation invokes a
method on the collaborator that isn't specified in the test, whether
or not I care about checking it. With a spy I can make the choice of
what I want to assert in the test, making the test less brittle in the
face of refactoring the implementation.

1. Brian Marick uses a style of mocking in Ruby that allows him to specify the expectations at the end. The corresponding Java syntax would be so ugly that few would use it, so I haven't seen a port of his idea to Java. I would call this a Java/Ruby problem rather than a mock/spy problem.

2. If you need to check two unrelated bits of behavior in a test that executes the same code, then perhaps you've violated the Single Responsibility Principle. Try splitting that behavior and compare the resulting design to what you already have.

(Dale, this is an example of what I meant about positive pressure.)

> Overusing spies results in the same problems.

I find less so as a result of the asserts being limited to those
methods on the collaborator that I care about checking rather that
having to specify all (or explicitly ignore) the methods invoked by
the implementation on the mock.

Overusing spies results in either brittle tests or not detecting overly-detailed interface boundaries. Mocks simply tend to result in the first kind of problem and not the second. Both indicate it's time to review the design.

> I have found that using verifying test doubles (spies and mocks both) as
> opposed to non-verifying test doubles (stubs) creates positive pressure on

> your design by highlighting unhealthy dependencies.

This is an aspect in the use of mocks that I genuinely don't "get",
the influence on the design that comes from using mocks. I've always
had an interest in different techniques for software design and would
like to understand this aspect more. What I'm not sure about is how
much mocks influence the design versus mocks are useful when testing
code written in a certain style.

We'd have to pair to give you a deeper answer, but I've given the superficial answer above. Mocks strongly encourage you to deal with overly-detailed interfaces sooner, so as to avoid brittle tests.
 
> As a result, I posit two central inappropriate uses of verifying test
> doubles:
>
> 1. When trying to change code with existing, deep design problems,
> 2. When modeling queries, rather than commands

I like this advice. I remember something from Bertrand Meyer's Object-
Oriented Software Construction, his 'Advice to the advisors', about
giving 'always' or 'never' or qualified 'always' or 'never' advice.

I try to give "here's what works for me" advice.
 
What I'd like to find is the qualified advice on when to use mocks,
which lies somewhere between always and never.

I think I give that kind of advice. I think Steve and Nat do, as well.
 
> By taking an extreme position in my presentation I hope that people
>
> > will discuss both views and as a result refine the advice on when to
> > use (or not use) mocks.
>
> How clear have you made that intent in your presentation? I haven't read it,
> so don't interpret that as a passive aggressive question, because I
> genuinely don't know.

It's not made clear in the presentation.

Please consider making it clear. :)
-- 

J. B. Rainsberger

unread,
Jan 3, 2011, 4:50:04 PM1/3/11
to growing-object-o...@googlegroups.com
On Mon, Jan 3, 2011 at 15:09, Brian Swan <bgs...@gmail.com> wrote:
 
I'm sure you can appreciate the value of a provocative title to a
presentation. I don't think there was anything in the presentation
itself that mis-represented mock objects. I was essentially presenting
the poor use of mocks that I've seen many times in practice.

I agree, and as David pointed out, I'm hardly one to talk.
-- 

diabolo

unread,
Jan 4, 2011, 6:35:40 AM1/4/11
to Growing Object-Oriented Software


On Dec 31 2010, 4:24 pm, Brian Swan <bgs...@gmail.com> wrote:
> On Dec 31, 10:32 am, Uberto Barbini <ube...@ubiland.net> wrote:
>
>
>
> > I need to see the code here to understand what you mean.
> > Generally speaking I'm not very concerned by the "distinction between
> > input and output", I'm interested in testing the behaviour as a whole.
>
> Here is an example of a common use (abuse) of mocks from a Rails app
> demonstrating what I see as the disadvantages of poor readability and
> brittleness.
>
> https://gist.github.com/761102
>

Your "common" example is generated, is actually uncommon because it is
a controller spec (most speccing in Rails is done, or should be done,
on the model), and is arguably out of context because the whole idea
of unit testing controllers in this way is questionable. Most of the
problems with this example are to do with the architecture of Rails.

Your improvement hits the database and looks for changes in the model.
What you have written is an integration test for the controller model
stack, not a unit spec for the controller. You are not comparing like
with like.

All best

Andrew

> Perhaps instead of input and output I should have said setup and
> asserts. So when I read a test I like to be able to clearly see what
> is setup code and what are asserts and I find that spies make that
> clearer. Here is another example that I think demonstrates that.
>
> https://gist.github.com/761110
>


>
>
> > Anyway my target is improving my team skills to go near the GOOS
> > design style model.
>
> Where I have seen problems is applying mocks without using the GOOS
> design style.
>
> Brian

Steve Freeman

unread,
Jan 4, 2011, 8:46:39 AM1/4/11
to growing-object-o...@googlegroups.com
On 2 Jan 2011, at 23:05, David Chelimsky wrote:
>> So, what failures are common in this world?
>
> The main problem is that there is a fair amount of code that gets
> generated. Without thorough coverage, refactoring can lead to
> functional failures that are not exposed by tests. So I think the
> generator has a responsibility to generate tests sufficient to cover
> the generated code.
>
> Beyond that responsibility, we want the generated tests to demonstrate
> how to use the tools. And this is where it gets sticky, because it's
> very likely that if we were using the tools to write actual code, that
> the resulting code would look nothing like the code that is actually
> generated :)
>
> OK - maybe that's enough for the 2nd day of the year.

Without seeing this in depth, my only observation is that perhaps the generated code should be more extensible so that it doesn't need to be changed. Then the generated code could Just Work, and the devs can put their effort into their extensions.

I don't really have an answer, but it looks like the tests are revealing tensions in the design :)

S.

David Chelimsky

unread,
Jan 4, 2011, 8:52:17 AM1/4/11
to growing-object-o...@googlegroups.com

There are extension libraries that do that, but I'm dealing with the
code generated by the Rails framework itself, and doubt that I'd have
any influence on that code.

>
> I don't really have an answer, but it looks like the tests are revealing tensions in the design :)

Agreed. I'm actually playing with an idea to improve this situation.
If it goes well, I'll post back w/ info.

>
> S.
>
>

Matt Wynne

unread,
Jan 4, 2011, 9:42:39 AM1/4/11
to growing-object-o...@googlegroups.com

On 4 Jan 2011, at 13:52, David Chelimsky wrote:

> On Tue, Jan 4, 2011 at 7:46 AM, Steve Freeman <st...@m3p.co.uk> wrote:
>> On 2 Jan 2011, at 23:05, David Chelimsky wrote:
>>>> So, what failures are common in this world?
>>>
>>> The main problem is that there is a fair amount of code that gets
>>> generated. Without thorough coverage, refactoring can lead to
>>> functional failures that are not exposed by tests. So I think the
>>> generator has a responsibility to generate tests sufficient to cover
>>> the generated code.
>>>
>>> Beyond that responsibility, we want the generated tests to demonstrate
>>> how to use the tools. And this is where it gets sticky, because it's
>>> very likely that if we were using the tools to write actual code, that
>>> the resulting code would look nothing like the code that is actually
>>> generated :)
>>>
>>> OK - maybe that's enough for the 2nd day of the year.
>>
>> without seeing this in depth, my only observation is that perhaps the generated code should be more extensible so that it doesn't need to be changed. Then the generated could Just Work, and the devs can put their effort into their extensions.
>
> There are extension libraries that do that, but I'm dealing with the
> code generated by the Rails framework itself, and doubt that I'd have
> any influence on that code.

Steve has hit the nail on the head here, I think. There are at least 5 different ways to ask ActiveRecord to fetch a given row of data from the database, and you can't easily write a stub that could simulate all of them at the same time.

The way I've solved this before was to write an abstraction in the tests, delegating the query stub to a single method. As long as all the controllers use a consistent protocol for fetching a single row from the database, then you have a single place to change the tests if you want to change that protocol. So my code would look something like this instead:

https://gist.github.com/762553

I've also extracted the params which were noise, IMO.
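
[Editor's note: Matt's idea of delegating the query stub to a single method in the tests can be sketched roughly as below. This is a hypothetical Python translation using the standard library's unittest.mock, not the code from his gist; all names are invented.]

```python
from unittest.mock import Mock, patch

# Hypothetical Rails-like model; only the row-fetching protocol matters here.
class Product:
    @classmethod
    def find(cls, object_id):
        raise NotImplementedError("would hit the database")

# A controller that fetches a single row and hands it to a view.
class ProductsController:
    def show(self, object_id):
        return {"product": Product.find(object_id)}

def stub_single_row_fetch(model, row):
    # The one place in the tests that knows *how* a row is fetched.
    # If the protocol changes (find vs. get vs. where(...).first),
    # only this helper needs updating.
    return patch.object(model, "find", Mock(return_value=row))

# A test using the abstraction: the stubbing detail is out of sight.
with stub_single_row_fetch(Product, "a product"):
    response = ProductsController().show(42)

assert response == {"product": "a product"}
```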

>
>>
>> I don't really have an answer, but it looks like the tests are revealing tensions in the design :)
>
> Agreed. I'm actually playing with an idea to improve this situation.
> If it goes well, I'll post back w/ info.

Pray tell?

cheers,
Matt

ma...@mattwynne.net
07974 430184

David Chelimsky

unread,
Jan 4, 2011, 10:19:06 AM1/4/11
to growing-object-o...@googlegroups.com

Not until I have something useful :)

Steve Freeman

unread,
Jan 5, 2011, 7:49:01 AM1/5/11
to growing-object-o...@googlegroups.com
On 4 Jan 2011, at 15:42, Matt Wynne wrote:
> Steve has hit the nail on the head here, I think. There are at least 5 different ways to ask ActiveRecord to fetch a given row of data from the database, and you can't easily write a stub that could simulate all of them at the same time.

Not sure it was me, just listen to the tests :)

S.

Matteo Vaccari

unread,
Jan 5, 2011, 8:54:32 AM1/5/11
to growing-object-o...@googlegroups.com

Steve, David,

the intent of Rails, the way I understand it, is to make it so easy to write controllers that you don't mind a bit of duplication.  The idea is that the models and the views provide APIs that are so expressive and powerful that the controller provides little more than wiring, or configuration so to speak.  And configuration code is probably best tested with end-to-end tests.

If we make the controller code more general, in the sense that we close it so that we don't have to test it, then we have defeated the intent of the Rails design.

Which is not to say that the Rails design does not have problems.  The biggest one, IMO, is the reliance on singletons for data access. 

Matteo

diabolo

unread,
Jan 6, 2011, 9:50:24 PM1/6/11
to Growing Object-Oriented Software
I think you might find that's quite difficult to achieve :)

The Rails controller test is a really poor example to use to criticise
mocks, and the improved test is a completely different kind of test.
David's generated test is an attempt to make a unit test part of a
Rails controller; this, I think, is futile, as I'll explain later.
Steve's improved test is closer to an acceptance/integration test. It
has very little to do with the controller and is much more about
exercising the controller-model stack. So all in all the example
provides little insight into the use of mocks.

As to the futility of unit testing controllers (excluding routing):
Rails has a major architectural flaw between the model and the
controller. Because models use inheritance rather than composition to
get their data-saving capabilities, Rails models end up exposing this
functionality to controllers. A more considered approach, fitting in
with Rails' liking of REST, would be to use composition for these
capabilities. By keeping this component hidden from the controller we
could then have just two methods between the model and controller -
getObject and getObjects. With this in place, unit testing the
controllers with mocks would be simple and not at all fragile. Perhaps
if this bit of Rails had been developed using TDD then we wouldn't be
having these problems.
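
[Editor's note: the narrow two-method protocol suggested above could look something like the following minimal sketch. Python and unittest.mock stand in for Ruby since the idea is language-agnostic, and all class and method names are hypothetical.]

```python
from unittest.mock import Mock

# Hypothetical narrow protocol between controller and model layer:
# just the two query methods suggested above.
class ProductRepository:
    def get_object(self, object_id): ...
    def get_objects(self): ...

# A controller that depends only on that narrow protocol.
class ProductsController:
    def __init__(self, repository):
        self.repository = repository

    def show(self, object_id):
        product = self.repository.get_object(object_id)
        return {"template": "show", "product": product}

# The unit test needs exactly one stub, so it is neither verbose
# nor brittle: the controller cannot reach anything else.
repository = Mock(spec=ProductRepository)
repository.get_object.return_value = "a product"

response = ProductsController(repository).show(42)
assert response == {"template": "show", "product": "a product"}
```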

I'm sure you all know Jamis Buck's "Fat Model, Skinny Controller" blog
post. I think you have to do this in Rails: forget about unit testing
controller actions, and exercise the controllers through features or
maybe integration tests. If you try to test controllers by mocking
find you will inevitably end up with brittle code that intrudes into
areas that are none of its business. But this is not the fault of
mocking; it is a fault of Rails.

All best

Andrew

Matt Wynne

unread,
Jan 7, 2011, 6:15:47 AM1/7/11
to growing-object-o...@googlegroups.com

I agree. I am leaning more and more towards using InheritedResources for my controllers in Rails, which means they have almost no bespoke code in them at all, and I can then concentrate on the domain model objects having the logic to process the parameters they might receive from web requests. The problem then is that you end up with Obese Models which ignore the SRP. Next step is to factor out those responsibilities from the main model objects themselves and leave them as facades that delegate work to other classes.

I haven't had a chance to try them in earnest, but the newer ORMs like MongoMapper do use composition instead of inheritance, and I'd imagine they help enormously with building a proper domain model in app/models, rather than just a database persistence layer.

> I'm sure you all know Jamis Bucks, Fat Model, Skinny Controller blog
> post. I think you have to do this in rails, and forget about unit
> testing controller actions, and exercise the controllers through
> features or maybe integration tests. If you try and test controllers
> by mocking find you will inevitably end up with brittle code that is
> intruding into areas that are none of its business. But this is not
> the fault of mocking it is a fault of Rails.

I still find some value in simple controller unit tests to test, for example, what to do when there's an XHR request vs what to do when there's an HTML request, or testing a flash message. But I think you're right about the overall root cause.

>
> All best
>
> Andrew

Steve Freeman

unread,
Jan 7, 2011, 7:22:09 AM1/7/11
to growing-object-o...@googlegroups.com
On 7 Jan 2011, at 03:50, diabolo wrote:
> [...]

> I'm sure you all know Jamis Bucks, Fat Model, Skinny Controller blog
> post. I think you have to do this in rails, and forget about unit
> testing controller actions, and exercise the controllers through
> features or maybe integration tests. If you try and test controllers
> by mocking find you will inevitably end up with brittle code that is
> intruding into areas that are none of its business. But this is not
> the fault of mocking it is a fault of Rails.

Listen to the tests! (again) :)

S.


Derek Greer

unread,
Jan 8, 2011, 4:36:45 PM1/8/11
to Growing Object-Oriented Software
Hey guys. First, I've found the book to be a good read. I've also
found this thread to be extremely interesting, though it seems to have
a low signal-to-noise ratio.

At various points, the topic has touched on state-based testing versus
behavior-based testing, what kind of double is preferred when you need
behavior-based testing, the stylistic order in which behavior
assertion occurs within the test, strict-mocking vs. loose-mocking and
several other things. Of the topics discussed thus far, the only one
that really seems substantive is the topic of strict-mocking. Style
and nomenclature aside, Brian Swan's position seems to be advocating
against encoding the interaction between the SUT and its
collaborators when that interaction isn't the subject of the test,
while Steve Freeman's position seems to be advocating the use of
strict-mocking as a feedback mechanism for being alerted that the
original context of the test may have changed and needs to be
revisited. From reading the book and watching Brian's presentation,
it seems both advocate using real objects when it doesn't concern the
subject of the test. Please correct me if I've misrepresented either
position or have otherwise grossly oversimplified the matter.

To these distinctions I say there are pros and cons to be gained by
both approaches and I think, like Nat's excellent points concerning
the validity of viewing things from a "Mockist TDD vs Statist TDD"
perspective, I likewise see room for both positions to be held
depending on the nature of the system you're developing ... maybe :)

Regardless, some of the other points seem to have more to do with
style or circumstance (i.e. the tools you're using) than a substantive
difference in approach as it pertains to the maintainability of the
system.

To split hairs a bit, Meszaros defines a Test Spy to be a test double
used for behavior verification of indirect outputs where the test, not
the double, performs assertions to verify that the expected
interaction occurred and he defines a Mock as a double which is
configured with expectations where the double, not the test, performs
assertions to verify that the expected interaction occurred.
Personally, I prefer the Context/Specification BDD style which,
similar to the AAA style, moves all observations to the end of the
test. For behavior verification, I usually use a mocking framework to
create the test double and explicitly verify that a particular
interaction occurred in an observation. Based on Meszaros' definition
this is still a mock because, while my test is explicitly rather than
implicitly causing the mock to verify specific expectations were met,
it's still the mock that's doing the assert, not the test. Of course,
semantically it's still very much like a Test Spy.

I bring this up, not to be pedantic, but only to illustrate that
technically the preference one has over where the assertion comes
isn't really a distinction between mocking and stubbing, but has more
to do with style and tooling.
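
[Editor's note: the spy-like style Derek describes, where the double records interactions and a trailing observation verifies them, can be illustrated with Python's unittest.mock, whose doubles work exactly this way. All names below are hypothetical.]

```python
from unittest.mock import Mock

def notify(mailer, user):
    # The indirect output we want to verify: a message
    # sent to a collaborator.
    mailer.send_welcome(user)

# arrange: unittest.mock doubles record calls rather than holding
# up-front expectations, so verification happens spy-style
mailer = Mock()

# act
notify(mailer, "alice")

# observe: the double did the recording; the test does the asking
mailer.send_welcome.assert_called_once_with("alice")
```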

Similarly, my experience has been comprised exclusively with using
loose mocks, so that's the technique I'm most comfortable with, but
any hazards of using strict mocks aren't a "Mocks Suck" or a "Mocks
Rule" issue, but a question of which kind of mocking you are talking
about, and moreover in what circumstances.

Anyway, just thought I'd share my perspective on the discussion for
what it's worth.


Derek Greer
http://aspiringcraftsman.com
http://twitter.com/derekgreer






Jamie

unread,
Jan 9, 2011, 8:03:21 AM1/9/11
to Growing Object-Oriented Software
Nat: Yes,

That's what I meant to say.

(This is not aimed at you, Brian.) Extreme debates are all about
identity and ego and finding a reason to pat yourself on the back.
Fighting, feeding and fucking are what make us feel good. It's harder
- both emotionally and in terms of preparation - to present a balanced
talk. And controlling emotions and preparing well are really what
engineering's all about. My whole life is about unraveling zealots.
It pays well, but it's boring. I recommend a crash course in
Cognitive Dissonance for all engineers.

If you know you have a blunt instrument and use it anyway... yes. We
call that very naughty.

J

On Dec 31 2010, 5:24 pm, Nat Pryce <nat.pr...@gmail.com> wrote:
> On 31 December 2010 14:40, Brian Swan <bgs...@gmail.com> wrote:
>
> > it's unfortunate that I had to use such a blunt instrument to
> > get people to notice and to have this discussion.
>
> You didn't.  The discussion has been going on for over ten years.  You
> could have just joined in.
>
> --Nat
>
> --http://www.natpryce.com

Jamie

unread,
Jan 9, 2011, 7:57:40 AM1/9/11
to Growing Object-Oriented Software
Brian,

Was it an error of judgment to 'use such a blunt instrument'? I
agree with Nat in terms of the state of teaching and debate. To me,
we are an adolescent community, looking for closure where there is
none, for models that are too simple. I know dissent is a good
thing, useful, but in taking such a strong stance you may alienate the
audience. What did that dude say about doubt and tolerance? I think
I may be suggesting that if you wanted to create discussion, there are
other ways of going about it. (This thread doesn't count because we
are all interested in TDD - it's the outliers we need to reach.) That
being said, well done for having a crack.

On Dec 31 2010, 3:40 pm, Brian Swan <bgs...@gmail.com> wrote:
> On Dec 31, 12:58 am, Nat Pryce <nat.pr...@gmail.com> wrote:
>
> > Outside that context any tool becomes less helpful
> > and, far enough away from the sweet spot, a hindrance. We wanted to
> > let the readers understand enough about the contexts we have
> > experienced that they could hopefully map them to their own contexts
> > and make informed decisions as to which tools are appropriate for
> > them.
>
> Unfortunately what I've seen in many of the teams I've worked with is
> a much less nuanced approach to using mock objects than that presented
> in the GOOS book, similar to the pattern explosion that is often seen
> after reading the GoF book.
>
> > Anything else is, I think, a disservice to the audience and just
> > continues the parlous state of debate and education the industry in
> > which technologies are described almost like fundamentalist religious
> > beliefs instead of in terms of engineering trade-offs.
>
> I agree, it's unfortunate that I had to use such a blunt instrument to
> get people to notice and to have this discussion. I'd really like to
> see more tools and techniques being upfront about their sweet spot/
> trade offs, it's what first attracted me to both XP and Ruby on Rails.
>
> Brian

philip schwarz

unread,
Jan 9, 2011, 6:52:07 PM1/9/11
to Growing Object-Oriented Software
Derek,

you said:

"Steve Freeman's position seems to be advocating the use of
strict-mocking as a feedback mechanism for being alerted that the
original context of the test may have changed and needs to be
revisited. From reading the book and watching Brian's presentation,
it seems both advocate using real objects when it doesn't concern the
subject of the test."

Have you read the following sections of GOOS:

#1 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Chapter 20 - Listening to the Tests
...
Section: Too Many Expectations
...
Page: 243
...
We can make our intentions clearer by distinguishing between stubs, simulations of real behavior that help us get the test to pass, and expectations, assertions we want to make about how an object interacts with its neighbors.
...
Subsection: Write Few Expectations:
A colleague, Romilly Cocking, when he first started working with us, was surprised by how few expectations we usually write in a unit test. Just like “everyone” has now learned to avoid too many assertions in a test, we try to avoid too many expectations. If we have more than a few, then either we’re trying to test too large a unit, or we’re locking down too many of the object’s interactions.

#2 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Chapter 24 Test Flexibility
...
Section: Precise Expectations
...
Subsection: Allowances and Expectations
...
Page: 277
...
jMock insists that all expectations are met during a test, but allowances may be matched or not. The point of the distinction is to highlight what matters in a particular test. Expectations describe the interactions that are essential to the protocol we’re testing: if we send this message to the object, we expect to see it send this other message to this neighbor. Allowances support the interaction we’re testing. We often use them as stubs to feed values into the object, to get the object into the right state for the behavior we want to test. We also use them to ignore other interactions that aren’t relevant to the current test.
...
The distinction between allowances and expectations isn’t rigid, but we’ve found that this simple rule helps:
###############################################################
ALLOW QUERIES; EXPECT COMMANDS
COMMANDS are calls that are likely to have side effects, to change the world outside the target object. When we tell the auditTrail above to record a failure, we expect that to change the contents of some kind of log. The state of the system will be different if we call the method a different number of times.
QUERIES don’t change the world, so they can be called any number of times, including none. In our example above, it doesn’t make any difference to the system how many times we ask the catalog for a price.
###############################################################

The rule helps to decouple the test from the tested object. If the implementation changes, for example to introduce caching or use a different algorithm, the test is still valid. On the other hand, if we were writing a test for a cache, we would want to know exactly how often the query was made.

#3 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
On pages 242-243 they give an example of a test with too many
expectations:

one(firstPart).isReady(); will(returnValue(true));
one(organizer).getAdjudicator(); will(returnValue(adjudicator));
one(adjudicator).findCase(firstParty, issue); will(returnValue(case));
one(thirdParty).proceedWith(case);
one(thirdParty).proceedWith(case);

and rewrite it as follows to highlight that the first three
expectations are stubs (i.e. they are there to get the test through to
the interesting part of the behaviour):

allowing(firstPart).isReady(); will(returnValue(true));
allowing(organizer).getAdjudicator(); will(returnValue(adjudicator));
allowing(adjudicator).findCase(firstParty, issue); will(returnValue(case));
one(thirdParty).proceedWith(case);
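
[Editor's note: the allow-queries/expect-commands rule carries over to other mocking tools. Below is a rough sketch in Python with unittest.mock, echoing the book's catalog/auditTrail example; the function and method names are assumptions, not the book's code.]

```python
from unittest.mock import Mock

def audit_price_check(catalog, audit_trail, entry):
    price = catalog.get_price(entry)       # query: may run any number of times
    if price is None:
        audit_trail.record_failure(entry)  # command: has a side effect

catalog = Mock()
catalog.get_price.return_value = None      # allowance: a stubbed query
audit_trail = Mock()

audit_price_check(catalog, audit_trail, "entry-1")

# expectation: the command must have happened exactly once...
audit_trail.record_failure.assert_called_once_with("entry-1")
# ...while the test says nothing about how often the query was made,
# so introducing caching would not break it.
```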

Derek Greer

unread,
Jan 9, 2011, 8:24:33 PM1/9/11
to growing-object-o...@googlegroups.com
My apologies for my comments showing up twice (once here and once in a newly created thread entitled Mocks Suck Reloaded). After I submitted the original reply it didn't show up for quite some time, so I thought it might have something to do with the thread being renamed or something.