what about testing dependencies are called correctly?


philip schwarz

Apr 18, 2011, 3:48:02 PM
to Growing Object-Oriented Software
Hi all, yesterday I saw the following exchange on twitter:

jasongorman Tests should be strong on the "what" and weak on the "how"
samwessel @jasongorman what about testing dependencies are called
correctly? That's kinda the "how" - collaborators in the CRC sense.
jasongorman @samwessel The short answer to that is that you shouldn't
really ;)
samwessel @jasongorman Interesting. For example @jbrains advocates
unit testing the messages between objects, which seems at odds with
this advice.
jasongorman @samwessel Interaction testing, you mean?
samwessel @jasongorman ah, it has a name :) usually by verifying a
call to a mock dependency. I find it v useful, but others argue it tests
"how"/impl.
samwessel @jasongorman folks also argue it leads to brittle tests if
you change how objects interact. I tell them this is good - tests
design not impl
jasongorman @samwessel I think sometimes folk advocate techniques to
suit tools
jasongorman @samwessel But if code structure emerges in the classic TDD
style, then those method calls are products of refactoring
jasongorman @samwessel Specifically, how do you test-drive methods
that just call other methods ;)
samwessel @MatthewPCooke Indeed, but if design changes, these tests
break, and @jasongorman says we should always try to avoid brittle
tests.
jasongorman @samwessel The reality is somewhere in between those two
schools, of course :)
jasongorman @samwessel No, it's perfectly valid in cases where there
are pre-conditions being enforced. Alas, mocks are mostly abused.
jasongorman @samwessel Brittle tests are brittle tests; regardless of
the cause, they're a bad thing
jasongorman @samwessel @MatthewPCooke Like I said, the reality is
somewhere between the two schools. I may lean more towards one side ;)
samwessel @jasongorman The problem is I agree with both :)
MatthewPCooke @samwessel The two methods are not contradictory, the
testing is usually at different levels. IMO this book is good:
http://bit.ly/aq82bA
jbrains @jasongorman @samwessel When I check collaborations between
objects, I get feedback on how complicated they are. I like that.
jbrains @jasongorman @samwessel If a method only delegates, then I
don't test it: it's too simple to break.

The book mentioned by MatthewPCooke is GOOS.

I am going to have a go at identifying GOOS content that may help
answer the issues debated in the tweets above. Here goes:

In Chapter 6 "Object-Oriented Style", GOOS says (p47): "We value code
that is easy to maintain over code that is easy to write" (as the
Agile Manifesto might have put it), and then shows what it thinks is
"good" OO design, which helps in writing code that can easily grow and
adapt to meet users' changing needs.

Showing what good design is relies on the definitions of
Encapsulation and Information Hiding:

"Encapsulation: Ensures that the behavior of an object can only be
affected through its API. It lets us control how much a change to one
object will impact other parts of the system by ensuring that there
are no unexpected dependencies between unrelated components."

"Information hiding: Conceals how an object implements its
functionality behind the abstraction of its API. It lets us work with
higher abstractions by ignoring lower-level details that are unrelated
to the task at hand."

Armed with these definitions, GOOS says that one element of good OO
design is deciding what an object's Peers are, and what its Internals
are:

"As we organize our system, we must decide what is inside and outside
each object, so that the object provides a coherent abstraction with a
clear API. Much of the point of an object...is to encapsulate access
to its internals through its API and to hide these details from the
rest of the system. An object communicates with other objects in the
system by sending and receiving messages. The objects it communicates
with directly are its peers...This decision matters because it affects
how easy an object is to use, and so contributes to the internal
quality of the system. If we expose too much of an object’s internals
through its API, its clients will end up doing some of its work. We’ll
have distributed behavior across too many objects (they’ll be coupled
together), increasing the cost of maintenance because any changes will
now ripple across the code."

GOOS then "categorizes an object’s peers (loosely) into three types of
relationship. An object might have":

"Dependencies: Services that the object requires from its peers so it
can perform its responsibilities. The object cannot function without
these services. It should not be possible to create the object without
them."

"Notifications: Peers that need to be kept up to date with the
object’s activity. The object will notify interested peers whenever it
changes state or performs a significant action."

"Adjustments: Peers that adjust the object’s behavior to the wider
needs of the system. This includes policy objects that make decisions
on the object’s behalf (the Strategy pattern in [Gamma94]) and
component parts of the object if it’s a composite. "
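Read in code, the three categories tend to show up in different places in a class. A minimal sketch — all names here are invented for illustration, not taken from the book:

```java
import java.util.ArrayList;
import java.util.List;

interface PriceSource { int currentPrice(); }          // dependency: a required service
interface BidListener { void bidPlaced(int amount); }  // notification: kept up to date
interface BidPolicy  { int nextBid(int price); }       // adjustment: tunes behaviour

class Bidder {
    private final PriceSource prices;   // cannot function without this
    private final BidPolicy policy;     // adjusts behaviour to the wider system's needs
    private final List<BidListener> listeners = new ArrayList<>();

    Bidder(PriceSource prices, BidPolicy policy) {  // dependencies: passed at creation,
        this.prices = prices;                       // so the object can't exist without them
        this.policy = policy;
    }

    void addListener(BidListener listener) { listeners.add(listener); }

    int bid() {
        int amount = policy.nextBid(prices.currentPrice());
        for (BidListener l : listeners) l.bidPlaced(amount);  // notify interested peers
        return amount;
    }
}
```

Note how the categories map onto mechanics: dependencies arrive through the constructor, notifications are registered by whoever is interested, and adjustments could just as well have a sensible default.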

Other than by titling the section describing peer objects "Internals
vs. Peers", in this chapter GOOS doesn't actually define what an
object's Internals are (it defers this to Chapter 7). Although it
seems fairly obvious from the title that the Internals of an object
are those collaborators that are not its Peers, I find it confusing
that earlier on, Peers were defined as "the objects it communicates
with directly", because that suggests that an object does not
communicate with its Internals directly, and I don't understand what
that means: aren't Internals collaborators? Doesn't an object
communicate with all collaborators in the same way: by sending them
messages?

Another important element of good OO design is the rule of "Context
Independence" which helps one decide whether an object hides too much
information:

"A system is easier to change if its objects are context-independent;
that is, if each object has no built-in knowledge about the system in
which it executes. This allows us to take units of behavior (objects)
and apply them in new situations. To be context-independent, whatever
an object needs to know about the larger environment it’s running in
must be passed in....Context independence guides us towards coherent
objects that can be applied in different contexts, and towards systems
that we can change by reconfiguring how their objects are composed."
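In code, "whatever an object needs to know about the larger environment must be passed in" rules out digging collaborators out of the environment. A hypothetical before/after sketch — the `Registry` locator and all other names are invented here:

```java
// Context-DEPENDENT (shown only as a comment): the object digs its collaborator
// out of a global registry, so it carries built-in knowledge of the system.
//
// class Auditor {
//     void record(String event) {
//         Registry.instance().lookup(Logger.class).log(event);
//     }
// }

// Context-INDEPENDENT: everything the object needs is passed in,
// so the same unit of behaviour can be composed into any system.
interface Logger { void log(String message); }

class Auditor {
    private final Logger logger;
    Auditor(Logger logger) { this.logger = logger; }  // context passed in
    void record(String event) { logger.log("AUDIT: " + event); }
}
```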

In chapter 6, GOOS has outlined the design principles used to "find
the right boundaries for an object so that it plays well with its
neighbors—a caller wants to know what an object does and what it
depends on, but not how it works."

In Chapter 7 (Achieving OO Design), GOOS says: "We also want an object
to represent a coherent unit that makes sense in its larger
environment. A system built from such components will have the
flexibility to reconfigure and adapt as requirements change."

In this chapter GOOS makes an important distinction between Interface
and Protocol: "...an interface describes whether two components will
fit together, while a protocol describes whether they will work
together....In languages such as Java, we can use interfaces to define
the available messages between objects, but we also need to define
their patterns of communication—their communication protocols. We do
what we can with naming and convention, but there’s nothing in the
language to describe relationships between interfaces or methods
within an interface, which leaves a significant part of the design
implicit."
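A hypothetical illustration of the distinction: the Java interface below says which messages exist, but nothing in the language says that `open` must precede `write` or that `close` must come last — that protocol stays implicit in the client code.

```java
// The interface describes whether two components will FIT together...
interface OutputDevice {
    void open();
    void write(String data);
    void close();
}

// ...but the protocol -- open, then writes, then close -- describing whether
// they will WORK together is only visible in how a client talks to its peer.
class ReportWriter {
    private final OutputDevice device;
    ReportWriter(OutputDevice device) { this.device = device; }

    void publish(String report) {
        device.open();         // protocol step 1
        device.write(report);  // protocol step 2
        device.close();        // protocol step 3
    }
}
```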

And now the bit of information that is relevant to the twitter
conversation we saw above:

"We use TDD with mock objects as a technique to make these
communication protocols visible, both as a tool for discovering them
during development and as a description when revisiting the code...TDD
with mock objects also encourages information hiding. We should mock
an object’s peers—its dependencies, notifications, and adjustments...
not its internals. Tests that highlight an object’s neighbors help us
to see whether they are peers, or should instead be internal to the
target object. A test that is clumsy or unclear might be a hint that
we’ve exposed too much implementation, and that we should rebalance
the responsibilities between the object and its neighbors."
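The idea can be sketched even without a mocking library: a hand-rolled mock (standing in for a jMock/Mockito expectation) lets the test state the expected conversation with a peer, which is exactly the "communication protocol made visible" point. All names below are hypothetical, not from the book:

```java
import java.util.ArrayList;
import java.util.List;

interface AuctionEvents { void itemWon(String item); }  // a peer: a notification

class Sniper {
    private final AuctionEvents events;
    Sniper(AuctionEvents events) { this.events = events; }
    void priceAccepted(String item) { events.itemWon(item); }
}

// Hand-rolled mock: records the messages received and can verify them,
// playing the role a mocking framework's expectations would play.
class MockAuctionEvents implements AuctionEvents {
    final List<String> received = new ArrayList<>();
    public void itemWon(String item) { received.add(item); }
    void verifyItemWonFor(String item) {
        if (!received.equals(List.of(item)))
            throw new AssertionError("expected itemWon(" + item + ") but got " + received);
    }
}
```

The test mocks the Sniper's peer, not any internal data structure the Sniper might use; if setting up this expectation felt clumsy, that would be the hint that the neighbour should be internal instead.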

So it seems to me that GOOS balances the amount of Information Hiding
it practices so that it can both achieve a healthy degree of Context
Independence and still specify the communication protocols that
describe how objects work together.

Am I right? Is there more? This is just my first impression having
simply re-read the quoted passages from chapters 6 and 7.

If you have got this far, thank you for your patience. I hope you
found this useful.

Matt Wynne

Apr 18, 2011, 7:24:02 PM
to growing-object-o...@googlegroups.com

J. B. Rainsberger

Apr 18, 2011, 9:10:17 PM
to growing-object-o...@googlegroups.com
On Mon, Apr 18, 2011 at 15:48, philip schwarz
<philip.joh...@googlemail.com> wrote:

> So it seems to me that GOOS balances the amount of Information Hiding
> it practices in order to both allow it to achieve a healthy degree of
> Context Independence, and to allow it to specify the communication
> protocols that describe how objects work together.
>
> Am I right? Is there more? This is just my first impression having
> simply re-read the quoted passages from chapters 6 and 7.
>
> If you have got this far, thank you for your patience. I hope you
> found this useful.

I spent the afternoon today teaching about context independence, with
the simple and clear example of Enterprise Java framework extension
points. The Enterprise Java literature still preaches the use of
Service Locator, which ratchets up the context /dependence/ to 11.
Instead, I teach the "ring" architecture pattern, just as Steve and
Nat describe it in the book -- Great Minds, and all that -- which
encourages designing objects that know nothing about their runtime
environment and precious little about where data comes from or where
it goes. I don't even have to teach the ring: you can stick with the
Four Elements of Simple Design and try to test-drive with mock objects
and you'll discover it.
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Diaspar Software Services :: http://www.diasparsoftware.com
Author, JUnit Recipes
2005 Gordon Pask Award for contribution to Agile practice :: Agile
2010: Learn. Practice. Explore.

Rick Pingry

Apr 19, 2011, 2:53:56 PM
to Growing Object-Oriented Software
So it sounds like, in order to have this distinction between peers and
internals, you are talking about layers of abstraction? Can you say
that an object and its peers are all at the same layer of abstraction,
and that the object's internals are at a lower layer of abstraction?
Can you then assert certain rules about what makes an internal versus
a peer? Specifically, would it be valid to say that an internal's only
peers are the object instance it is an internal of and the other
internals of that object instance?

It sounds like, to have this kind of organization, this kind of class
hierarchy, you kind of have to have a pretty good design up front, at
least for a given level of abstraction, like CRC cards before you
start coding. Is this OK? I thought TDD was supposed to
help you drive this design rather than having you do it up front.

Or is it that while developing the implementation of each level of
abstraction, that you are doing TDD for the next level down? Or
perhaps it is in the course of refactoring? How do you know when you
are ready to break out a new level of abstraction?

On another note ...

"We value code that is easy to maintain over code that is easy to
write ..."

hmph. that makes me sad. Why does it have to be one way or the
other? If code is not easy to write, but it is supposed to make it
easier "in the future", how do you know you are not gold-plating? I
have a hard time selling this idea, even to myself, much less trying
to explain it to other people. Can't we come up with a methodology
that is right and helpful NOW, not just in the future?

Nat Pryce

Apr 19, 2011, 6:03:17 PM
to growing-object-o...@googlegroups.com
On 19 April 2011 19:53, Rick Pingry <rpi...@gmail.com> wrote:
> hmph.  that makes me sad.  Why does it have to be one way or the
> other?  If code is not easy to write, but it is supposed to make it
> easier "in the future", how do you know you are not gold-plating?  I
> have a hard time selling this idea, even to myself, much less trying
> to explain it to other people.  Can't we come up with a methodology
> that is right and helpful NOW, not just in the future?

We're not saying that it cannot be both. In fact, we hope that the
book demonstrates that it CAN be both.

What we mean is that if we have to make a choice -- maybe when
evaluating technologies -- we'll pick the sustainable approach and
work to achieve simplicity, rather than build on something that gets
you somewhere fast but is not sustainable in the long term. (I'm sure
we can all name a few technologies like that.)

--Nat

--
http://www.natpryce.com

Nat Pryce

Apr 19, 2011, 6:04:58 PM
to growing-object-o...@googlegroups.com
On 19 April 2011 19:53, Rick Pingry <rpi...@gmail.com> wrote:
> So it sounds like, in order to have this distinction between peers and
> internals, that you are talking about layers of abstraction?  Can you
> say that an object and its peers are all at the same layer of
> abstraction, and that the object's internals are at a lower layer of
> abstraction?

Not necessarily, because object structures can be self-similar --
internal parts can have the same type as the whole.

--
http://www.natpryce.com

Matteo Vaccari

Apr 20, 2011, 7:08:35 AM
to growing-object-o...@googlegroups.com
On Wed, Apr 20, 2011 at 12:03 AM, Nat Pryce <nat....@gmail.com> wrote:
On 19 April 2011 19:53, Rick Pingry <rpi...@gmail.com> wrote:
> hmph.  that makes me sad.  Why does it have to be one way or the
> other?  If code is not easy to write, but it is supposed to make it
> easier "in the future", how do you know you are not gold-plating?  I
> have a hard time selling this idea, even to myself, much less trying
> to explain it to other people.  Can't we come up with a methodology
> that is right and helpful NOW, not just in the future?

We're not saying that it cannot be both.  In fact, we hope that the
book demonstrates that it CAN be both.

I read "We value code that is easy to maintain over code that is easy to
write ..." as "you can write maintainable code, but it will take work to
learn how to do it well" :-)  So it's not "easy" in the sense that I can't
just wish for GOOS-level skills, but I have to work for it.   :-)

Matteo

Rick Pingry

Apr 20, 2011, 5:01:48 PM
to Growing Object-Oriented Software
JBrains,

My partner and I were just looking at a video you made a while back
about integration tests being snake oil. The GOOS book of course
talks about Acceptance Tests, but perhaps you are making a
distinction between acceptance tests and integration tests. I
bring it up in this thread because I think it is relevant. In the talk
you take the approach that you should mock ALL collaborators. In a
bit of code we wrote recently, we did that very thing, but we find
that making changes to how the thing works is hard. Refactoring becomes
harder. (I wrote about this before and got lots of great advice from
you guys, but I think I understand better about what is going on now
so I can speak <a little> more intelligently about it). The tests
become glue that makes any kind of change to HOW a class is
implemented difficult if you ever want to extract an internal. The
GOOS book and this thread talk about a difference between peers and
internals, and I get the impression that you should mock the peers and
not mock the internals. I am not so sure now after hearing your talk
about that. Am I missing something? If you are mocking out every
collaboration between every class in your system, how do you refactor
anything without breaking tests? Are you supposed to be able to
refactor without breaking tests? Could you provide an example of how
you do that?

Thanks very much,
-- Rick

Rick Pingry

Apr 20, 2011, 5:25:59 PM
to Growing Object-Oriented Software
Oh, wait ... Before I really stick my foot in my mouth ...

I actually watched the video quite a while ago. My partner and I were
just talking about it. I decided to watch it again myself (it's
amazing how you get new things out of stuff like this after a year of
trying to do it). I am just starting it and already I can see you
called it "Integration Tests Are a Scam", and you are defining for me
what an integration test is, etc... So don't worry about that, I want
to digest it again.

I am hoping I get a better idea of how this distinction between
internals and peers helps me in knowing what tests I need to write for
a given class, and the video probably will help with that too. I will
reply again once I digest it better. Chip in if you feel the urge ;)

Thanks
-- Rick

Steve Freeman

Apr 20, 2011, 5:46:25 PM
to growing-object-o...@googlegroups.com
On 19 Apr 2011, at 19:53, Rick Pingry wrote:
> So it sounds like, in order to have this distinction between peers and
> [...]

> internal of and the other internals of that object instance?

I got lost in the middle of that last sentence. One way to look at it is in terms of lifecycle. Internal objects are created and released by the owner. If an object needs to be passed in, then it's probably a collaborator. Of course, that begs the question of what the lifecycle /should/ be.
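Steve's lifecycle heuristic is visible directly in a class's shape; a hypothetical sketch (names invented here):

```java
import java.util.ArrayList;
import java.util.List;

interface Ledger { void post(int amount); }  // peer: passed in from outside

class Account {
    private final Ledger ledger;                             // collaborator: injected
    private final List<Integer> entries = new ArrayList<>(); // internal: created and
                                                             // released by the owner
    Account(Ledger ledger) { this.ledger = ledger; }

    void deposit(int amount) {
        entries.add(amount);  // nobody outside ever sees the entries list...
        ledger.post(amount);  // ...but the conversation with the peer is public protocol
    }

    int balance() {
        return entries.stream().mapToInt(Integer::intValue).sum();
    }
}
```

In a test we would use a test double for `Ledger`, but never for the entries list: the list lives and dies with its owner.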

> It sounds like to have this kind of organization, this kind of class

> [...]


> help you drive this design rather than having you do it up front.

we find we can do this incrementally, and that part of the TDD process involves the discovery of internal and collaborating objects. Of course, we have to have some design sense while we're doing it. It's not automatic.

> Or is it that while developing the implementation of each level of
> abstraction, that you are doing TDD for the next level down? Or
> perhaps it is in the course of refactoring? How do you know when you
> are ready to break out a new level of abstraction?

I tend to think outside-in rather than top-down. I'm not sure I can explain the rules for breaking out abstractions except some experience and looking for complexities in the code and tests.

> "We value code that is easy to maintain over code that is easy to
> write ..."
>
> hmph. that makes me sad. Why does it have to be one way or the
> other? If code is not easy to write, but it is supposed to make it
> easier "in the future", how do you know you are not gold-plating? I
> have a hard time selling this idea, even to myself, much less trying
> to explain it to other people. Can't we come up with a methodology
> that is right and helpful NOW, not just in the future?

First, the two are not exclusive. It's a matter of prioritisation. What I do know is that code is read far more often than it's changed, and that I don't believe I've really understood the problem until the code is readable. I've seen design flaws become obvious by getting the name right. I've seen team after team crippled by code that slipped out of maintainability one line at a time.

S.

Philip Schwarz

Apr 20, 2011, 5:54:45 PM
to growing-object-o...@googlegroups.com
Hi JBrains,

"The Enterprise Java literature still preaches the use of Service Locator, which ratchets up the context /dependence/ to 11." LOL. That sentence will come in handy. Thanks.

About sticking with the 4 elements of simple design... I just remembered you singling out two of them as axioms from which the SOLID theorems can be derived. Oh yes, that's it, a couple of days ago I was watching your InfoQ presentation "Integration Tests Are a Scam" http://www.infoq.com/presentations/integration-tests-scam and 1hr 21 min 15 sec into the presentation, someone asked a question on SOLID, and you said the following:

###########################################
"Part of what I am saying here reminds him of the SOLID principles, from robert martin's clean code book, and before that his ASD PPP book, and I actually take it one step further, I go right back to Kent Beck's four elements of simple design: pass the tests, minimise duplication, maximise clarity, all those things being equal: be small

Now, passing the tests is like breathing, so we don't need to do that anymore, we all do TDD and if we don't we wish we did, so I don't have to remember to pass the tests, and I have never seen a codebase that has little duplication and high clarity, but was unnecessarily big, so we throw away that rule too, that's useless, so we are left with two rules: remove duplication, and fix bad names.

I think you can take Bertrand Russel's (with all due respect to him), I think you can take his OOSC book, and replace it with remove duplication and fix bad names, and it is the same book, and I think that, I literally think that you can take everything in Clean Code, and reduce it down to remove duplication and fix bad names, so you .... remove duplication and fix bad names, and none of that stuff needs to be there any more, so why do we have a book like clean code...although you can take all those books and replace them with remove duplication and fix bad names, if I just say that, you are going to be reinventing all these theories that we have learned, you'll be like mathematician ... who never remembered theorems, who always proved everything from first principles... so the SOLID principles are the theorems, remove duplication and fix bad names are the axioms, to me they are the axioms of modular design, and whether or not you use objects is the parallel postulate, and you either get OO design, or you get modular structured design, which is why I don't even refer to it as OO design any more, I prefer to call it modular design, because that is all I really care about, these are the rules of modularity. " 

###########################################

Looking forward to learning more from you. An impertinent question: are you actually working on a book? (I think in the presentation you said one day you may write (another) one.)

Philip

philip schwarz

Apr 20, 2011, 6:14:15 PM
to Growing Object-Oriented Software
JBrains,

I think Kent Beck's "Lots of Little Pieces" is also important. Is it
an axiom? Or is it subsumed by the "be small" axiom? I look at the
following section of Kent Beck's "Smalltalk Best Practice Patterns" as
the very first steps of a train of deduction that takes us from the
axioms of DRY and "Lots of Little Pieces" to the theorems of modular
design:

"There are a few things I look for that are good predictors of whether
a project is in good shape. These are also properties I strive for in
my own code.

* Once and only once - If I only have one minute to describe good
style, I reduce it to a simple rule: In a program written with good
style, everything is said once and only once. This isn't much help in
creating good code, but it's a darned good analytic tool. If I see
several methods with the same logic, several objects with the same
methods, or several systems with similar objects, I know this rule
isn't satisfied. This leads us to the second property:

* Lots of little pieces - Good code invariably has small methods and
small objects. Only by factoring the system into many small pieces of
state and function can you hope to satisfy the "once and only once"
rule. I get lots of resistance to this idea, especially from
experienced developers, but no one thing I do to systems provides as
much help as breaking it into more pieces. When you are doing this,
however, you must always be certain that you communicate the big
picture effectively. Otherwise, you'll find yourself in a big bowl of
"Pasta a la Smalltalk," which is every bit as nasty as a dish of
"Fettuccine a la C."

* Replacing objects - Good style leads to easily replaceable objects.
In a really good system, every time the user says "I want to do this
radically different thing," the developer says, "Oh, I'll have to make
a new kind of X and plug it in." When you can extend a system solely
by adding new objects without modifying any existing objects, then you
have a system that is flexible and cheap to maintain. You can't do
this if you don't have lots of little pieces.

* Moving Objects - Another property of systems with good style is that
their objects can be easily moved to new contexts. You should be able
to say. "This object in this system does the same job in that
system. ... Then, if you have a system built with lots of little
pieces, you will be able to make the necessary modifications and
generalizations fairly easily.

* Rates of Change - A simple criterion I use all the time is checking
rates of change. I learned this criterion from something Brad Cox said
a long time ago. I've since generalized it to - don't put two rates of
change together. Don't have part of a method that changes in every
subclass with parts that don't change. Don't have some instance
variables whose value changes every second in the same object with
instance variables whose values change once a month. Don't have a
collection where some elements are added and removed every second and
some elements are added and removed once a month. Don't have code in
an object that has to change for every piece of hardware, and code
that has to change for every operating system. How do you avoid this
problem? You got it, lots of little pieces."
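Beck's "rates of change" criterion can be sketched in code; a hypothetical example (the names are invented here, not Beck's):

```java
// Smell (mixed rates of change, shown only as a comment): state that changes
// every second sitting next to state that changes once a month.
//
// class Market { String exchangeName; int lastTradePrice; }

class MarketIdentity {                    // changes rarely (~monthly)
    final String exchangeName;
    MarketIdentity(String exchangeName) { this.exchangeName = exchangeName; }
}

class Ticker {                            // changes every second
    private int lastTradePrice;
    void trade(int price) { lastTradePrice = price; }
    int lastTradePrice() { return lastTradePrice; }
}
```

Splitting along the rate-of-change seam is one concrete way duplication-removal drives you toward "lots of little pieces".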

On Apr 19, 2:10 am, "J. B. Rainsberger" <m...@jbrains.ca> wrote:
> On Mon, Apr 18, 2011 at 15:48, philip schwarz
>

philip schwarz

Apr 20, 2011, 6:58:59 PM
to Growing Object-Oriented Software
Hi Rick,

great questions. I would also like to hear more about internals from
Nat and Steve. I asked some questions in thread "DI fwks don't
distinguish between 'internals' & 'peers' and often violate 'composite
simpler than the sum of its parts' principle"
(http://groups.google.com/group/growing-object-oriented-software/msg/1ff25a757b0fda20
and http://groups.google.com/group/growing-object-oriented-software/msg/00f946b2f6031012),
but did not get any answers. That could be because Luca Minudel had a
go at answering with http://groups.google.com/group/growing-object-oriented-software/msg/a4e500d0a3b325ef.
I wouldn't be surprised if Nat and Steve are just too busy to answer
every single question, and I perfectly understand that. How can they
find the time to write great books like GOOS if they are not allowed
to focus on essential work? It could also be that there is not much
more to be said about internals than has already been said in the
book. Alternatively, the reason why the book doesn't say much about
them could be because more work/thinking is under way on that subject.

Philip

J. B. Rainsberger

Apr 21, 2011, 10:55:53 AM
to growing-object-o...@googlegroups.com
On Wed, Apr 20, 2011 at 17:01, Rick Pingry <rpi...@gmail.com> wrote:

> My partner and I were just looking at a video you made a while back
> about integration tests being snake oil.  The GOOS book of course
> talks about Acceptance Tests, but perhaps you are making a
> differentiation between acceptance tests and integration tests.  I
> bring it up in this thread because I think it is relevant.

Short version: Don't use end-to-end tests to avoid flaws in the basic
correctness of your system.

The crux of the problem: The Average Person™ conflates "Acceptance
test" (help the Customer feel good that the feature is present) with
"System test" (help the Programmer feel good that the system
components work together correctly) because they /tend/ both to be
end-to-end tests. As a result, the Average Person doesn't write enough
microtests.

GOOS uses Acceptance Tests to guide programming and help Programmers
know when they've built enough stuff. Because they choose to implement
those tests in Java, the Average Reader™ might interpret those tests
as System Tests, and believe that they serve the purpose of making
sure the whole system works. Even when GOOS /does/ use them as System
Tests, the book also shows many, many microtests, thereby avoiding the
logic error that the Average Person™ makes.

> In there
> you take the approach that you should mock ALL collaborators.  In a
> bit of code we wrote recently, we did that very thing, but find that
> making changes to how the thing works is hard.  Refactoring becomes
> harder.  (I wrote about this before and got lots of great advice from
> you guys, but I think I understand better about what is going on now
> so I can speak <a little> more intelligently about it).  The tests
> become glue that makes any kind of change to HOW a class is
> implemented difficult if you ever want to extract an internal.  The
> GOOS book and this thread talk about a difference between peers and
> internals, and I get the impression that you should mock the peers and
> not mock the internals.  I am not so sure now after hearing your talk
> about that.  Am I missing something?

No. I agree about using test doubles for peers, not internals. I
simply use the painful pressure from trying to use test doubles for
all collaborators to help me classify them as peers or internals.
Sometimes I guess well about that classification as a shortcut, but
when I don't guess well, I can always take the long route.

>  If you are mocking out every
> collaboration between every class in your system, how do you refactor
> anything without breaking tests?  Are you supposed to be able to
> refactor without breaking tests?  Could you provide an example of how
> you do that?

I tend more often to throw away tests than break them. If changing a
client leads to changing a peer interface, then I switch to revising
the contract tests for that interface. Sometimes this means throwing
tests away, because sometimes this means throwing an interface away.
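For readers who haven't seen this pairing, a contrived sketch (with the talk's caveat that contrived examples understate the point; all names hypothetical): a collaboration test stubs the peer to return a value, while a reusable contract test checks that real implementations of the interface can honour such an answer.

```java
interface PriceFeed { int price(String symbol); }  // the peer interface under contract

class Portfolio {
    private final PriceFeed feed;
    Portfolio(PriceFeed feed) { this.feed = feed; }
    int value(String symbol, int quantity) { return feed.price(symbol) * quantity; }
}

// Collaboration test (client side): stub the peer, check the client's behaviour.
class PortfolioCollaborationTest {
    static void valueMultipliesPriceByQuantity() {
        Portfolio portfolio = new Portfolio(symbol -> 10);  // stub: price is always 10
        if (portfolio.value("ACME", 3) != 30) throw new AssertionError();
    }
}

// Contract test (supplier side): runnable against ANY PriceFeed implementation,
// checking that the kind of answer the stub gave is one a real peer could give.
abstract class PriceFeedContractTest {
    abstract PriceFeed createFeed();
    void reportsANonNegativePrice() {
        if (createFeed().price("ACME") < 0) throw new AssertionError();
    }
}
```

When refactoring changes the interface, JB's "revise the contract tests" step means updating `PriceFeedContractTest` once; every implementation is then rechecked against the new contract.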

I'm afraid I have no example to show you, because contrived examples
don't demonstrate the point adequately, and I don't own the IP rights
to the real-life examples I've used.

J. B. Rainsberger

Apr 21, 2011, 11:00:18 AM
to growing-object-o...@googlegroups.com
On Wed, Apr 20, 2011 at 17:54, Philip Schwarz
<philip.joh...@googlemail.com> wrote:

> I think you can take Bertrand Russel's (with all due respect to him),

Meyer, not Russell. :)

> Looking forward to learn more from you. Impertinent question: are you
> actually working on a book? (I think in the presentation you said one day
> you may write (another) one).

I'm working on a book in the sense that most people are working on a
book: I have an idea, it might not suck, and I've pretended to think
about it a lot.

J. B. Rainsberger

Apr 21, 2011, 11:07:33 AM
to growing-object-o...@googlegroups.com
On Wed, Apr 20, 2011 at 18:14, philip schwarz
<philip.joh...@googlemail.com> wrote:

> I think Kent Beck's "Lots of Little Pieces" is also important. Is it
> an axiom? Or is it subsumed by the "be small" axiom?

I have observed that removing duplication in tests drives me to lots
of little pieces, so I'd consider that a theorem.

> * Once and only once

Equivalent to Remove Duplication.

> * Lots of little pieces

Achieved by removing duplication in tests.

> * Replacing objects

Achieved by removing duplication in tests; contributes to "Lots of
little pieces".

> * Moving Objects

Equivalent to Context Independence, example of Dependency Inversion
Principle, achieved by removing duplication in tests.

> * Rates of Change

Expression of Cohesion principle (similar things together; different
things apart; we measure "similarity" or "difference" by rate of
change); low cohesion usually means writing tests with the same intent
on different parts of the system, so remove the duplication of intent.

Rick Pingry

unread,
Apr 21, 2011, 6:21:11 PM4/21/11
to Growing Object-Oriented Software
So, I made it through the rest of your "Integration Tests Are a Scam"
talk from Agile 2009. Like I said, I listened to it about a year ago,
but there is SO MUCH more that makes sense to me now that must have
just flown over my head before.
I understand much better the role of acceptance tests and how
they complement rather than compete against focussed tests.
Thanks.

A couple of questions, though:

1. You mention writing interface contract tests to make sure that a
particular return value is possible, because you had created a
collaboration test that shows that value returned from a stub. What
about errors and exception handling? How often should you deal
with problem situations that may be difficult to reproduce on their
own? In particular, I am thinking about dealing with 3rd-party
libraries. In my case, I am interacting with COM and MS Word/
Excel in an ActiveX control written in C++. There are all kinds of
things that the interface says MIGHT go wrong, but I don't know if I
can cause a case where I can FORCE them to go wrong (for instance if I
have a corrupt document). Should I work really hard to handle these
errors?

2. As we are trying to practice this, we are finding that we are
creating super tiny classes. We are wondering if we are taking the
single responsibility principle to an extreme. I think we will get
better at it with practice, dealing closely with naming and watching
for the "composite simpler than the sum of its parts" idea. I can
also see the refactoring that removes the middle man. Still, I think
we are struggling with it. Is there a way for a test to tell us that
we have broken something up too small, or for the interfaces we choose
to tell us that something is too small?

Thanks again
-- Rick

Steve Freeman

unread,
Apr 22, 2011, 5:34:11 AM4/22/11
to growing-object-o...@googlegroups.com
The only way to find the limits of a technique is to overdo it :)

It's hard to talk about this sort of thing in the abstract, perhaps you could post an example?

S.

philip schwarz

unread,
Apr 22, 2011, 12:30:43 PM4/22/11
to Growing Object-Oriented Software
Bertrand Russell...Doh...I have only been getting 5-6 hours of sleep
lately...I reckon the phrase "working out everything from first
principles" made me think of "Principia Mathematica", which was co-
written by philosopher Bertrand Russell, and then I swapped Russell
for Meyer in Bertrand Meyer.

Even if it will take years, I will keep looking forward to your book.

Philip

philip schwarz

unread,
Apr 22, 2011, 12:44:08 PM4/22/11
to Growing Object-Oriented Software
thanks for the analysis.

Do you agree that in the following, Kent Beck is talking about Uncle
Bob Martin's modernised version (http://www.objectmentor.com/resources/
articles/ocp.pdf) of Bertrand Meyer's Open Closed Principle (http://
en.wikipedia.org/wiki/Open/closed_principle) :

* Replacing objects - Good style leads to easily replaceable objects.
In a really good system, every time the user says "I want to do this
radically different thing," the developer says, "Oh, I'll have to make
a new kind of X and plug it in." When you can extend a system solely
by adding new objects without modifying any existing objects, then you
have a system that is flexible and cheap to maintain. You can't do
this if you don't have lots of little pieces.

Philip


Rick Pingry

unread,
Apr 22, 2011, 1:22:11 PM4/22/11
to Growing Object-Oriented Software
Well, I lied a little.

When I said that we are trying it in practice, I REALLY meant that we
were playing around with mind exercises about how it would really
work. We were thinking that we could walk through the exercise of
working this way to see if we could figure it out, and while doing so,
we kept tying ourselves in knots and wondering if we were doing it
right. Well, as you can guess, we were completely wrong about that.

We decided instead to try a little project that was real and actually
do it. This time our focus has been on the Interface Contract tests
and the Peer Collaborator tests that JBrains referred to in
"Integration Tests Are a Scam". So far, this has made ALL the
difference for us. It really helped us understand better what tests we
needed to write. We also came to the realization, because we were
focusing SO much on interfaces, of how important those interfaces are
to get right.

Another thing that focusing on testing around the interfaces has
taught us is that when there is a testing problem, we focus on making
the interface more testable rather than just thinking that "TDD sucks
and is too hard". We have realized that the design direction that TDD
gives us is not about designing the IMPLEMENTATION, but rather more
about designing the interfaces between classes. I am sure that your
book and other books that we have read have said all of this; as a
matter of fact, I am pretty sure I remember that in hindsight, but for
some reason, it just really resonated this time, as our tests have
focussed on these interfaces.

So, another thing we found is that with these interfaces, these
classes that we thought would be too small, once you get into making
them concrete, you realize there is always much more involved. Dealing
with the various conditions and error cases makes even "simple"
interfaces have plenty enough to give them a reason to be.

We are a little nervous about learning just how big an appropriate
interface is, but we don't have a good example, so we will let you
know if we come up with a real one. I am sure that will come with
experience, and we can later refactor to remove the middle-man, etc.

So, what to do when you are working with code that is under test, but
not under good tests? We still often have the temptation to "throw it
all away and start over", but we are experienced enough now to at
least know that is a bad idea. I know we are still learning, so I am
sure that in a year from now I will look back on what I am doing now
and think it is no good.

So now, I have a good bit of OLDER code that I have written with
the wrong tests, as we have been learning. Places where I feel the
tests make it harder to refactor. Should I approach this body of code
like legacy code, even though it has tests? Is it OK to throw away
classes that are under test? I guess you would have to, if for no
other reason than as your code evolves you find the interfaces change
or whatever. Perhaps the problem is that we worked so dang hard to
get these tests working in the first place that it is REALLY hard to
throw them away.

Thanks again for all your help
-- Rick

J. B. Rainsberger

unread,
Apr 22, 2011, 1:49:27 PM4/22/11
to growing-object-o...@googlegroups.com
On Thu, Apr 21, 2011 at 18:21, Rick Pingry <rpi...@gmail.com> wrote:

> So, I made it through the rest of your "Integration Tests Are a Scam"
> talk from Agile 2009.  Like I said, I listened to it about a year ago,
> but there is SO MUCH more that makes sense to me now that must have
> just flown over my head before.
> I understand much better the role of acceptance tests and how
> they complement rather than compete against focussed tests.
> Thanks.
>
> A couple questions though:.
>
> 1.  You mention writing interface contract tests to make sure that a
> particular return value is possible, because you had created a
> collaboration test that shows that value returned from a stub.  What
> about errors and exception handling?  How often should you deal
> with problem situations that may be difficult to reproduce on their
> own?  In particular, I am thinking about dealing with 3rd-party
> libraries.  In my case, I am interacting with COM and MS Word/
> Excel in an ActiveX control written in C++.  There are all kinds of
> things that the interface says MIGHT go wrong, but I don't know if I
> can cause a case where I can FORCE them to go wrong (for instance if I
> have a corrupt document).  Should I work really hard to handle these
> errors?

If the Client handles Error X, then I expect to write a contract test
that shows an example of when the Supplier throws Error X; otherwise,
perhaps I don't need to handle X.

Since I expect to write such a test, if I decide not to write such a
test, then I expect to write a comment justifying the decision.

If the Client simply rethrows Error X, then I might or might not write
a test for that, contract or otherwise. I might leave a comment
explaining the situation.
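JB's expectation above can be sketched as code. The following is a minimal Python illustration, not JB's actual example: the names (Client, CorruptDocumentError, the load() method) are all invented, and unittest.mock stands in for a real Supplier.

```python
from unittest.mock import Mock

# Hypothetical names for illustration only.
class CorruptDocumentError(Exception):
    pass

class Client:
    def __init__(self, supplier):
        self.supplier = supplier

    def open_report(self, path):
        # The Client handles Error X by signalling failure instead of crashing.
        try:
            return self.supplier.load(path)
        except CorruptDocumentError:
            return None

def collaboration_test_client_handles_corrupt_document():
    # Stub the Supplier to raise the error the contract says is possible.
    supplier = Mock()
    supplier.load.side_effect = CorruptDocumentError()
    client = Client(supplier)
    assert client.open_report("report.doc") is None

collaboration_test_client_handles_corrupt_document()
```

The matching contract test would show a real Supplier actually raising CorruptDocumentError for some corrupt input; if no such test can be written, that is a hint the Client may not need the handler at all.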

> 2. As we are trying to practice this, we are finding that we are
> creating super tiny classes.  We are wondering if we are taking the
> single responsibility principle to an extreme.  I think we will get
> better at it with practice, dealing closely with naming, watching for
> the composite simpler than the sum of its parts idea.  I can also see
> the refactoring that removes the middle man.  Still, we are struggling
> with it I think.  Is there a way for a test to tell us that we have
> broken something up too small or for the interfaces we choose to tell
> us that something is too small?

As Steve says, you had to step off the cliff to find the edge. Beyond
that, though, this technique encourages "Lots of Little Pieces", as
Kent Beck wrote, and finding duplication in the names of classes or
methods encourages putting tiny pieces together into
cohesive-yet-small pieces. Do both.

Good luck.

J. B. Rainsberger

unread,
Apr 22, 2011, 1:50:31 PM4/22/11
to growing-object-o...@googlegroups.com
On Fri, Apr 22, 2011 at 12:44, philip schwarz
<philip.joh...@googlemail.com> wrote:

> thanks for the analysis.

You're welcome.

> Do you agree that in the following, Kent Beck is talking about Uncle
> Bob Martin's modernised version (http://www.objectmentor.com/resources/
> articles/ocp.pdf) of Bertrand Meyer's  Open Closed Principle (http://
> en.wikipedia.org/wiki/Open/closed_principle) :
>
> * Replacing objects - Good style leads to easily replaceable objects.
> In a really good system, every time the user says "I want to do this
> radically different thing," the developer says, "Oh, I'll have to make
> a new kind of X and plug it in." When you can extend a system solely
> by adding new objects without modifying any existing objects, then you
> have a system that is flexible and cheap to maintain. You can't do
> this if you don't have lots of little pieces.

It sounds like OCP to me.

J. B. Rainsberger

unread,
Apr 22, 2011, 1:57:24 PM4/22/11
to growing-object-o...@googlegroups.com
On Fri, Apr 22, 2011 at 13:22, Rick Pingry <rpi...@gmail.com> wrote:

> Another thing that focusing on testing around the interfaces has
> taught us, is that when there is a testing problem, we focus on making
> the interface more testable rather than just thinking that "TDD sucks
> and is too hard".

I remember, about 6-12 months after starting to practise TDD, a change
in my thinking. Specifically, one day, when confronted with this
situation, I stopped blaming the tests and started blaming the
production code. I began to trust the tests. That has really benefited
me.

> We are a little nervous about learning just how big an appropriate
> interface is, but we don't have a good example, so we will let you
> know if we come up with a real example.  I am sure that will come with
> experience and we can later refactor to move the middle-man etc.

I believe this is largely an issue of personal style: some prefer to
break up large things; some prefer to unify little things. I belong to
the second group: I tend to build little things, then gather them
together when patterns in names suggest it.

> So, what to do when you are working with code that is under test, but
> not under good tests?  We still often have the temptation to "throw it
> all away and start over", but we are experienced enough now to at
> least know that is a bad idea.  I know we are still learning, so I am
> sure that in a year from now I will look back on what I am doing now
> and think it is no good.

I recommend simply writing one better test right now.

> So now, I have a good bit of OLDER code that I have written with
> the wrong tests, as we have been learning.  Places where I feel the
> tests make it harder to refactor.  Should I approach this body of code
> like legacy code, even though it has tests?  Is it OK to throw away
> classes that are under test?  I guess you would have to, if for no
> other reason than as your code evolves you find the interfaces change
> or whatever.  Perhaps the problem is that we worked so dang hard to
> get these tests working in the first place that it is REALLY hard to
> throw them away.

I never mind throwing away code, but then, I didn't always think that way.

Steve Freeman

unread,
Apr 23, 2011, 2:07:35 PM4/23/11
to growing-object-o...@googlegroups.com
A nice story...

It's hard to say what to do in these circumstances without seeing the "damage". My guess is that you could extract smaller pieces out of it that are better structured within the existing tests. In the end you have to make a judgement call as to what is pulling its weight. For example, one option might be to rely on tests a level up and rework the implementation cleanly. If you don't expect to touch this code again, then it might be best to just leave it. Otherwise, let your new requirements drive what you do next.

S.

On 22 Apr 2011, at 19:22, Rick Pingry wrote:
[...]

Dale Emery

unread,
Apr 23, 2011, 3:12:51 PM4/23/11
to growing-object-o...@googlegroups.com
Hi Joe,

No. I agree about using test doubles for peers, not internals. I simply use the painful pressure from trying to use test doubles for all collaborators to help me classify them as peers or internals. Sometimes I guess well about that classification as a shortcut, but when I don't guess well, I can always take the long route.

Seems to me there oughtta be criteria for cleanly distinguishing peers from internals, but I can't figure out what the criteria are. My usual tactic of stepping back and thinking about the class's responsibilities (which I define as triplets of context+stimulus+result) often helps me, but I can't articulate how and why it helps. Bah.

Dale

--
Dale Emery
Consultant to software teams and leaders
Web: http://dhemery.com

Dale Emery

unread,
Apr 23, 2011, 3:21:57 PM4/23/11
to growing-object-o...@googlegroups.com
Hi Rick,

Another thing that focusing on testing around the interfaces has taught us, is that when there is a testing problem, we focus on making the interface more testable rather than just thinking that "TDD sucks and is too hard".  We have realized that the design direction that TDD gives us is not about designing the IMPLEMENTATION, but rather more about designing the interfaces between classes.  I am sure that your book and other books that we have read have said all of this, as a matter of fact I am pretty sure I remember that on hind-sight, but for some reason, it just really resonated this time, as our tests have focussed on these interfaces.

With Steve and Nat and Joe, I'm forever having those "ahh.... so /that's/ what they meant!" moments.
 

J. B. Rainsberger

unread,
Apr 24, 2011, 1:04:47 PM4/24/11
to growing-object-o...@googlegroups.com
On Sat, Apr 23, 2011 at 15:12, Dale Emery <da...@dhemery.com> wrote:
> Hi Joe,
>
>> No. I agree about using test doubles for peers, not internals. I simply
>> use the painful pressure from trying to use test doubles for all
>> collaborators to help me classify them as peers or internals. Sometimes I
>> guess well about that classification as a shortcut, but when I don't guess
>> well, I can always take the long route.
>
> Seems to me there oughtta be criteria for cleanly distinguishing peers from
> internals, but I can't figure out what the criteria are. My usual tactic of
> stepping back and thinking about the class's responsibilities (which I
> define as triplets of context+stimulus+result) often helps me, but I can't
> articulate how and why it helps. Bah.

Clearly that means that we're Experts!

Rick Pingry

unread,
May 4, 2011, 5:15:19 PM5/4/11
to Growing Object-Oriented Software
A follow up question ...

As we have been working on this, we have noticed that if we are
following the "Tell Don't Ask" philosophy, there is not much to our
interface contract tests. Without looking at the collaborators
(which we do in the collaboration tests), there is nothing to assert.
Is this what we should be expecting? Should we even write an
interface contract test when the class just has a public member
function that has no return value or state to check?

Thanks Again,
-- Rick

J. B. Rainsberger

unread,
May 5, 2011, 4:27:49 PM5/5/11
to growing-object-o...@googlegroups.com

I wouldn't expect that, no. If a method returns no value and has no
observable side-effect, then what does it do?

Can you post some manner of example?

Rick Pingry

unread,
May 6, 2011, 11:45:13 AM5/6/11
to Growing Object-Oriented Software
Well, it has a side-effect through its collaborators. But that should
be in the collaboration tests, right? Not the interface contract.

In my specific example, I am writing code that manipulates MS Word
through its COM interface; say that I am telling WORD to save the
document. My interface might be as simple as:

void SaveWordDoc();

and the class will have the COM interface. I can test that the
appropriate WORD functions got called (say there are several of them
to get WORD to do just what I want; this composite is simpler than the
sum of its parts). I can mock the interface to WORD and make
collaboration tests that show the relevant calls are being made.

BUT

...from the perspective of the client of this class, only this
function gets called. So the interface contract should have this
function ... AND ... what else?
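A hedged sketch of the collaboration test Rick describes, in Python rather than C++/COM: the mock stands in for the Word interface, and the method names (activate_document, save, mark_clean) are invented for illustration.

```python
from unittest.mock import Mock, call

class DocumentSaver:
    def __init__(self, word):
        self.word = word  # the (mocked) interface to Word

    def save_word_doc(self):
        # One call on this interface drives several calls on Word:
        # the composite is simpler than the sum of its parts.
        self.word.activate_document()
        self.word.save()
        self.word.mark_clean()

def collaboration_test_save_word_doc_drives_word():
    word = Mock()
    DocumentSaver(word).save_word_doc()
    # Assert that the relevant calls are made, in order.
    word.assert_has_calls(
        [call.activate_document(), call.save(), call.mark_clean()])

collaboration_test_save_word_doc_drives_word()
```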

J. B. Rainsberger

unread,
May 6, 2011, 12:18:34 PM5/6/11
to growing-object-o...@googlegroups.com
On Fri, May 6, 2011 at 11:45, Rick Pingry <rpi...@gmail.com> wrote:

> Well, it has a side-effect through its collaborators.  But that should
> be in the collaboration tests right?  Not the interface contract.
>
> in my specific example, I am writing code that manipulates MS Word
> through its COM interface, say that I am telling WORD to save the
> document.  My interface might be as simple as :
>
> void SaveWordDoc();
>
> and the class will have the COM interface.  I can test that the
> appropriate WORD functions got called (say there are several of them
> to get WORD to do just what I want, this composite is simpler than the
> sum of its parts),  I can mock the interface to WORD and make
> collaboration tests that show the relevant calls are being made.
>
> BUT
>
> .. from the perspective of the client of this class, only this
> function gets called.  So the interface contract should have this
> function ... AND ... what else?

I understand better, thanks.

In this case, there is only a single, implicit contract: SaveWordDoc()
either throws an exception or doesn't. The interesting part might be
which exceptions it might throw. If it translates exceptions by
rewrapping them, then I would put the translated exceptions in the
contract. Beyond that, you're right, there's not much to check here.
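The "translated exceptions belong in the contract" idea might look like this in Python; ComError, DocumentSaveError, and the wrapper class are all invented for illustration.

```python
from unittest.mock import Mock

class ComError(Exception):
    """Stands in for whatever the third-party COM layer throws."""
    pass

class DocumentSaveError(Exception):
    """The translated exception that belongs to our contract."""
    pass

class ComDocumentSaver:
    def __init__(self, com_word):
        self.com_word = com_word

    def save_word_doc(self):
        try:
            self.com_word.save()
        except ComError as e:
            # Translate the third-party error into our own contract.
            raise DocumentSaveError(str(e))

def contract_test_translates_com_errors():
    com_word = Mock()
    com_word.save.side_effect = ComError("disk full")
    try:
        ComDocumentSaver(com_word).save_word_doc()
        assert False, "expected DocumentSaveError"
    except DocumentSaveError:
        pass  # the translated exception is the checkable part of the contract

contract_test_translates_com_errors()
```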

Some methods have contracts you can't really express as code, like
this one. In my Point of Sale exercise, we have a Display with
displayPrice(amount) and displayMessage(text), and it's tough to
describe the contract with code, so we simply say in words that it's
the implementation's responsibility to try to displayPrice or
displayMessage appropriately (text, graphics, whatever) and not blow
up. This bothers me, but given that there are other, more interesting
contracts to check, I let it go.

I apologise for not having a more satisfying answer. :)

Rick Pingry

unread,
May 6, 2011, 2:24:29 PM5/6/11
to Growing Object-Oriented Software
No, that's pretty much what I was thinking too, so I'm glad to get
your confirmation. Your response does remind me that even with this
simple interface, I DO need to concern myself with whether or not I am
throwing exceptions.

Thanks
-- Rick

Nat Pryce

unread,
May 6, 2011, 3:51:12 PM5/6/11
to growing-object-o...@googlegroups.com
When code is heavily oriented around "tell don't ask" interactions, I
think of it more in terms of event producers and consumers than
procedures and callers. In that event-driven style, I don't think of
the receiver of an event conforming to a contract that specifies how
the event handler behaves. Instead I think of the sender conforming to
a contract that specifies how valid events are emitted -- e.g.
constraining parameter values and the order of outgoing events if
applicable. I use mock objects to describe that contract.

--Nat

www.natpryce.com
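One possible Python rendering of Nat's point, with invented names: the mock listener records the sender's outgoing events, so the test can constrain their parameters and order.

```python
from unittest.mock import Mock, call

class Parser:
    """The sender: emits begin(), then item(...) per token, then end()."""
    def __init__(self, listener):
        self.listener = listener

    def parse(self, text):
        self.listener.begin()
        for token in text.split():
            self.listener.item(token)
        self.listener.end()

def test_sender_emits_a_valid_event_sequence():
    listener = Mock()
    Parser(listener).parse("a b")
    # The mock describes the sender's contract: which events, with
    # which parameters, in which order.
    assert listener.mock_calls == [
        call.begin(), call.item("a"), call.item("b"), call.end()]

test_sender_emits_a_valid_event_sequence()
```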

J. B. Rainsberger

unread,
May 6, 2011, 7:00:16 PM5/6/11
to growing-object-o...@googlegroups.com
On Fri, May 6, 2011 at 15:51, Nat Pryce <nat....@gmail.com> wrote:

> When code is heavily oriented around "tell don't ask" interactions, I
> think of it more in terms of event producers and consumers than
> procedures and callers. In that event-driven style, I don't think of
> the receiver of an event conforming to a contract that specifies how
> the event handler behaves. Instead I think of the sender conforming to
> a contract that specifies how valid events are emitted -- e.g.
> constraining parameter values and the order of outgoing events if
> applicable. I use mock objects to describe that contract.

+1

I wonder, Rick, how to redesign your system to avoid the save() method
entirely. I don't necessarily think it would be better, but the
exercise might bear fruit.

David Stanek

unread,
May 6, 2011, 8:53:11 PM5/6/11
to growing-object-o...@googlegroups.com
On Fri, May 6, 2011 at 3:51 PM, Nat Pryce <nat....@gmail.com> wrote:
When code is heavily oriented around "tell don't ask" interactions, I
think of it more in terms of event producers and consumers than
procedures and callers. In that event-driven style, I don't think of
the receiver of an event conforming to a contract that specifies how
the event handler behaves. Instead I think of the sender conforming to
a contract that specifies how valid events are emitted -- e.g.
constraining parameter values and the order of outgoing events if
applicable. I use mock objects to describe that contract.


Where is the best place to find an example of this usage of mock objects? 

--
David
blog: http://www.traceback.org
twitter: http://twitter.com/dstanek

Steve Freeman

unread,
May 7, 2011, 6:00:48 AM5/7/11
to growing-object-o...@googlegroups.com

er, the book?

S

J. B. Rainsberger

unread,
May 7, 2011, 10:00:44 AM5/7/11
to growing-object-o...@googlegroups.com

Yes. And eventually at http://github.com/jbrains/goos

Rick Pingry

unread,
May 7, 2011, 10:17:43 AM5/7/11
to Growing Object-Oriented Software
In those kinds of situations, when there is not much for the caller to
do but you want to specify the valid values for any parameters, would
you use contract asserts within the actual production code to specify
and describe the valid parameters, rather than mocks and
specifications? Perhaps both?

I suppose that the mocks on the collaboration-test end of the tests
show that the class under test is not breaking the contract, and that
would be important.

What if the interface itself, rather than being a true interface, were
an abstract class? The base class functions would validate function
parameters with contract asserts. Classes that implement the
interface would call the base class first to validate the parameters,
then implement their own functionality. That way, wherever you had
stubs or mocks, they would also inherit this contract, and would throw
whenever an offending client broke the contract.

The up-side to this that I can see is that you only need to specify
the contract for the interface in one place, in the definition of the
interface itself, and you can do it in code as well.

The down-side is that collaboration tests will fail when they break
the contract, even if that is not specifically the action you are
testing.

So far I think the up-side wins.

Steve Freeman

unread,
May 7, 2011, 11:23:06 AM5/7/11
to growing-object-o...@googlegroups.com
In what situation could the class under test pass bad parameters to a collaborator? If you've got this sorted out then why would you want to test the validity of the outbound parameters?

The trouble with implementation by inheritance in the languages we usually work in is that it blocks the use of inheritance for other shared implementation.

Over the years, I've gone right off this kind of defensive programming within my code, especially now I don't get memory smashes in VM languages. I only use it at the boundaries where my code interacts with the outside world.

S.

philip schwarz

unread,
May 7, 2011, 11:38:23 AM5/7/11
to Growing Object-Oriented Software
@JBrains

>> jasongorman @samwessel Specifically, how do you test-drive methods that just call other methods ;)
>>...
>>jbrains @jasongorman @samwessel When I check collaborations between objects, I get feedback on how complicated they are. I like that.
>>jbrains @jasongorman @samwessel If a method only delegates, then I don't test it: it's too simple to break.

I have just watched Integration Tests Are a Scam
http://www.infoq.com/presentations/integration-tests-scam (yes,
again!). Here is my summary of collaboration tests and contract tests
(assuming command/query separation;
c = client = Subject Under Test;
s = server):

#######################
# COLLABORATION TESTS #
#######################

###
#1#
###
Does c ask its collaborator s the right questions?
Does c invoke the right method on s, with the right parameters and for
the right reasons?

1.a
Does c send s the right command at the right time?
given(facts).when(conditions).then(c.orders(s).toCarryOut(action))

1.b
Does c sends s the right query at the right time?
given(facts).when(conditions).then(c.tells(s).toSupplyFooUsing(data))

###
#2#
###
Can c handle all its collaborator s' Responses?
Does c handle s' response as expected?
given(facts).when(conditions).then(c.correctlyHandles(foo).returnedBy(s).whenAskedToSupplyFooUsing(data))

##################
# CONTRACT TESTS #
##################

###
#1#
###
Does s even try to answer the question c is asking?
Can s handle c's question in the first place?

1.a
Can s handle c's command?
s.canObeyOrderToCarryOut(action)

1.b
Can s handle c's query?
s.canSupplyFooUsing(data)

###
#2#
###
Does s give c the answer we think it can?
Does s actually respond to c's query the way we think it does, at all?
s.respondsWith(foo).whenToldToSupplyFooUsing(data)
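To make the summary above concrete, here is an invented Python example (PriceReporter as c, Catalogue as s; none of it comes from the talk): a collaboration test of type 2 and the contract test of type 2 that backs up the stubbed response.

```python
from unittest.mock import Mock

class PriceReporter:
    """c: asks s (the catalogue) a query, then sends a command to a display."""
    def __init__(self, catalogue, display):
        self.catalogue = catalogue
        self.display = display

    def on_barcode(self, barcode):
        price = self.catalogue.price_for(barcode)  # query (1.b)
        self.display.show(price)                   # command (1.a)

# Collaboration test 2: can c handle the response we stubbed from s?
def collaboration_test_reports_price_from_catalogue():
    catalogue, display = Mock(), Mock()
    catalogue.price_for.return_value = 795
    PriceReporter(catalogue, display).on_barcode("12345")
    display.show.assert_called_once_with(795)

# Contract test 2: does a real s actually respond the way we assumed?
class InMemoryCatalogue:
    def __init__(self, prices):
        self.prices = prices

    def price_for(self, barcode):
        return self.prices[barcode]

def contract_test_catalogue_supplies_price():
    assert InMemoryCatalogue({"12345": 795}).price_for("12345") == 795

collaboration_test_reports_price_from_catalogue()
contract_test_catalogue_supplies_price()
```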

############
# Question #
############

When you say 'If a method only delegates, then I don't test it: it's
too simple to break', do you mean the following:

i) You don't bother with collaboration test 1.a, i.e. you don't bother
with
given(facts).when(conditions).thenAssert(c.orders(s).toCarryOut(action))
ii) You do bother with contract test 1.a, i.e. you do bother with
s.canObeyOrderToCarryOut(action)

J. B. Rainsberger

unread,
May 7, 2011, 1:54:58 PM5/7/11
to growing-object-o...@googlegroups.com
On Sat, May 7, 2011 at 10:17, Rick Pingry <rpi...@gmail.com> wrote:

> In those kids of situations, when there is not much for the caller to
> do, but you are wanting to specify the valid values for any
> parameters, to use contract asserts within the actual production code
> to specify and describe the valid parameters, rather than mocks and
> specifications?  Perhaps both?
>
> I suppose that the mocks on the collaboration-test end of the tests
> show that the class under test is not breaking the contract, and that
> would be important.

If A calls B.foo(x,y,z) and I want to check that A gets x,y,z right,
then I write collaboration tests that show how A chooses
characteristic values for x,y,z. For example, which inputs to A or
which system states cause A to choose x=12 instead of x=15 or x=20?

If B.foo(x,y,z) rejects x<0, y<10, z>50, then I write contract tests
on B.foo() for the boundary cases.
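That division of labour might be sketched like this; A, B, and the characteristic values are invented for illustration.

```python
from unittest.mock import Mock

class A:
    def __init__(self, b):
        self.b = b

    def handle(self, urgent):
        # The collaboration tests pin down which inputs make A choose
        # which characteristic value of x.
        self.b.foo(12 if urgent else 20)

def collaboration_test_a_chooses_x_12_when_urgent():
    b = Mock()
    A(b).handle(urgent=True)
    b.foo.assert_called_once_with(12)

class B:
    def foo(self, x):
        # The contract tests exercise the boundary: B.foo() rejects x < 0.
        if x < 0:
            raise ValueError("x must be >= 0")
        return x

def contract_test_b_foo_rejects_negative_x():
    try:
        B().foo(-1)
        assert False, "expected ValueError"
    except ValueError:
        pass

collaboration_test_a_chooses_x_12_when_urgent()
contract_test_b_foo_rejects_negative_x()
```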

> What if the interface itself, rather than being a true interface, were
> an abstract class?  The base class functions validated function
> parameters with contract asserts.  Classes that implemented the
> interface should call the base class first to validate the parameters,
> then implement its own functionality.  That way, wherever you had
> stubs or mocks, they would also inherit this contract, and would throw
> whenever an offending client broke the contract.

Every abstract class is the union of a concrete class and an
interface. I often split abstract classes up into their two pieces,
then proceed as normal. The abstract portion of an abstract class
provides a policy that the concrete part uses, so why not separate
them?

In the case you cite, if the concrete class validates the parameters,
that's easy to test. Then the interface can assume that the parameters
are valid. I might add a comment to the interface saying, "If you
connect me to Foo, then you get parameter validation free. No need to
program defensively in the implementation." I might even add an empty
contract test saying "Don't validate parameters in the implementation;
use Foo instead." Hamcrest uses a method for this in their Matcher
interface as a reminder not to implement Matcher directly, but rather
to subclass BaseMatcher.
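Splitting the abstract class into its two pieces might look like this; the names (Saver, ValidatingSaver) are invented, and the sketch only shows the shape of the idea.

```python
from unittest.mock import Mock

class Saver:
    """The interface half: implementations may assume a valid path."""
    def save(self, path):
        raise NotImplementedError

class ValidatingSaver:
    """The concrete half: validates parameters, then delegates to any Saver."""
    def __init__(self, implementation):
        self.implementation = implementation

    def save(self, path):
        if not path:
            raise ValueError("path must be non-empty")
        return self.implementation.save(path)

def contract_test_validating_saver_rejects_empty_path():
    try:
        ValidatingSaver(Mock()).save("")
        assert False, "expected ValueError"
    except ValueError:
        pass

def collaboration_test_validating_saver_delegates_valid_calls():
    implementation = Mock()
    ValidatingSaver(implementation).save("report.doc")
    implementation.save.assert_called_once_with("report.doc")

contract_test_validating_saver_rejects_empty_path()
collaboration_test_validating_saver_delegates_valid_calls()
```

The validation policy is now a small concrete class that is easy to test once, while every implementation of the interface can assume valid parameters.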

> The up-side to this I can see is that you only need to specify the
> contract for the interface in one place, in the definition for the
> interface itself and you can do it in code as well.
>
> The down-side is that collaboration tests will fail when they break
> the contract, even if that is not specifically the action you are
> testing.
>
> So far I think the up-side wins.

Perhaps it does, but I wonder aloud whether separating the concrete
bit from the abstract bit wins even more.

J. B. Rainsberger

unread,
May 7, 2011, 1:58:50 PM5/7/11
to growing-object-o...@googlegroups.com
On Sat, May 7, 2011 at 11:38, philip schwarz
<philip.joh...@googlemail.com> wrote:

> When you say 'If a method only delegates, then I don't test it: it's
> too simple to break', do you mean the following:
>
> i) You dont bother with collaboration test 1.a, i.e you don't bother
> with
> given(facts).when(conditions).thenAssert(c.orders(s).toCarryOut(action))
> ii) You do bother with contract test 1.a, i.e. you do bother with
> s.canObeyOrderToCarryOut(action)

I mean specifically this:

class A:
    def foo(self, x, y, z):
        self.peer.foo(x, y, z)

I don't worry about testing A.foo(), but I almost always test
peer.foo(). If A.foo() breaks, it's peer.foo()'s fault.

If peer is an interface, then I /might/ write the corresponding
collaboration test for A using peer, and there would almost certainly
be a contract test for peer.foo().

Does that help?

philip schwarz

unread,
May 7, 2011, 2:41:22 PM5/7/11
to Growing Object-Oriented Software
Yes, that answers my question, thank you.
