> So it seems to me that GOOS balances the amount of Information Hiding
> it practices in order to both allow it to achieve a healthy degree of
> Context Independence, and to allow it to specify the communication
> protocols that describe how objects work together.
>
> Am I right? Is there more? This is just my first impression having
> simply re-read the quoted passages from chapters 6 and 7.
>
> If you have got this far, thank you for your patience. I hope you
> found this useful.
I spent the afternoon today teaching about context independence, with
the simple and clear example of Enterprise Java framework extension
points. The Enterprise Java literature still preaches the use of
Service Locator, which ratchets up the context /dependence/ to 11.
Instead, I teach the "ring" architecture pattern, just as Steve and
Nat describe it in the book -- Great Minds, and all that -- which
encourages designing objects that know nothing about their runtime
environment and precious little about where data comes from or where
it goes. I don't even have to teach the ring: you can stick with the
Four Elements of Simple Design and try to test-drive with mock objects
and you'll discover it.
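To make the contrast concrete, here is a minimal sketch (all names invented, not from this thread) of a Service Locator lookup next to a context-independent object that simply receives its collaborators:

```python
# Hypothetical names for illustration only.

class ServiceLocator:
    """Global registry: objects reach out to find their dependencies."""
    _services = {}

    @classmethod
    def register(cls, name, service):
        cls._services[name] = service

    @classmethod
    def lookup(cls, name):
        return cls._services[name]


class CoupledReportSender:
    """Context-DEPENDENT: hard-wired to the locator and its key names."""
    def send(self, report):
        mailer = ServiceLocator.lookup("mailer")
        mailer.deliver(report)


class ReportSender:
    """Context-INDEPENDENT: knows nothing about where the mailer comes
    from; any object with a deliver() method will do."""
    def __init__(self, mailer):
        self._mailer = mailer

    def send(self, report):
        self._mailer.deliver(report)


class RecordingMailer:
    """A trivial stand-in that records what was delivered."""
    def __init__(self):
        self.delivered = []

    def deliver(self, report):
        self.delivered.append(report)
```

ReportSender can be exercised with any stand-in passed through its constructor, which is exactly the pressure that test-driving with mock objects applies.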
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Diaspar Software Services :: http://www.diasparsoftware.com
Author, JUnit Recipes
2005 Gordon Pask Award for contribution to Agile practice :: Agile
2010: Learn. Practice. Explore.
We're not saying that it cannot be both. In fact, we hope that the
book demonstrates that it CAN be both.
What we mean is that if we have to make a choice -- maybe when
evaluating technologies -- we'll pick the sustainable approach and
work to achieve simplicity rather than build on something that very
fast gets you somewhere that is not sustainable in the long term.
(I'm sure we can all name a few technologies like that).
--Nat
Not necessarily, because object structures can be self-similar --
internal parts can have the same type as the whole.
On 19 April 2011 19:53, Rick Pingry <rpi...@gmail.com> wrote:
> hmph. that makes me sad. Why does it have to be one way or the
> other? If code is not easy to write, but it is supposed to make it
> easier "in the future", how do you know you are not gold-plating? I
> have a hard time selling this idea, even to myself, much less trying
> to explain it to other people. Can't we come up with a methodology
> that is right and helpful NOW, not just in the future?
I got lost in the middle of that last sentence. One way to look at it is in terms of lifecycle. Internal objects are created and released by the owner. If an object needs to be passed in, then it's probably a collaborator. Of course, that begs the question of what the lifecycle /should/ be.
> It sounds like to have this kind of organization, this kind of class
> [...]
> help you drive this design rather than having you do it up front.
We find we can do this incrementally, and that part of the TDD process involves the discovery of internal and collaborating objects. Of course, we have to have some design sense while we're doing it. It's not automatic.
> Or is it that while developing the implementation of each level of
> abstraction, that you are doing TDD for the next level down? Or
> perhaps it is in the course of refactoring? How do you know when you
> are ready to break out a new level of abstraction?
I tend to think outside-in rather than top-down. I'm not sure I can explain the rules for breaking out abstractions except some experience and looking for complexities in the code and tests.
> "We value code that is easy to maintain over code that is easy to
> write ..."
>
> hmph. that makes me sad. Why does it have to be one way or the
> other? If code is not easy to write, but it is supposed to make it
> easier "in the future", how do you know you are not gold-plating? I
> have a hard time selling this idea, even to myself, much less trying
> to explain it to other people. Can't we come up with a methodology
> that is right and helpful NOW, not just in the future?
First, the two are not exclusive. It's a matter of prioritisation. What I do know is that code is read far more often than it's changed, and that I don't believe I've really understood the problem until the code is readable. I've seen design flaws become obvious by getting the name right. I've seen team after team crippled by code that slipped out of maintainability one line at a time.
S.
> My partner and I were just looking at a video you made a while back
> about integration tests being snake oil. The GOOS book of course
> talks about Acceptance Tests, but perhaps you are making a
> differentiation between acceptance tests and integration tests. I
> bring it up in this thread because I think it is relevant.
Short version: don't rely on end-to-end tests to check the basic
correctness of your system.
The crux of the problem: The Average Person™ conflates "Acceptance
test" (help the Customer feel good that the feature is present) with
"System test" (help the Programmer feel good that the system
components work together correctly) because they /tend/ both to be
end-to-end tests. As a result, the Average Person doesn't write enough
microtests.
GOOS uses Acceptance Tests to guide programming and help Programmers
know when they've built enough stuff. Because they choose to implement
those tests in Java, the Average Reader™ might interpret those tests
as System Tests, and believe that they serve the purpose of making
sure the whole system works. Even when GOOS /does/ use them as System
Tests, the book also shows many, many microtests, thereby avoiding the
logic error that the Average Person™ makes.
> In there
> you take the approach that you should mock ALL collaborators. In a
> bit of code we wrote recently, we did that very thing, but find that
> making changes to how the thing works is hard. Refactoring becomes
> harder. (I wrote about this before and got lots of great advice from
> you guys, but I think I understand better about what is going on now
> so I can speak <a little> more intelligently about it). The tests
> become glue that makes any kind of change to HOW a class is
> implemented difficult if you ever want to extract an internal. The
> GOOS book and this thread talk about a difference between peers and
> internals, and I get the impression that you should mock the peers and
> not mock the internals. I am not so sure now after hearing your talk
> about that. Am I missing something?
No. I agree about using test doubles for peers, not internals. I
simply use the painful pressure from trying to use test doubles for
all collaborators to help me classify them as peers or internals.
Sometimes I guess well about that classification as a shortcut, but
when I don't guess well, I can always take the long route.
> If you are mocking out every
> collaboration between every class in your system, how do you refactor
> anything without breaking tests? Are you supposed to be able to
> refactor without breaking tests? Could you provide an example of how
> you do that?
I tend more often to throw away tests than break them. If changing a
client leads to changing a peer interface, then I switch to revising
the contract tests for that interface. Sometimes this means throwing
tests away, because sometimes this means throwing an interface away.
I'm afraid I have no example to show you, because contrived examples
don't demonstrate the point adequately, and I don't own the IP rights
to the real-life examples I've used.
> I think you can take Bertrand Russel's (with all due respect to him),
Meyer, not Russell. :)
> Looking forward to learn more from you. Impertinent question: are you
> actually working on a book? (I think in the presentation you said one day
> you may write (another) one).
I'm working on a book in the sense that most people are working on a
book: I have an idea, it might not suck, and I've pretended to think
about it a lot.
> I think Kent Beck's "Lots of Little Pieces" is also important. Is it
> an axiom? Or is it subsumed by the "be small" axiom?
I have observed that removing duplication in tests drives me to lots
of little pieces, so I'd consider that a theorem.
> * Once and only once
Equivalent to Remove Duplication.
> * Lots of little pieces
Achieved by removing duplication in tests.
> * Replacing objects
Achieved by removing duplication in tests; contributes to "Lots of
little pieces".
> * Moving Objects
Equivalent to Context Independence, example of Dependency Inversion
Principle, achieved by removing duplication in tests.
> * Rates of Change
Expression of Cohesion principle (similar things together; different
things apart; we measure "similarity" or "difference" by rate of
change); low cohesion usually means writing tests with the same intent
on different parts of the system, so remove the duplication of intent.
It's hard to talk about this sort of thing in the abstract, perhaps you could post an example?
S.
> So, I made it through the rest of your "Integration Tests Are a Scam"
> talk from Agile 2009. Like I said, I listened to it about a year ago,
> but there is SO MUCH more that makes sense to me now that must have
> just flown over my head before.
> I understand much better about the role of acceptance tests and how
> they complement rather than compete against focussed tests.
> Thanks.
>
> A couple of questions, though:
>
> 1. You mention writing interface contract tests to make sure that a
> particular return value is possible, because you had created a
> collaboration test that shows that value returned from a stub. What
> about errors and exception handling? How often should you be dealing
> with problem situations that may be difficult to reproduce on their
> own. In particular, I am thinking about dealing with 3rd
> party libraries. In my case, I am interacting with COM and MS Word/
> Excel in an ActiveX control written in C++. There are all kinds of
> things that the interface says MIGHT go wrong, but I don't know if I
> can cause a case where I can FORCE them to go wrong (for instance if I
> have a corrupt document). Should I work really hard to handle these
> errors?
If the Client handles Error X, then I expect to write a contract test
that shows an example of when the Supplier throws Error X; otherwise,
perhaps I don't need to handle X.
Since I expect to write such a test, if I decide not to write such a
test, then I expect to write a comment justifying the decision.
If the Client simply rethrows Error X, then I might or might not write
a test for that, contract or otherwise. I might leave a comment
explaining the situation.
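A sketch of that correspondence, with invented names: the contract test proves the supplier really can raise the error that the client's handling code (and its collaboration tests) assume is possible.

```python
# Hypothetical names; a sketch of pairing a contract test with
# the error handling a client claims to perform.

class CorruptDocumentError(Exception):
    pass


class FlakyDocumentStore:
    """A supplier whose contract says load() may raise
    CorruptDocumentError."""
    def __init__(self, corrupt=False):
        self._corrupt = corrupt

    def load(self, name):
        if self._corrupt:
            raise CorruptDocumentError(name)
        return "contents of " + name


def contract_test_store_can_raise():
    """Contract test: show at least one case where the supplier
    actually throws the error the client handles."""
    store = FlakyDocumentStore(corrupt=True)
    try:
        store.load("report.doc")
    except CorruptDocumentError:
        return True
    return False
```

If no such case can be demonstrated, that is the signal to question whether the client needs to handle the error at all.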
> 2. As we are trying to practice this, we are finding that we are
> creating super tiny classes. We are wondering if we are taking the
> single responsibility principle to an extreme. I think we will get
> better at it with practice, dealing closely with naming, watching for
> the composite simpler than the sum of its parts idea. I can also see
> the refactoring that removes the middle man. Still, we are struggling
> with it I think. Is there a way for a test to tell us that we have
> broken something up too small or for the interfaces we choose to tell
> us that something is too small?
As Steve says, you have to step off the cliff to find the edge. Beyond
that, though, this technique encourages "Lots of Little Pieces", as
Kent Beck wrote, and finding duplication in the names of classes or
methods encourages putting tiny pieces together into
cohesive-yet-small pieces. Do both.
Good luck.
> thanks for the analysis.
You're welcome.
> Do you agree that in the following, Kent Beck is talking about Uncle
> Bob Martin's modernised version (http://www.objectmentor.com/resources/
> articles/ocp.pdf) of Bertrand Meyer's Open Closed Principle (http://
> en.wikipedia.org/wiki/Open/closed_principle) :
>
> * Replacing objects - Good style leads to easily replaceable objects.
> In a really good system, every time the user says "I want to do this
> radically different thing," the developer says, "Oh, I'll have to
> make
> a new kind of X and plug it in." When you can extend a system solely
> by adding new objects without modifying any existing objects, then
> you
> have a system that is flexible and cheap to maintain. You can't do
> this if you don't have lots of little pieces.
It sounds like OCP to me.
> Another thing that focusing on testing around the interfaces has
> taught us, is that when there is a testing problem, we focus on making
> the interface more testable rather than just thinking that "TDD sucks
> and is too hard".
I remember, about 6-12 months after starting to practise TDD, a change
in my thinking. Specifically, one day, when confronted with this
situation, I stopped blaming the tests and started blaming the
production code. I began to trust the tests. That has really benefited
me.
> We are a little nervous about learning just how big an appropriate
> interface is, but we don't have a good example, so we will let you
> know if we come up with a real example. I am sure that will come with
> experience and we can later refactor to move the middle-man etc.
I believe this is largely an issue of personal style: some prefer to
break up large things; some prefer to unify little things. I belong to
the second group: I tend to build little things, then gather them
together when patterns in names suggest it.
> So, what to do when you are working with code that is under test, but
> not under good tests? We still often have the temptation to "throw it
> all away and start over", but we are experienced enough now to at
> least know that is a bad idea. I know we are still learning, so I am
> sure that in a year from now I will look back on what I am doing now
> and think it is no good.
I recommend simply writing one better test right now.
> So now, I have a good bit of OLDER code that I have written with
> the wrong tests, as we have been learning. Places where I feel the
> tests make it harder to refactor. Should I approach this body of code
> like legacy code, even though it has tests? Is it ok to throw away
> classes that are under test? I guess you would have to, if for no
> other reason than as your code evolves you find the interfaces change
> or whatever. Perhaps the problem is that we worked so dang hard to
> get these tests working in the first place that it is REALLY hard to
> throw them away.
I never mind throwing away code, but then, I didn't always think that way.
It's hard to say what to do in these circumstances without seeing the "damage". My guess is that you could extract smaller pieces out of it that are better structured within the existing tests. In the end you have to make a judgement call as to what is pulling its weight. For example, one option might be to rely on tests a level up and rework the implementation cleanly. If you don't expect to touch this code again, then it might be best to just leave it. Otherwise, let your new requirements drive what you do next.
S.
On 22 Apr 2011, at 19:22, Rick Pingry wrote:
> [...]
> No. I agree about using test doubles for peers, not internals. I simply use the painful pressure from trying to use test doubles for all collaborators to help me classify them as peers or internals. Sometimes I guess well about that classification as a shortcut, but when I don't guess well, I can always take the long route.
> Another thing that focusing on testing around the interfaces has taught us, is that when there is a testing problem, we focus on making the interface more testable rather than just thinking that "TDD sucks and is too hard". We have realized that the design direction that TDD gives us is not about designing the IMPLEMENTATION, but rather more about designing the interfaces between classes. I am sure that your book and other books that we have read have said all of this, as a matter of fact I am pretty sure I remember that on hind-sight, but for some reason, it just really resonated this time, as our tests have focussed on these interfaces.
Clearly that means that we're Experts!
I wouldn't expect that, no. If a method returns no value and has no
observable side-effect, then what does it do?
Can you post some manner of example?
> Well, it has a side-effect through its collaborators. But that should
> be in the collaboration tests right? Not the interface contract.
>
> in my specific example, I am writing code that manipulates MS Word
> through its COM interface, say that I am telling WORD to save the
> document. My interface might be as simple as :
>
> void SaveWordDoc();
>
> and the class will have the COM interface. I can test that the
> appropriate WORD functions got called (say there are several of them
> to get WORD to do just what I want, this composite is simpler than the
> sum of its parts), I can mock the interface to WORD and make
> collaboration tests that show the relevant calls are being made.
>
> BUT
>
> .. from the perspective of the client of this class, only this
> function gets called. So the interface contract should have this
> function ... AND ... what else?
I understand better, thanks.
In this case, there is only a single, implicit contract: SaveWordDoc()
either throws an exception or doesn't. The interesting part might be
which exceptions it might throw. If it translates exceptions by
rewrapping them, then I would put the translated exceptions in the
contract. Beyond that, you're right, there's not much to check here.
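A sketch of that implicit contract, with invented names standing in for the COM details: the wrapper rewraps whatever the COM layer throws, and the contract test shows the translated exception is what callers actually see.

```python
# Hypothetical names; a sketch of exception translation as the
# only testable part of a SaveWordDoc()-style contract.

class DocumentSaveError(Exception):
    """The translated exception the contract promises."""


class WordDocumentSaver:
    """Wraps a COM-ish interface; rewraps its failures so callers
    never see the raw COM error."""
    def __init__(self, com_word):
        self._word = com_word

    def save_word_doc(self):
        try:
            self._word.save()
        except Exception as e:
            raise DocumentSaveError(str(e))


class BrokenWord:
    """A stand-in for a COM layer that fails to save."""
    def save(self):
        raise RuntimeError("0x80004005: unspecified COM failure")


def contract_test_translates_errors():
    saver = WordDocumentSaver(BrokenWord())
    try:
        saver.save_word_doc()
    except DocumentSaveError:
        return True
    except Exception:
        return False
    return False
```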
Some methods have contracts you can't really express as code, like
this one. In my Point of Sale exercise, we have a Display with
displayPrice(amount) and displayMessage(text), and it's tough to
describe the contract with code, so we simply say in words that it's
the implementation's responsibility to try to displayPrice or
displayMessage appropriately (text, graphics, whatever) and not blow
up. This bothers me, but given that there are other, more interesting
contracts to check, I let it go.
I apologise for not having a more satisfying answer. :)
--Nat
> When code is heavily oriented around "tell don't ask" interactions, I
> think of it more in terms of event producers and consumers than
> procedures and callers. In that event-driven style, I don't think of
> the receiver of an event conforming to a contract that specifies how
> the event handler behaves. Instead I think of the sender conforming to
> a contract that specifies how valid events are emitted -- e.g.
> constraining parameter values and the order of outgoing events if
> applicable. I use mock objects to describe that contract.
+1
I wonder, Rick, how to redesign your system to avoid the save() method
entirely. I don't necessarily think it would be better, but the
exercise might bear fruit.
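One way to express that sender-side contract is with Python's standard unittest.mock, letting the mock record the outgoing events and their order (the domain names below are invented):

```python
from unittest.mock import Mock, call


class DocumentEditor:
    """An event producer: tells its listener what happened, asks
    nothing back. The contract constrains the outgoing events:
    opened() before saved(), with the same name in both."""
    def __init__(self, listener):
        self._listener = listener

    def edit_and_save(self, name):
        self._listener.opened(name)
        self._listener.saved(name)


def check_sender_contract():
    listener = Mock()
    DocumentEditor(listener).edit_and_save("budget.doc")
    # The mock records the outgoing events in order.
    return listener.mock_calls == [call.opened("budget.doc"),
                                   call.saved("budget.doc")]
```

The assertion is on the sender's behaviour, not the receiver's, which matches the event-producer view of the contract.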
er, the book?
S
The trouble with implementation inheritance in the languages we usually work in is that it blocks the use of inheritance for other shared implementation.
Over the years, I've gone right off this kind of defensive programming within my code, especially now I don't get memory smashes in VM languages. I only use it at the boundaries where my code interacts with the outside world.
S.
> In those kinds of situations, when there is not much for the caller to
> do, but you want to specify the valid values for any parameters, is
> it better to use contract asserts within the actual production code
> to specify and describe the valid parameters, rather than mocks and
> specifications? Perhaps both?
>
> I suppose that the mocks on the collaboration-test end of the tests
> show that the class under test is not breaking the contract, and that
> would be important.
If A calls B.foo(x,y,z) and I want to check that A gets x,y,z right,
then I write collaboration tests that show how A chooses
characteristic values for x,y,z. For example, which inputs to A or
which system states cause A to choose x=12 instead of x=15 or x=20?
If B.foo(x,y,z) rejects x<0, y<10, z>50, then I write contract tests
on B.foo() for the boundary cases.
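A minimal sketch of that split (A, B, and the rules below are invented for illustration): the collaboration test uses a mock to show which arguments A chooses, and the contract test checks B's boundary cases directly.

```python
from unittest.mock import Mock


class B:
    """The supplier; its contract rejects out-of-range arguments."""
    def foo(self, x, y, z):
        if x < 0 or y < 10 or z > 50:
            raise ValueError("out of contract")
        return x + y + z


class A:
    """The client; chooses the arguments it passes to B."""
    def __init__(self, b):
        self._b = b

    def run(self, premium_customer):
        # Which inputs cause A to choose x=20 rather than x=12?
        x = 20 if premium_customer else 12
        return self._b.foo(x, 10, 50)


def collaboration_test_choice():
    """Shows how A chooses characteristic values for x, y, z."""
    b = Mock()
    A(b).run(premium_customer=True)
    b.foo.assert_called_once_with(20, 10, 50)
    return True


def contract_test_rejects_negative_x():
    """Checks B.foo()'s boundary behaviour directly."""
    try:
        B().foo(-1, 10, 50)
    except ValueError:
        return True
    return False
```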
> What if the interface itself, rather than being a true interface, were
> an abstract class? The base class functions validated function
> parameters with contract asserts. Classes that implemented the
> interface should call the base class first to validate the parameters,
> then implement its own functionality. That way, wherever you had
> stubs or mocks, they would also inherit this contract, and would throw
> whenever an offending client broke the contract.
Every abstract class is the union of a concrete class and an
interface. I often split abstract classes up into their two pieces,
then proceed as normal. The abstract portion of an abstract class
provides a policy that the concrete part uses, so why not separate
them?
In the case you cite, if the concrete class validates the parameters,
that's easy to test. Then the interface can assume that the parameters
are valid. I might add a comment to the interface saying, "If you
connect me to Foo, then you get parameter validation free. No need to
program defensively in the implementation." I might even add an empty
contract test saying "Don't validate parameters in the implementation;
use Foo instead." Hamcrest uses a method for this in their Matcher
interface as a reminder not to implement Matcher directly, but rather
to subclass BaseMatcher.
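A sketch of that split, with invented names: instead of an abstract base class that validates, a concrete wrapper does the validation once and delegates to any implementation of the interface, which is then free to assume valid parameters.

```python
class Account:
    """The pure interface part."""
    def withdraw(self, amount):
        raise NotImplementedError


class ValidatingAccount(Account):
    """The concrete part, split out: validates the contract once,
    then delegates to any Account implementation."""
    def __init__(self, inner):
        self._inner = inner

    def withdraw(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self._inner.withdraw(amount)


class SimpleAccount(Account):
    """An implementation free to assume valid parameters."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        self.balance -= amount
        return self.balance
```

Wrapping SimpleAccount in ValidatingAccount gives every implementation, including test doubles, the same validation without tying it to an inheritance hierarchy.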
> The up-side to this I can see is that you only need to specify the
> contract for the interface in one place, in the definition for the
> interface itself and you can do it in code as well.
>
> The down-side is that collaboration tests will fail when they break
> the contract, even if that is not specifically the action you are
> testing.
>
> So far I think the up-side wins.
Perhaps it does, but I wonder aloud whether separating the concrete
bit from the abstract bit wins even more.
> When you say 'If a method only delegates, then I don't test it: it's
> too simple to break', do you mean the following:
>
> i) You dont bother with collaboration test 1.a, i.e you don't bother
> with
> given(facts).when(conditions).thenAssert(c.orders(s).toCarryOut(action))
> ii) You do bother with contract test 1.a, i.e. you do bother with
> s.canObeyOrderToCarryOut(action)
I mean specifically this:
class A:
    def __init__(self, peer):
        self.peer = peer

    def foo(self, x, y, z):
        self.peer.foo(x, y, z)
I don't worry about testing A.foo(), but I almost always test
peer.foo(). If A.foo() breaks, it's peer.foo()'s fault.
If peer is an interface, then I /might/ write the corresponding
collaboration test for A using peer, and there would almost certainly
be a contract test for peer.foo().
Does that help?