TDD, where did it all go wrong?


philip schwarz

Jul 28, 2013, 2:35:53 PM
to growing-object-o...@googlegroups.com
Hello everyone,

I found the following NDCOslo 2013 talk by Ian Cooper very interesting: 'TDD, where did it all go wrong?' http://vimeo.com/68375232

What do you think?
Do you find value in some of his ideas?
Which ones do you disagree with?

Philip

philip schwarz

Jul 28, 2013, 2:38:55 PM
to growing-object-o...@googlegroups.com

philip schwarz

Jul 28, 2013, 2:57:10 PM
to growing-object-o...@googlegroups.com
Cooper says a couple of problems he sees in test suites are that dozens of tests break when implementation details change (didn't refactoring promise change without breaking tests?), and that there is much more test code than implementation code (do we really need to write all these tests? Couldn't we write fewer tests? Couldn't we go faster?).

He reckons these problems arise when TDD goes wrong in two ways:

1) people write tests against operations and classes rather than against behaviours
2) people couple implementation details to tests because they don't understand the refactoring step of the red-green-refactor cycle correctly

He says that if people address these two things, they'll find they have smaller test suites that are much more self-explanatory and much less painful to own.

He calls this TDD rebooted. He says: 

"What I suggest we do collectively is go back to the beginning and look at what Kent was actually talking about TDD...so try to unlearn some of those bad habits, and start again and look at the core practices again if we can figure out what it is we have been doing wrong."

I pointed Kent Beck to the video and he tweeted back the following: "interesting to hear what someone got of what I said"

Philip


philip schwarz

Jul 28, 2013, 3:05:52 PM
to growing-object-o...@googlegroups.com
Here are Cooper's words for a slide called 'What is a Unit Test?':

"Kent used the word unit test to mean something entirely different from anybody else in the industry. I think he may admit that was a mistake.

Kent only means one thing and that is a test that runs in isolation of other tests.

A lot of the bad advice about TDD has come from believing that it means testing a module in isolation from other modules, so that you essentially can only have one class under test and you have to have everything else mocked out.

That's bad, wrong type of advice, please don't follow that.

It's just that the test, when it runs, shouldn't have any side effects, which means that it shouldn't impact other tests in the suite. It is the test that is isolated, not the thing under test. The classical unit test model would be that the thing under test is isolated, but from Kent's perspective, it's the test itself that is isolated.

So it is an important distinction. It explains at least quite a lot of bad, wrong kind of thinking with mocks, where everyone is testing every single collaborator of this class as a mock, and those mocks essentially dig into implementation details. Your tests become what I call overspecified: they know far too much about your implementation, they break as soon as you change implementation, all those mocks break, OK, DON'T DO THAT. The only things you should be mocking, essentially, are things that prevent your test from being isolated, not your class under test from being isolated."
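
To make the distinction concrete, here is a minimal sketch in Java (JUnit and Mockito); this is my illustration, not Cooper's code, and OrderPricer/TaxRule are invented names. The first test isolates the class: it mocks the internal collaborator and verifies the interaction, so it breaks as soon as that collaborator is renamed or inlined. The second is isolated only in Beck's sense: it shares no state with other tests, uses the real collaborator, and asserts just the observable behaviour.

import org.junit.Test;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class OrderPricerTest {

    // Tiny production sketch, invented for illustration.
    interface TaxRule { double applyTo(double net); }

    static class StandardTaxRule implements TaxRule {
        public double applyTo(double net) { return net * 1.2; }
    }

    static class OrderPricer {
        private final TaxRule taxRule;
        OrderPricer(TaxRule taxRule) { this.taxRule = taxRule; }
        double price(double net) { return taxRule.applyTo(net); }
    }

    // Overspecified: the internal collaborator is mocked and the interaction is
    // verified, so the test knows *how* the price is computed. Inline or rename
    // TaxRule during a refactoring and this test breaks.
    @Test
    public void overspecifiedTestIsolatesTheClass() {
        TaxRule taxRule = mock(TaxRule.class);
        when(taxRule.applyTo(100.0)).thenReturn(120.0);

        assertEquals(120.0, new OrderPricer(taxRule).price(100.0), 0.001);
        verify(taxRule).applyTo(100.0);
    }

    // Isolated in Beck's sense only: it shares no state with other tests, but it
    // uses the real collaborator and asserts just the observable behaviour.
    @Test
    public void isolatedTestDoesNotIsolateTheClass() {
        assertEquals(120.0, new OrderPricer(new StandardTaxRule()).price(100.0), 0.001);
    }
}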

Philip


philip schwarz

Jul 28, 2013, 3:27:25 PM
to growing-object-o...@googlegroups.com
Cooper makes one of his key points when he explains 'when' (in the red-green-refactor cycle) we make the code clean.

Here are his words (my emphases and ellipses):

"...given that now I am green, and I have some terrible hacky implementation which I would be publicly embarassed by if anyone were to actually see it, but, it works, when do I make clean code?

and the answer is that the refactoring step is when you basically do clean code. It's when, if you want to do design patterns... that is when you actually think about implementing them. One of the reasons you wait is because, you know, until you have solved that problem it would be easy to be distracted and believe that the design pattern was the solution to your problem, when it is the behaviour that you are trying to solve, before you really know that. Once you are green, you know what the solution is; now you can tell whether or not, genuinely, a design pattern is a more appropriate way of presenting that to make maintainable software. This is when you go back and you look at your SOLID principles and say: which of those am I violating? How do I refactor my code so that I am observing the SOLID principles correctly? That is when you remove duplication, it is basically when you go and look at Martin Fowler's code smells (feature envy, inappropriate intimacy, shotgun surgery) and you fix all that.

BUT HERE IS THE KEY POINT: YOU DON'T WRITE NEW UNIT TESTS AT THIS STAGE

Let's say for example I have several lines of code, and there are definite violations of the SRP, and I think, you know what, I need to break out a class with a couple of methods from this basically large transaction script, 30 lines of code, to hold that responsibility separately from this class. That is actually a safe refactoring: you can do a couple of extract methods... very straightforward stuff, I NEED NO NEW TESTS. I haven't added any code that isn't covered by an existing test. The test that is currently green against my public API is actually protecting this code and covering it; if I run a code coverage tool... I ought to find that I am still covered across my codebase. IF I ADD NEW TESTS AT THIS POINT I AM COUPLING MY TESTS AND MY IMPLEMENTATION DETAILS, AND WHEN I CHANGE MY IMPLEMENTATION DETAILS IN FUTURE THOSE TESTS WILL BREAK. THESE SHOULD USUALLY BE INTERNALS OR PRIVATES BECAUSE THEY ARE EXPRESSING HOW WE IMPLEMENT THE API, THEY ARE NOT THE API THAT IS ORIGINALLY UNDER TEST.

SO THIS IS WHY YOU ARE WRITING LESS TESTS. Because what happens at this point is people say, oh, you know what, I need a new class here to basically deal with these 30 lines of code, I'll go and write some tests for that class. DON'T DO THAT, BECAUSE THAT IS WRITING TESTS AGAINST IMPLEMENTATION DETAILS. THE API IS YOUR CONTRACT, THE UNCHANGING THING, AND THAT IS WHAT IS COVERED IN YOUR TEST. THESE ARE IMPLEMENTATION DETAILS AND SUBJECT TO CHANGE. You might refactor it and say this is a perfect example of how to use the composite pattern. Someone may come along later and say, you know what, composite was really a bad choice, let's change the way we do that. It's an implementation detail, that is the whole point of refactoring. You can change that implementation, make it clearer, make it better, but the API remains the same. As soon as you write tests at this point, against those refactoring opportunities, you bind your implementation details to your tests. COUPLING IS THE BIGGEST PROBLEM IN SOFTWARE DEVELOPMENT. IT IS YOUR ENEMY. PEOPLE TALK ABOUT DRY. FORGET DRY. COUPLING WILL KILL YOU. DRY IS REALLY A SPECIAL CASE OF COUPLING IN A WAY. COUPLING IS WHAT KILLS YOU, SO DO NOT COUPLE YOUR TESTS TO YOUR IMPLEMENTATION DETAILS.

...as soon as we are going to basically couple things together, that's when we are going to break. It's going to mean you'll write less tests, it's going to mean you move faster because you are going quickly to green. You've got the code working and you are now refactoring, and you can keep refactoring endlessly... keep moving on, make the code cleaner, but because you are writing a few less tests you are making better forward progress and they don't seem to slow you down so much. ...One of the guys I worked with, he was a really smart developer, he refused to write tests; instinctively, when he was writing these tests years ago, he was writing them against behaviour in the outer API and saying to me, why do I need to test any more than that? This tests the behaviour, and covers it. And he was right, and I was wrong, because I fought him basically and said, no no no no, WE ARE TESTING ALL THE METHODS ON ALL OUR CLASSES, AND THAT IS REALLY BAD, IT KILLS YOU. And I say that as someone who has made all these mistakes, right, I wrote a codebase that was so heavily mocked that any time we changed anything we had to rewrite about 50 mocks, crazy, yeah, so: test behaviours, not implementations.

DON'T BAKE YOUR IMPLEMENTATION DETAILS INTO YOUR TESTS, THAT GIVES YOU THE PROMISE OF REFACTORING IMPLEMENTATIONS WITHOUT BREAKING TESTS, AND THAT IS WHAT WE ARE LOOKING FOR IN TDD."
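
As a sketch of what that looks like in code (my example, not Cooper's; Checkout and DiscountPolicy are invented names), the single behaviour test below stays green across the extract-class refactoring, and no new test is written for the extracted class:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CheckoutTest {

    // The only test: it exercises the public API (the behaviour), not the shape
    // of the code behind it.
    @Test
    public void repeatCustomersGetTenPercentOff() {
        assertEquals(90.0, new Checkout().totalFor(100.0, true), 0.001);
    }

    // First green version, everything inline in one method:
    //
    //   static class Checkout {
    //       double totalFor(double net, boolean repeatCustomer) {
    //           return repeatCustomer ? net * 0.9 : net;
    //       }
    //   }
    //
    // After the refactor step the responsibility is extracted into a collaborator.
    // No test is written for DiscountPolicy; the behaviour test above still covers
    // every line, and DiscountPolicy can be changed or removed later without
    // touching the tests.
    static class DiscountPolicy {
        double apply(double net, boolean repeatCustomer) {
            return repeatCustomer ? net * 0.9 : net;
        }
    }

    static class Checkout {
        private final DiscountPolicy policy = new DiscountPolicy();
        double totalFor(double net, boolean repeatCustomer) {
            return policy.apply(net, repeatCustomer);
        }
    }
}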

Philip




philip schwarz

Jul 28, 2013, 3:38:29 PM
to growing-object-o...@googlegroups.com
It is worth adding that at the point where he says:

     YOU DON'T WRITE NEW UNIT TESTS AT THIS STAGE (i.e. once green)

the slide says:

You do not write new tests here
  • you are not introducing public classes
  • it is likely if you feel the need, you need collaborators that fulfill a role

Philip

Philip Schwarz

Jul 28, 2013, 5:31:39 PM
to growing-object-o...@googlegroups.com
Cooper explains the Ice Cream Cone antipattern (http://watirmelon.com/2012/01/31/introducing-the-software-testing-ice-cream-cone/) and advocates going for the Test Pyramid (http://martinfowler.com/bliki/TestPyramid.html).

He then talks about the Ports and Adapters Architecture (sometimes called Hexagonal Architecture), and uses it to explain where the different types of testing are done:

"We unit test at the port boundary. Because that is our API. That is our interface to our code. That is where use cases, stories, scenarios are expressed in a hexagonal architecture. So this is where the behaviour of our system is expressed. It is our contract with the world. And it is what we should try and make consistent because all these things are dependent on it. It's outgoing and ingoing btw, there are explicit contracts (we are serving things at the front) and implicit contracts (e.g. database), and our tests can be thought of as just another adaptor driving our code...Inside here, are implementation details. We are not testing those because those may change, they are mutable, it's only these ones at the perimeter we care about.

Integration tests happen between our ports and our adapters. They figure out whether or not we are essentially expressing ourselves correctly to the outside world. In most cases these adaptors will be third-party components and we don't want to test those. One of the key rules is: don't test things you basically don't own. We don't want to test, for example, our DB technology, like Hibernate; we don't want to test our REST API framework... What we actually want to do is test that we use them correctly, so it is an integration test in that we are checking the integration between our port and an adapter.

System tests are what you have on the outside. Small in number, a few of them, essentially we are saying just check everything works. And that is where you find the testing pyramid works, because the bulk of your tests are these (unit tests), testing my application at the port layer, smaller number of my tests are these (integration tests), checking the integration with these adaptors, and the top layer (system tests) where I am actually checking that, yes I have put it all together and it works and you can get hold of it."
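
A rough sketch of that shape in Java (my illustration; the port and service names are invented): the unit test drives the incoming port and replaces the outgoing port with a simple double, nothing inside the hexagon is tested directly, and the integration and system tests sit outside this file.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PlaceOrdersPortTest {

    // Incoming port: the API through which use cases are expressed.
    interface PlaceOrders { String place(String sku, int quantity); }

    // Outgoing port: an implicit contract the hexagon depends on (e.g. persistence).
    interface OrderStore { void save(String sku, int quantity); }

    // Inside the hexagon: implementation detail, never tested directly.
    static class OrderService implements PlaceOrders {
        private final OrderStore store;
        OrderService(OrderStore store) { this.store = store; }
        public String place(String sku, int quantity) {
            store.save(sku, quantity);
            return "order for " + quantity + " x " + sku;
        }
    }

    // Unit test at the port boundary: the test is just another adapter driving
    // the code. It calls the incoming port and substitutes the outgoing port
    // with a trivial in-memory double.
    @Test
    public void placesAnOrderThroughThePort() {
        OrderStore inMemoryStore = (sku, quantity) -> { /* good enough for this test */ };
        PlaceOrders api = new OrderService(inMemoryStore);

        assertEquals("order for 2 x SKU-1", api.place("SKU-1", 2));
    }

    // The integration test (not shown) would exercise a real OrderStore adapter,
    // e.g. one backed by Hibernate, to check that we drive the third-party code
    // correctly; a few system tests on top would check the whole thing wired up.
}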

Philip



Philip Schwarz

Jul 28, 2013, 5:36:05 PM
to growing-object-o...@googlegroups.com
Cooper asks: where should we use mocks?

He answers referring to the hexagonal architecture again:

"Generally, in this kind of model, what you are mocking is not these internals. Don't mock internals. Do not mock your implementation details, because then you couple your tests to your implementation details through your mocks. You have gone to all the trouble of testing basically here. Don't undermine that by mocking out your internals. Mock other ports and mock other publics, so things at the perimeter, where effectively I am mocking, I am dealing with this particular part of my API. I am not dealing with this part of my API. This is covered by other tests, so I'll mock that because it is not being covered by this test. Don't ever mock these internals. They are behind the curtain: don't deal with them. And don't mock adaptors either. Mock your port to the adapter. Don't mock things you don't own. Mock your port. Your outgoing port to the adapter."

Philip

Ben Biddington

Jul 28, 2013, 5:48:29 PM
to growing-object-o...@googlegroups.com

"And don't mock adaptors either. Mock your port to the adapter."

This is slightly how serialseb's article (one of your earlier links) strikes me, when he recommends using (as I interpreted it) a faster adapter - why not ignore the adapter entirely?

I like the idea of snipping off a program at its ports and acceptance testing that - ignoring Bob Martin's "irrelevant appendices".

Dale Emery

Jul 29, 2013, 2:50:48 PM
to growing-object-o...@googlegroups.com
Hi all,

> • My Unit Testing Epiphany Continued - http://www.stevefenton.co.uk/Content/Blog/Date/201305/Blog/My-Unit-Testing-Epiphany-Continued/

In that blog post, Nick Lanng is quoted as saying:

> "The thing that convinced me is per-class unit tests mock all dependencies. If the behaviour of that dependency changes then everything that mocks it will be incorrect but still pass the tests.”

I see an assumption behind Nick’s concern: That isolated/mocked tests are the /only/ tests we’re doing.

In general, the thing I most want unit tests to do when they fail is to point me as quickly and directly as possible to the fault in my code. For that reason, I highly value isolation in my unit tests.

But when I test only in isolation, there’s a chance that my fake collaborators don’t behave the way real collaborators do. That’s the danger Nick refers to. So that’s why I want larger-scope tests, to help me determine whether the collaborating classes agree about their responsibilities, and whether the collaboration itself satisfies /its/ responsibilities.

If one of those larger-scope tests fails, that tells me that there is some disagreement among the collaborators, and therefore my fine-scope tests are either incomplete or incorrect.

By testing at multiple scopes, I get the best of both worlds: Fast identification of faults, and reasonable confidence that the collaborators mutually agree about their responsibilities. Of course, this comes at the cost of maintaining tests at multiple levels.
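
One way to picture that agreement check (sketched here purely as an illustration, with invented names) is a small shared contract of expectations run against both the real collaborator and the fake that the isolated tests use:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// The same expectations run against the real collaborator and against the fake
// used by the isolated tests. If the fake's behaviour drifts away from the real
// thing, this is where it shows up.
public abstract class RateProviderContract {

    interface RateProvider { double rateFor(String currency); }

    protected abstract RateProvider createProvider();

    @Test
    public void unknownCurrenciesAreReportedAtParity() {
        assertEquals(1.0, createProvider().rateFor("XXX"), 0.001);
    }
}

// class RealRateProviderTest extends RateProviderContract { ... }
// class FakeRateProviderTest extends RateProviderContract { ... }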

Here’s an old blog post: http://cwd.dhemery.com/2005/04/isolation/

Dale

--
Dale Emery
Consultant to software teams and leaders
http://dhemery.com

Attila Magyar

Jul 29, 2013, 6:01:57 PM
to growing-object-o...@googlegroups.com
I don't really understand the concept here and the term "implementation detail". If I have an object A which communicates with an object B, and B can be replaced with other objects either in the code or in the test, then B is not an implementation detail but a collaborator object. When I'm testing the outgoing messages from A, I'm not testing the implementation details of A but the communication protocol between A and B. But the nice thing about OO is that everything is an object, including the whole application. So we can look at the problem on a large scale, and from the application point of view, B will be an implementation detail. If I'm not allowed to mock B then I end up having end-to-end tests only. And this doesn't solve the problem, because the application may communicate with other applications, forming a system of applications. From the system point of view my entire app will be an implementation detail.


Rick Pingry

Jul 29, 2013, 7:09:19 PM
to growing-object-o...@googlegroups.com
It seems to me that the word "Behavior" is the word in question here.  We can write tests that test the Behavior of any given boundary, and those tests would be coupled to implementation details of the Boundary just above us.  So, where is the best boundary to test across?

I have been struggling with this a LOT.  Right now I have class level isolation tests, AND I have these BDD tests at my domain boundary that are much easier to understand and are better at catching regression errors, but don't help so much with debugging or design.

I honestly am not worried about the debugging part.  I actually like working with a debugger better than I like deducing what is going on in my class isolation tests anyway (perhaps that is a sign my tests suck, but I am still trying :P  I just don't like reading my own tests).

I like what is being proposed in the video in a lot of ways.  It resonates with me.  There have been MANY times I thought my class isolation tests were wasting my time to write and to maintain, but I have been afraid to throw them away.  Maybe I should?

My biggest question with all of this is that proponents of TDD have always said that the tests are about design, not catching errors.  For something to be about design, at whatever level, it seems like it MUST be coupled to that design, right?  From what I have heard and read here, design is not driven by the tests (in the RED or GREEN parts), but rather in the REFACTOR part, which has no tests at all.  I have made a nice little abstraction layer on top of my app that I do all of my domain testing from.  I can refactor to my heart's content without breaking my tests.  So how have the tests helped to drive my design?  Is it that, by having my code be TESTABLE, the very fact that I CAN write an abstraction layer is what that is about?  Or is it that I look for places to "drop in a lower gear" and test implementation details to help me drive design at that resolution ONLY when I reach a sticky spot?

Mike Stockdale

Jul 29, 2013, 10:30:37 PM
to growing-object-o...@googlegroups.com
Didn't have time to watch the video, but I agree with the main points you summarized.  People always complain "TDD hurts when I do this" to which I reply "Then don't do that"

--
Cheers,
Mike Stockdale

fitSharp
Syterra Software Inc.

Torbjörn Kalin

Jul 30, 2013, 2:39:44 AM
to growing-object-o...@googlegroups.com
@Attila:

Regarding your example with A and B, here's how I think about it: how well does your test map to the observable behaviour of the system? Can you reason about your test with someone from the business side? Does she understand what A and B are? Does she agree with how A and B communicate? If yes, then you are testing the behaviour of the system. If no, it's an implementation detail.

The problem I think that he describes, and that I have encountered many times, is when you want to make a change to how A and B interacts and you have a test that stops you. It could be that a part of B should be in A or that A should communicate with B through C. You want to redefine the boundary, and therefore you need to change the test and the code at the same time.

I think there's a strong correlation between this scenario and how well the test and the mocks are (not) understood by the business side.

Attila Magyar

Jul 30, 2013, 6:12:28 PM
to growing-object-o...@googlegroups.com
That's interesting, thanks for your response. Although I find it difficult to map those object collaborations to some business behaviour, because they often play only a small part in the overall behaviour. And those objects were brought into existence by certain design considerations, not directly by business requirements.
I use mocks as a design tool a lot in a "tell don't ask" oriented application, and I try to pay attention not to couple the current implementation to the test. I think there is a common misunderstanding that mock-based tests are more fragile than tests without mocks. Coupling tests to the implementation makes tests fragile, but this can be done either with mocks or without mocks.
When I have a test for an object (A), I can freely change its implementation, but not the implementation of an object (B) that uses my object (A), because it may change the protocol (B-A). So I may choose to test a cluster of objects (which includes both A and B and maybe others), and I am free to refactor everything inside the cluster. But if B is used by other clusters as well, then I have brittle tests, because when I change B multiple tests will break.

Torbjörn Kalin

Jul 31, 2013, 2:48:00 AM
to growing-object-o...@googlegroups.com
@Attila:

I realise that I used the word "mock" in my previous answer when I actually meant "boundary". A mock is only one type of boundary, and you can get into the same trouble whether you are using mocks or not. A test without mocks also interacts with the system through some kind of boundary, and if that boundary wants to change you have problems. So I agree with you that mocks do not make tests more fragile (nor do they make them more robust).

Also, let me refine my previous answer. You can probably succeed even if your tests interact with boundaries not understood by the business, and you can probably end up with a mess even if your tests only interact with boundaries understood by the business. But I think finding good boundaries is vital for having robust tests, and the business side can help you discover such boundaries.

You say that object collaborations are brought into existence by design considerations. When does this happen? Is it something you decide when writing your test? Or is it something you discover through refactorings?


Attila Magyar

Aug 1, 2013, 6:40:41 PM
to growing-object-o...@googlegroups.com
On Wed, Jul 31, 2013 at 8:48 AM, Torbjörn Kalin <torb...@kalin-urena.com> wrote:
@Attila:

I realise that I used the word "mock" in my previous answer when I actually meant "boundary". A mock is only one type of boundary, and you can get into the same trouble whether you are using mocks or not. A test without mocks also interacts with the system through some kind of boundary, and if that boundary wants to change you have problems. So I agree with you that mocks do not make tests more fragile (nor do they make them more robust).

Also, let me refine my previous answer. You can probably succeed even if your tests interact with boundaries not understood by the business, and you can probably end up with a mess even if your tests only interact with boundaries understood by the business. But I think finding good boundaries are vital for having robust tests, and the business side can help you discover such boundaries.

I'm wondering where you would put those boundaries in a hexagonal application (if it matters)? In the inner hexagon, or are they related to the ports or adapters from the outer hexagon? And how well should the business understand them?

You say that object collaborations are brought into existence by design considerations. When does this happen? Is it something you decide when writing your test? Or is it something you discover through refactorings?

Mostly while writing tests, through interface discovery. Sometimes through refactoring.

Torbjörn Kalin

Aug 2, 2013, 2:59:33 AM
to growing-object-o...@googlegroups.com
On Fri, Aug 2, 2013 at 12:40 AM, Attila Magyar <m.ma...@gmail.com> wrote:

I'm wondering where would you put those boundaries in a hexagonal application (if its matter)? In the inner hexagon or are those related to the ports or adapters from the outer hexagon? 


So I will describe how I do things, well aware that it's not the only way. But it works for me.

The hexagonal architecture view is very helpful. To start with, I typically let my test code act as the adapters: For incoming calls the tests use the same interface as the adapter would, and for outgoing calls I replace the adapter with a test double.

(Sometimes I test at the port level or even include externals, such as user interface and database, to start with. Then I discover that it's not practically possible to do so, and I refactor my tests so that they act as the adapters, as described above. A bit naive, perhaps, but by testing something larger I have discovered the adapters through refactorings rather than designing them up front.)

After a while, I might discover that, in some cases, testing at the application boundary (adapters) is too cumbersome. By then I have, through refactorings, discovered modules separated by inner boundaries. I can now refactor my tests so that they interact with these boundaries instead.

 
And how well the business should understand them? 


The boundaries are typically something that the business understands. The tests interact with modules whose names are part of the domain language. If I end up testing an implementation detail, it's probably some kind of algorithm that has no dependencies.

My test doubles are mostly adapters, ports (for testing outgoing adapters) or the system (for testing incoming adapters). It could also be a module that is being used by another module, but that's rare.

 
You say that object collaborations are brought into existence by design considerations. When does this happen? Is it something you decide when writing your test? Or is it something you discover through refactorings?

Mostly during writing tests, through the interface discovery. Sometimes through refactoring. 


So you (mostly) design these boundaries, rather than discover them through refactoring. (And if I'm not mistaken, most people on this list do it that way.)

For me, this doesn't work. I really tried to do it this way some five or six years ago, after reading the "Mock roles, not objects" paper, but I ended up choosing the wrong collaborators (boundaries). And when I discovered that they were suboptimal and I wanted to change the design, I couldn't, because the tests didn't allow me to.

But I guess it is, just like everything, a skill you have to learn.


Steve Freeman

Aug 2, 2013, 6:04:50 AM
to growing-object-o...@googlegroups.com
On 2 Aug 2013, at 07:59, Torbjörn Kalin wrote:
>> You say that object collaborations are brought into existence by design
>>> considerations. When does this happen? Is it something you decide when
>>> writing your test? Or is it something you discover through refactorings?
>>
>> Mostly during writing tests, through the interface discovery. Sometimes
>> through refactoring.
>>
> So you (mostly) design these boundaries, rather than discover them through
> refactoring. (And if I'm not mistaken, most people on this list does it
> that way.)
>
> For me, this doesn't work. I really tried to do it this way some five or
> six years ago, after reading the "Mock roles, not objects" paper, but I
> ended up choosing the wrong collaborators (boundaries). And when I
> discovered that they were suboptimal and I wanted to change the design, I
> couldn't, because the tests didn't allow me to.
>
> But I guess it is, just like everything, a skill you have to learn.

Certainly was for me. It's like the thing about learning to start with features rather than parts of the implementation.

S

Steve Freeman

Winner of the Agile Alliance Gordon Pask award 2006
Book: http://www.growing-object-oriented-software.com

+44 797 179 4105
Twitter: @sf105
Higher Order Logic Limited
Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
Company registered in England & Wales. Number 7522677



Douglas Waugh

Aug 2, 2013, 8:41:14 AM
to growing-object-o...@googlegroups.com
I spoke to Ian after he gave a shortened version of the talk at the London .NET User Group about whether he felt his approach contradicted the advice given by Steve and Nat, and he didn't think it did.  I suppose what you really lose by refactoring underneath one top-level test is the information the tests are giving you about the design of your classes, and the communication protocol between them (the whole listen-to-your-tests thing).  It seemed like Ian was saying that if you need to write tests to drive out the design (in the 'gears' analogy that is like driving in first or second gear), go ahead.  However, when you're satisfied with the design perhaps you should delete those lower-level tests, leaving only the port-level tests in place.  The other part of the 'gears' analogy is that you'll go faster if you spend more time driving in the higher gears.

From personal experience I can say that I feel incredibly productive when I'm refactoring the way that Ian describes.  Although one thing I don't understand is how I can inject newly created collaborators without changing the tests, but that's probably a question I should address to Ian rather than this group.



Steve Freeman

Aug 2, 2013, 10:41:57 AM
to growing-object-o...@googlegroups.com
Unfortunately, I missed his talk at 7digital. In practice, I don't write many interaction tests for single classes these days, but for small clusters of a few objects. This became clearer to me while working out the exercise in the book.

This is one of those things where I'd really like to spend a day working with someone to see what they actually do (and vice-versa). I'm thinking particularly of someone else who has a very strong position in public which is softer when you talk through the detail.

Apart from the feedback argument, one claim about reasonable lower-level tests is that building on reliable layers allows us to avoid the combinatorial explosion. To me, deleting lower-level tests without some compensation in place risks false confidence. But this is all speculative...

Incidentally, I don't object to deleting/rewriting tests in principle (just like code) as long as it's done with purpose. Just deleting to go faster might mean losing feedback.

S


On 2 Aug 2013, at 13:41, Douglas Waugh wrote:
> I spoke to Ian after he gave a shortened version of the talk at the London
> .NET User Group about whether he felt his approach contradicted the advice
> given by Steve and Nat and he didn't think it did. I suppose what you
> really lose by refactoring underneath one top-level test is the information
> the tests are giving you about the design of your classes, and the
> communication protocol between them (the whole listen to your tests thing).
> It seemed like Ian was saying if you need to write tests to drive out the
> design (in the 'gears' analogy that is like driving in first or second
> gear), go ahead. However, when you're satisfied with the design perhaps
> you should delete those lower level tests, leaving only the port-level
> tests in place. The other part of the 'gear's analogy is that you'll go
> faster if you spend more time driving in the higher gears.
>
> From personal experience I can say that I feel incredibly productive when
> I'm refactoring the way the that Ian describes. Although one thing I don't
> understand is how I can inject in newly created collaborators without
> changing the tests, but that's probably a question I should address to Ian
> rather than this group.

Torbjörn Kalin

Aug 2, 2013, 11:10:37 AM
to growing-object-o...@googlegroups.com
On Fri, Aug 2, 2013 at 12:04 PM, Steve Freeman <st...@m3p.co.uk> wrote:

Certainly was for me. It's like the thing about learning to start with features rather than parts of the implementation.


Not sure if I agree with the comparison. Starting with features rather than the implementation is The Right Way, as close to a best practice as you could come. Designing the collaborators up front is only one way among many: a good practice.

Or have I misunderstood you?

/T
 


Steve Freeman

Aug 2, 2013, 11:22:22 AM
to growing-object-o...@googlegroups.com
well finding that the tests are a block to making progress is probably the wrong way :)

I'm not sure what you mean about designing collaborators up front, are you talking about interface discovery?

S

On 2 Aug 2013, at 16:10, Torbjörn Kalin wrote:
> On Fri, Aug 2, 2013 at 12:04 PM, Steve Freeman <st...@m3p.co.uk> wrote:
>> Certainly was for me. It's like the thing about learning to start with
>> features rather than parts of the implementation.
>>
> Not sure if I agree with the comparison. Starting with features rather than
> the implementation is The Right Way, as close to a best practice you could
> come. Designing the collaborators up front is only one way among many: a
> good practice.
>
> Or have I misunderstood you?
>

James Richardson

Aug 2, 2013, 11:58:50 AM
to growing-object-o...@googlegroups.com
TDD, where did it all go wrong?

Well, for me it didn't. Maybe I'm doing it wrong?

James



Torbjörn Kalin

Aug 2, 2013, 12:54:04 PM
to growing-object-o...@googlegroups.com
Design up front was a bit loaded, I guess :)

Yes, I mean interface discovery: deciding the collaborators in the test phase as opposed to, for instance, discovering them in the refactoring phase.

/T



Esko Luontola

Aug 2, 2013, 5:51:29 PM
to growing-object-o...@googlegroups.com
On Tuesday, 30 July 2013 02:09:19 UTC+3, Rick Pingry wrote:
It seems to me that the word "Behavior" is the word in question here.

When I started practicing TDD (in early 2007), the original BDD article http://dannorth.net/introducing-bdd/ influenced me very much. (The article was quite new back then and I heard about BDD from Uncle Bob at the Object Mentor blog.) The biggest point I got out of that article was "Test method names should be sentences". Also, this JDave example was a standard for me to look up to: http://jdave.org/examples.html

Then I focused for about 6 months on naming my tests, until I felt I had reached the standard set by those examples. I regularly used TestDox to print the names of a test class I had written, and then refactored the test names to produce more readable output. It took those 6 months to find a style of organizing the tests that I was happy with. (I then codified that style as https://github.com/orfjackal/tdd-tetris-tutorial and later I also wrote http://blog.orfjackal.net/2010/02/three-styles-of-naming-tests.html)

I think that, thanks to that one article from Dan North, I've managed to avoid all the problems that Ian Cooper mentioned in his presentation. :)
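
For anyone who hasn't seen that style, here is a small invented example in the spirit of those JDave examples: the class name plus each test method name reads as a sentence, so a TestDox-style listing of the class becomes a little specification.

import java.util.ArrayDeque;
import java.util.Deque;
import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// A TestDox-style report of this class reads:
//
//   A new stack
//     - is empty
//     - is no longer empty after a push
//
public class ANewStackTest {

    private final Deque<String> stack = new ArrayDeque<>();

    @Test
    public void isEmpty() {
        assertTrue(stack.isEmpty());
    }

    @Test
    public void isNoLongerEmptyAfterAPush() {
        stack.push("x");
        assertFalse(stack.isEmpty());
    }
}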

Ben Biddington

Aug 2, 2013, 5:56:54 PM
to growing-object-o...@googlegroups.com

I feel a bit like that too. Some examples are perhaps what I need.

Ben Biddington

Aug 2, 2013, 6:06:09 PM
to growing-object-o...@googlegroups.com

Are those clusters more likely the product of refactoring, then?

Steve Freeman

Aug 3, 2013, 6:16:23 AM
to growing-object-o...@googlegroups.com
Sometimes... I'm not sure I have a single path.

S

On 2 Aug 2013, at 23:06, Ben Biddington wrote:
> Are those clusters more likely the product of refactoring, then?


> On 3/08/2013 2:41 AM, "Steve Freeman" <st...@m3p.co.uk> wrote:
>> Unfortunately, I missed his talk at 7digital. In practice, I don't write
>> many interaction tests for single classes these days, but for small
>> clusters of a few objects. This became clearer to me while working out the
>> exercise in the book.
