It seems like the explosion in combinations comes from the 8 permutations that determine anonymous access, so perhaps that behaviour should be pulled into a separate object. That might be a collaborator, and hence mockable, if it pulls in external resources, or an embedded policy object if it just works with values. For a mocked collaborator, you only have to test its true and false results in the client, and test its implementation separately. Similarly, for a policy object, you can write high level tests for its true and false cases, and then write direct tests that work through its 8 combinations. Alternatively, perhaps the whole anonymous decision should be extracted to a helper object.
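To make that concrete, here is a minimal sketch of the embedded-policy-object idea (Python for brevity; AnonymousAccessPolicy, its three inputs, and the rule itself are all invented for illustration, not the poster's actual domain):

```python
from itertools import product

class AnonymousAccessPolicy:
    """Pure policy object: a value-based decision with no external resources."""
    def allows(self, site_allows_anonymous, page_is_public, user_is_banned):
        # Hypothetical rule standing in for the real one.
        return site_allows_anonymous and page_is_public and not user_is_banned

# Direct tests can walk all 8 permutations of the three booleans...
policy = AnonymousAccessPolicy()
results = {combo: policy.allows(*combo)
           for combo in product([True, False], repeat=3)}

# ...while the client code only needs examples of "policy said yes"
# and "policy said no".
assert results[(True, True, False)] is True
assert results[(False, True, False)] is False
```

The point of the split: the 8-way combinatorics live in one small, directly testable object, and the containing class is tested against only its two answers.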
I prefer not to extract and override (although many people recommend it). Over time, I find it makes it hard to see the boundaries of the class and it's harder for the IDE to provide help. I tend to keep methods either public or private, and limit the use of protected. I prefer to write nested helper classes that implement these decisions. Normally, they're only used by the containing class, and I can write direct tests for them without leaking structure.
I get your point about deferring everything. My style is to have more smaller objects than many people are used to. That said, a lot of them are supporting classes that are local to a containing class, which avoids some of the mock-tastic codebases I've seen (and have sometimes been responsible for).
Does that help at all?
A public example would be a good idea.
S.
On 26 Aug 2011, at 00:26, Rick Pingry wrote:
> We have a situation that seems painful that we seem to come up with
> constantly. It involves dealing with any kind of boolean logic that
> is even moderately complex. The examples that we read about, even in
> GOOS, seem to have little or no boolean logic to deal with. We are
> not sure if we are doing something wrong, or perhaps if we have a
> problem at a fundamental level.
>
> Please allow me to give our real-world example:
> [...]
> Thanks for any help. It seems that this should be pretty common. Are
> there any KATAs or TDD examples out there that cover this kind of code?
>
Steve Freeman
Winner of the Agile Alliance Gordon Pask award 2006
Book: http://www.growing-object-oriented-software.com
+44 797 179 4105
Twitter: @sf105
Higher Order Logic Limited
Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
Company registered in England & Wales. Number 7522677
I have a strong dislike for lots of local classes unless they are anonymous. My preference is to recognise a new package, make the original class public in that new package, and make all of the supporting classes package visible, top-level classes in that package.
So I tend to get lots of little classes / interfaces organised into lots of little packages.
I don't think that changes the amount of mocking, or how mocking is used.
Regards,
Lance
I am imagining a test that looks like a truth table.
Does anyone have good experience with writing a tabular test like
this in NUnit that, when it fails, shows the failed permutation(s)?
If that were written concisely enough, we could provide all 64
options. I think I do like the extract method mechanism when I can
come up with a good name for the method. I can keep it in the class
if it makes sense as a part of the responsibilities of the class. The
question then would be whether or not to override. I have run into
trouble on some occasions when trying to do this, but I think in all
cases it was where the overridden method changed state or called a
collaborator. I think I would be ok in situations where I am merely
querying for a value.
Thanks again for all the help. I will post the results after I get it
done.
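For what it's worth, the tabular test imagined above can be sketched in plain code before reaching for framework support (NUnit's [TestCase]/[TestCaseSource] attributes do the same job natively). Everything here, including the can_edit rule, is invented for illustration:

```python
# Each row: (inputs..., expected). Collecting failures reports every
# failing row, not just the first, so a red run shows exactly which
# permutations broke.
def can_edit(is_admin, is_owner, is_locked):
    # Hypothetical expression under test.
    return (is_admin or is_owner) and not is_locked

TRUTH_TABLE = [
    # admin, owner, locked, expected
    (True,  True,  True,  False),
    (True,  True,  False, True),
    (True,  False, True,  False),
    (True,  False, False, True),
    (False, True,  True,  False),
    (False, True,  False, True),
    (False, False, True,  False),
    (False, False, False, False),
]

failures = [row for row in TRUTH_TABLE
            if can_edit(*row[:3]) != row[3]]
assert not failures, f"failed permutations: {failures}"
```

The same shape scales to 64 rows; the table, not the test body, grows.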
FitNesse et al.
QuickCheck et al.
Subtext et al.
sincerely.
I probably wouldn't mock the policy object. I'd make sure it was tested properly, and then just exercise a couple of examples of its use at the higher level. Don't forget that these are all just examples of use to help us understand the domain and to give us a degree of confidence. That's why we don't test every possible integer value.
I don't mind many smaller classes (to some limit). Each one describes a chunk of behaviour that I can forget about once it's packaged up. What I /can't/ deal with is behaviour scattered throughout discursive code that is difficult to understand and to factor.
Tools: you /are/ using Resharper, right? I wouldn't even consider working in .Net without it.
> I love that TDD helps me tease out responsibilities. When I listen to
> how hard it is to write the test, I can name a new responsibility and
> move from there. In this case, my gut tells me that TDD is going
> overboard, making so many classes I fear would be harder to maintain,
> not easier. It seems to be artificially breaking up classes not based
> on responsibility, but due to our inability to concisely express tests
> that have so many permutations.
in the end there's some judgement involved and you have to find your own balance. The permutation techniques others have described might be just the job here. I'm just wondering if there's a type missing here that describes these 8 permutations and that is relevant elsewhere. Could it have a meaningful name?
> I also have the situation where my partner and I are the only ones in
> the company even attempting TDD. We are trying to get our processes
> down so we can become advocates. Having several hundred lines of code
> + test code to show what we could have done in a dozen or so is not
> going to sell it for anyone (not even us).
It shouldn't be several hundred. I find I distinguish these days between "outer" types, which have all the official code paraphernalia, and "inner" types, which are more terse and sometimes break a few rules.
> @Alex
> I really like your idea. I know that it breaks the letter of the law
> to have a single assertion per test
see other responses. This is a training-wheel rule, to get people started. It's better interpreted as one concept per test.
S.
I thought you already had :)
> I have a strong dislike for lots of local classes unless they are anonymous. My preference is to recognise a new package, make the original class public in that new package, and make all of the supporting classes package visible, top-level classes in that package.
sometimes I do that. In Java I find myself writing static inner classes that might one day be candidates for promotion. The outer class is like the nested scope that packages should have been.
> So I tend to get lots of little classes / interfaces organised into lots of little packages.
I often find that the interfaces end up in different ("higher") packages than their implementations. They represent a protocol and so are more general.
S
> REALLY? How in the world does all of that extra work (probably 20X as
> much work or more) really help me in this? Does it help me write
> better code?
Not being rude, I promise: it's a 64-row truth table. It takes 10
minutes with a spreadsheet. What's hard about it? Occasionally it
happens that a single boolean expression is complicated and you have
to decide whether to test it.
If you don't test it, then extract the function f that maps n booleans
onto a single boolean, put it behind an interface, then the rest of
the system only cares about two cases: f answers true, f answers
false.
If you want to feel comfortable changing the implementation of f
without changing its observable behavior, then you need to test it at
some point. Test it now or test it later; that's a true technical debt
question.
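A minimal sketch of that move (Python; AccessRule, Handler, and the stubs are hypothetical names):

```python
class AccessRule:
    """The function f, put behind an interface."""
    def allows(self, request) -> bool:
        raise NotImplementedError

class AlwaysAllows(AccessRule):
    def allows(self, request):
        return True

class NeverAllows(AccessRule):
    def allows(self, request):
        return False

class Handler:
    """The rest of the system: it only cares about f's two answers."""
    def __init__(self, rule: AccessRule):
        self.rule = rule
    def respond(self, request):
        return "content" if self.rule.allows(request) else "denied"

# Two client-side cases, regardless of how many booleans feed f.
assert Handler(AlwaysAllows()).respond("req") == "content"
assert Handler(NeverAllows()).respond("req") == "denied"
```

The n-boolean implementation of AccessRule gets its own exhaustive tests, now or later.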
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work :: http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me at http://nta.gs/jbrains
> Without something like parameterized truth tables and resharper, you
> are forced to write 64 little test functions, or break out interfaces
> and mocks all the way down and have 20 tests.
I don't know what Resharper does in this situation, but I would use
the Parameterised Test Case pattern to write one test with 64 cases as
rows in a table. (Worst case, as literal rows in a 2d array.)
> So, JBrains... How does doing it in Excel help me? Is there a testing
> strategy I am not aware of? Are you talking about using FitNesse or
> some acceptance testing? I thought it all needed to be unit tested
> anyway? or perhaps you were just making a point that even 64
> permutations is not all that large.
One option: Excel, export to CSV, write a single test to load the data
and generate a test for each row. Another option: Fit ColumnFixture --
using Fit doesn't make it an acceptance test.
64 permutations is tedious, but not difficult. We have tools to make
that less tedious.
I think that opening Excel and cranking through the 64 cases in a
truth table would reveal this information within a few minutes, which
explains why, when in doubt, I just start doing that. :) ("Hey, look
at all these duplicate rows…")
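That spreadsheet exercise can also be done mechanically: generate the full truth table and group it. In this sketch (Python; the 6-input decision is made up, with three inputs deliberately irrelevant), the "duplicate rows" show up as groups with a single outcome:

```python
from itertools import product

def decision(a, b, c, d, e, f):
    # Hypothetical 6-input expression; d, e and f turn out to be irrelevant.
    return a and (b or c)

rows = [(combo, decision(*combo)) for combo in product([True, False], repeat=6)]
assert len(rows) == 64

# Group the 64 rows by the first three inputs: if every group has one
# outcome, the last three inputs never change the answer -- those are
# the duplicate rows the spreadsheet would reveal.
groups = {}
for combo, result in rows:
    groups.setdefault(combo[:3], set()).add(result)

assert all(len(outcomes) == 1 for outcomes in groups.values())
assert len(groups) == 8  # the "64-case" problem was an 8-case problem
```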
if your company is even dithering, they don't understand software development! Frankly, unless you're hard up, if they won't, I'd buy my own copy and take it with me to the next job.
> Without something like parameterized truth tables and resharper, you
> are forced to write 64 little test functions, or break out interfaces
> and mocks all the way down and have 20 tests. Either way, easily 20x
> the work.
Sorry, I don't see that. As we said before, the one-assert-per-test is only a good idea to break old habits. You can write parameterised tests in good ol' fashioned code by calling a helper method. The real point, however, is to use the test writing to help you think about the domain and make sure you've covered it properly. It's amazing how much addressing code from its externally visible effects clarifies understanding.
> Still hard to sell to my friends who are used to the "code
> and pray" mentality (which I suppose I am still trying to break myself
> of). In the moment, that line of logic does not seem complex enough
> to feel like you could break. Famous last words I know, but as you
> all well know, this is the kind of thing you are up against when you
> are trying to sell yourself and your team on the program (when it is
> hard enough just to get them to buy you resharper). Even 10 minutes
> feels like way too much work when you want to get on to doing the next
> feature.
I know you're doing your best, but this doesn't make sense to me. Development cost must include all the way to fixing the bugs. A little bug in a condition like this can cost hours and hours to find in a deployed system.
In the meantime, I wouldn't necessarily try to convert the world. One alternative is to adopt the practice yourself until you have some experience and start showing that it's never your code that breaks just before the big demo.
S.
> On 28 Aug 2011, at 16:41, Rick Pingry wrote:
>> As for tools, I have seen the use of resharper, and it is amazing. I
>> am just now making the cross between C++ desktop app & ActiveX control
>> development and C# ASP.NET Web App, so I have not made the purchase
>> yet, but I am sure I will (trying to talk the company into the value
>> of it).
>
> if your company is even dithering, they don't understand software development! Frankly, unless you're hard up, if they won't, I'd buy my own copy and take it with me to the next job.
+1
Working in visual studio without resharper is like trying to cut someone's hair with nail scissors. You owe it to yourself to work with decent tools.
>> Without something like parameterized truth tables and resharper, you
>> are forced to write 64 little test functions, or break out interfaces
>> and mocks all the way down and have 20 tests. Either way, easily 20x
>> the work.
>
> Sorry, I don't see that. As we said before, the one-assert-per-test is only a good idea to break old habits. You can write parameterised tests in good ol' fashioned code by calling a helper method. The real point, however, is to use the test writing to help you think about the domain and make sure you've covered it properly. It's amazing how much addressing code from its externally visible effects clarifies understanding.
>
>> Still hard to sell to my friends who are used to the "code
>> and pray" mentality (which I suppose I am still trying to break myself
>> of). In the moment, that line of logic does not seem complex enough
>> to feel like you could break. Famous last words I know, but as you
>> all well know, this is the kind of thing you are up against when you
>> are trying to sell yourself and your team on the program (when it is
>> hard enough just to get them to buy you resharper). Even 10 minutes
>> feels like way too much work when you want to get on to doing the next
>> feature.
>
> I know you're doing your best, but this doesn't make sense to me. Development cost must include all the way to fixing the bugs. A little bug in a condition like this can cost hours and hours to find in a deployed system.
>
> In the meantime, I wouldn't necessarily try to convert the world. One alternative is to adopt the practice yourself until you have some experience and start showing that it's never your code that breaks just before the big demo.
And practice the practice. Work on an open-source project, or just build something for fun in your spare time. That way when you do it at work you'll be good at it, and you won't feel guilty like you're learning a new skill on the job. Though as jbrains can tell you, you shouldn't feel at all bad about learning on the job.
cheers,
Matt
--
Freelance programmer & coach
Author, http://pragprog.com/book/hwcuc/the-cucumber-book (with Aslak Hellesøy)
Founder, http://relishapp.com
+44(0)7974430184 | http://twitter.com/mattwynne
> In the meantime, I wouldn't necessarily try to convert the world. One alternative is to adopt the practice yourself until you have some experience and start showing that it's never your code that breaks just before the big demo.
>
--
Maybe she awoke to see the roommate's boyfriend swinging from the
chandelier wearing a boar's head.
Something which you, I, and everyone else would call "Tuesday", of course.
S.
Would anyone be able to provide a (small) concrete example of this in action?
Cheers
Seb
--
ACCU - Professionalism in Programming - http://accu.org
http://www.claysnow.co.uk
http://twitter.com/#!/sebrose
http://uk.linkedin.com/in/sebrose
"They're automated assertion; you know, like design by contract. You
*have* read Meyer haven't you? You haven't? Eiffel? Really? Where did
you get your degree?!"
That should shut them up.
--
Abraços,
Josué
http://twitter.com/josuesantos
Erlis Vidal wrote:
> my biggest issue is the misunderstanding around TDD, nowadays I use the
> terminology "verification code" more than "unit tests", I was told once
> that I'm forbidden to write any tests, that testing is the job of the
> tester, and that the test code was twice as long in LoC as the code in production
> (the manager was actually reading my tests wow). Just remembering days
> like that, makes me really sad.
In case it ever happens to you again:
http://www.infoq.com/news/2011/03/Ensuring-Product-Quality-Google
I commented there. I'll copy the comment here.
Extract-and-override is almost always a step towards introducing a new
Collaborator. See "Replace Inheritance with Delegation" in Fowler's
_Refactoring_. Introducing the Collaborator inverts the dependency,
increasing context independence by pushing dependency on the runtime
environment up a level of the call stack. That all sounds good to me.
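A before/after sketch of that refactoring (Python; Report and the clock are invented stand-ins, not taken from the gist under discussion):

```python
import datetime

# Before: extract-and-override. The test subclasses to replace the
# environment-dependent step.
class ReportBefore:
    def header(self):
        return f"Report generated {self.now()}"
    def now(self):  # the extracted method, overridden in tests
        return datetime.datetime.utcnow().isoformat()

class FixedClockReport(ReportBefore):
    def now(self):
        return "2011-09-08T20:34:00"

# After: Replace Inheritance with Delegation. The step becomes a
# collaborator, and the dependency on the runtime environment moves
# up a level of the call stack, to whoever constructs the Report.
class Report:
    def __init__(self, clock):
        self.clock = clock
    def header(self):
        return f"Report generated {self.clock()}"

assert FixedClockReport().header() == "Report generated 2011-09-08T20:34:00"
assert Report(lambda: "2011-09-08T20:34:00").header() == \
    "Report generated 2011-09-08T20:34:00"
```

Production code passes a real clock; tests pass a fixed one, and no test subclass is needed.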
S.
On 8 Sep 2011, at 20:34, J. B. Rainsberger wrote:
> On Wed, Sep 7, 2011 at 18:54, philip schwarz
> <philip.joh...@googlemail.com> wrote:
>> Hi Steve,
>>
>> you wrote: "I prefer not to extract and override (although many people
>> recommend it)"
>>
>> A few days ago someone on twitter said: "TDD fans: is designing code
>> to simply facilitate easy testing a bad thing? See https://gist.github.com/1186095
>> for example of what I'm struggling with."
>
> I commented there. I'll copy the comment here.
>
> Extract-and-override is almost always a step towards introducing a new
> Collaborator. See "Replace Inheritance with Delegation" in Fowler's
> _Refactoring_. Introducing the Collaborator inverts the dependency,
> increasing context independence by pushing dependency on the runtime
> environment up a level of the call stack. That all sounds good to me.
Steve Freeman
I find with this I end up with systems composed of very small but focused objects that can be easily swapped and reused.
S.
er, yes.
> 2 questions...
>
> 1. Do you move the tests from the old test to the new class?
if it's a supporting class, it's tested through the outer. If it grows big enough to be promoted, then write some detail tests to go with it and leave a few coherence tests to make sure everything fits together.
> 2. How does having a zillion one-method classes make the code any
> better? Sounds to me like it would make it harder to maintain, harder
> to read. Can't responsibilities be broken up too much? Is there not
> a level when a class is too small to be considered a class?
probably, but I see more of the opposite problem. Complex code that could be clarified by pulling out a concept. A classic example is building up and manipulating a collection of objects. In Java, this usually results in an explosion of angle brackets, whereas a better solution is to create a specialised collection object and move some of the control there.
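A sketch of that specialised-collection idea (Python rather than Java; OrderBook and its methods are invented names):

```python
# Instead of passing a Map<String, List<Integer>> around (the
# "explosion of angle brackets" in Java), wrap the collection in a
# named object and move some of the control logic into it.
class OrderBook:
    def __init__(self):
        self._orders_by_customer = {}

    def add(self, customer, amount):
        self._orders_by_customer.setdefault(customer, []).append(amount)

    def total_for(self, customer):
        return sum(self._orders_by_customer.get(customer, []))

book = OrderBook()
book.add("alice", 10)
book.add("alice", 5)
assert book.total_for("alice") == 15
assert book.total_for("bob") == 0
```

Callers now speak the domain ("total for alice") instead of manipulating the raw generic collection.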
S.
You may need to do this by moving some of the fields and passing one or two as parameters. This may lead to observations of data clumps. Create new classes and see if methods should move again.
You often see a lot of behaviour around one field in a class. That definitely needs to be recognised with its own class.
Where no fields are used by a method, use your judgement about whether to extract it. Err on the side of lots of small classes for a while.
Regards,
Lance
2. How does having a zillion one-method classes make the code any
better? Sounds to me like it would make it harder to maintain, harder
to read. Can't responsibilities be broken up too much? Is there not
a level when a class is too small to be considered a class?
Given that anonymous functions (aka closures) are merely a convenient syntax for creating an object with one method, I guess my code does have many more one-method objects than any other kind. But I don't usually create a named class to define them. I'll introduce a named class to represent a concept that has a bunch of polymorphic operations.
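The equivalence is easy to show (Python; names invented):

```python
# A closure is a one-method object in disguise:
def make_adder(n):
    return lambda x: x + n      # anonymous "one-method object"

class Adder:
    """The named-class equivalent, worth writing once the concept
    grows more operations."""
    def __init__(self, n):
        self.n = n
    def __call__(self, x):
        return x + self.n

assert make_adder(3)(4) == 7
assert Adder(3)(4) == 7
```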
In this case, this is "assembly" code which we tend not to unit test, but rely on integration or even acceptance testing. What we want to know is that this part of the system hangs together, rather than that there's an object that responds in a particular way to an event.
S.
I think there's some relevant material in the "teasing apart Main" chapter.
> Could you define some of the rules for when code is considered
> "assembly" code? From the second sentence, I am getting the
> impression that there are classes that are "wiring" whereas other
> classes might be more about logic. Should we be breaking classes up
> that way? Hmm, my gut tells me something is wrong with that, but I
> cannot put my finger on it.
a lot of the code we build has most of the behaviour implemented by a combination of reasonably small objects with collaborators and immutable value objects -- "classic" OO. Then we usually have a layer or so on top of that which tries to be a declarative description of what the overall behaviour will be. jMock is an example of this. The advantage is that we end up with a system that is made up of multiple independent components which we can reconfigure to support new features.
What troubles you about this approach? How do you think it might fail?
I'm working with web technologies and there is a big argument about the
jQuery interface.
Most people find jQuery so great because of its outer API, and refuse
to learn about other frameworks that provide a more
modular and separated design and imho are much better structured than
jQuery. And when I tell them that you can always create
your own layer that mimics the jQuery API, they never seem to hear it.
I think maybe this is more about a psychological barrier than actual
technical arguments against such modular approaches.
presumably top-down could also be done in a TDD fashion since the top
is the DSL that is basically the acceptance tests for the underlying
low-level implementation layer, and one can keep refactoring and
evolving the levels?
I tend not to unit-test an outer assembly layer because what I really want to know is if everything works together, not just what's plugged into what. If I'm going outside-in, this tends to happen naturally.
Not sure if I'm answering you, I kinda lost the thread.
S.
Steve Freeman
Winner of the Agile Alliance Gordon Pask award 2006
Book: http://www.growing-object-oriented-software.com
+44 (0) 797 179 4105
M3P Limited. http://www.higherorderlogic.com
Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
Company registered in England & Wales. Number 03689627
Personally I don't think this is correct. TDD is (or at least was)
about designing and exercising the interface of the class. BDD extends
this to say we are specifying the behaviour of the class. If we do a
good job of this we will have
1. A class that does what it says (what it says is its public interface)
2. An executable specification that describes/documents what the class
says it does
3. An executable specification that when run and green proves that the
class does what it says
In a way none of this really has anything to do with how the class
collaborates. But classes written in this way are much easier to
collaborate with. So an emergent benefit of doing good TDD is the
creation of better collaborators.
"Designing INTERFACES at each level that show how classes collaborate"
is something else I think, and implies thinking about more than one
class at a time. I would see this as more of a refactoring activity,
but some might want to 'architect' it. It isn't the point or a primary
focus of TDD as I understand it.
All best
Andrew
--
------------------------
Andrew Premdas
blog.andrew.premdas.org
tdd is whatever you want it to be, probably :-)
i think there are plenty of people out there who see OO as being more
about the mu, the communications, the collaborations, than about
one-interface-at-a-time. presumably they could use tdd in their
approach.
sincerely.
Well, physically a class has one interface, in that it has a number of
methods that can be called and that you can easily write unit tests against.
The design part of TDD early on was focused on making this better by
using it in the creation of unit tests before implementing it. So the
process is about first calling the code in a test - not first calling
the code in some interface design activity that is focused on more
than one class. Now Robert Martin and GOOS could be talking about this,
in which case your implication a) is not correct. You have said the
interface belongs to the user of the class (not users); if there is
only one user to consider, then there is one interface, one role, and
this should be exercised first by the creation of unit tests.
As to part b), a fundamental part of TDD is to design the interface
through the mechanism of writing unit tests. The unit test doesn't
refer to the code that will call the interface, it is the "first" code
to call the interface.
One point I'm trying to highlight is that the core activity of TDD is
writing a unit test that exercises the interface of an object 'before'
writing the code that implements that functionality. This activity is
as much a design activity as a test activity. My final, perhaps more
controversial point, is that when you are doing this you shouldn't be
doing anything else, in particular thinking about layers, other
objects, hierarchies, exposing interfaces etc. etc. save that for
later
All best
Andrew
That makes it meaningless. TDD has a history, and pretty much was a
simple practice of writing unit tests first then writing
implementation code. It would be sad if that initial idea was lost, as
it still has great value.
> i think there are plenty of people out there who see OO as being more
> about the mu, the communications, the collaborations, than about
> one-interface-at-a-time. presumably they could use tdd in their
> approach.
>
Absolutely, but TDD is not OO. It's a practice that is useful in
"Growing" objects, but you could grow functions with it also.
Personally I think when you are doing TDD you should do TDD and not do
anything else. Having written your unit test, and perhaps created a
first simple implementation then you can switch context to doing
something else. Just to clarify this, I would see this point of focus
being a few minutes. If you want to spend time doing other OO things
before or after this then that's fine by me.
Allowing TDD to become some sort of global term that can mean anything
devalues it. Perhaps keeping a very simple definition of TDD and
seeing it as a tool/technique/practice that is part of a development
workflow makes it more useful.
All best
Andrew
> sincerely.
Matteo was talking about interfaces in the Java sense (also called
protocols in other languages).
You're talking here about a class interface in the C++-like sense, that is,
the list of public methods and fields, correct?
If not, why do you say "physically a class has one interface"?
> The design part of TDD early on was focused on making this better by
> using it in the creation of unit tests before implementing it. So the
> process is about first calling the code in a test - not first calling
> the code in some interface design activity that is focused on more
> than one class. Now Robert Martin and GOOS could be talking about this
> in which case your implication a) is not correct. You have said the
> interface belongs to the user of the class - (not users) if there is
> only one user to consider then there is one interface, one role and
> this should be exercised first by the creation of unit tests.
you completely lost me here. can you make your point with some code example?
>
> As to part b) a fundamental part of TDD is design the interface
> through the mechanism of writing unit tests. The unit test doesn't
> refer to the code that will call the interface, it is the "first" code
> to call the interface.
actually, if we're talking about interfaces in the Java sense, you start
exactly by creating the interface while testing some other class
which needs the interface to work.
I'm a bit confused so let's do an example:
RiskCalculator -> your interface with method calculateRisk()
CustomerRiskCalculator -> your concrete class which accepts a customer
in the constructor.
Ideally, which come first in your opinion?
And when do you create RiskCalculator the first time?
> One point I'm trying to highlight is that the core activity of TDD is
> writing a unit test that exercises the interface of an object 'before'
> writing the code that implements that functionality. This activity is
> as much a design activity as a test activity.
ok but how do you write code to "exercise" the functionality?
you can create the RiskCalculator interface for the first time when you're
writing some class (in TDD) which needs it, or you can start from
scratch with the test TestRiskCalculator
> My final, perhaps more
> controversial point, is that when you are doing this you shouldn't be
> doing anything else, in particular thinking about layers, other
> objects, hierarchies, exposing interfaces etc. etc. save that for
> later
later when? once you've finalized the interface?
the whole point I think is to let your design emerge by strictly
testing the behaviour and the expected state changes of your objects
using TDD.
Uberto
Was he? That wasn't clear to me. Nor was it particularly clear that
the references to GOOS and Robert Martin were about interfaces in the
Java sense.
> You're talking here about a class interface in the C++-like sense, that is,
> the list of public methods and fields, correct?
> If not, why do you say "physically a class has one interface"?
>
Yes, that's what I'm talking about.
>> The design part of TDD early on was focused on making this better by
>> using it in the creation of unit tests before implementing it. So the
>> process is about first calling the code in a test - not first calling
>> the code in some interface design activity that is focused on more
>> than one class. Now Robert Martin and GOOS could be talking about this
>> in which case your implication a) is not correct. You have said the
>> interface belongs to the user of the class - (not users) if there is
>> only one user to consider then there is one interface, one role and
>> this should be exercised first by the creation of unit tests.
>
Does this make sense now that you know I am not talking about Java
interfaces, and am assuming that GOOS and Robert Martin aren't either?
> you completely lost me here. can you make your point with some code example?
>
>
>>
>> As to part b) a fundamental part of TDD is design the interface
>> through the mechanism of writing unit tests. The unit test doesn't
>> refer to the code that will call the interface, it is the "first" code
>> to call the interface.
>
> actually if we're talking about interfaces in Java meaning, you start
> exactly with creating the interface during testing on some other class
> which need the interface to work.
>
Well, I wasn't, hence the confusion. There was TDD before interfaces in
Java. I see interfaces in Java as a bit of a 'kludge' to allow a form
of multiple inheritance. They are more of a technical implementation
detail than a fundamental OO construct. The idea of programming to an
interface, not an implementation, and Java interfaces themselves often
get confused. The first is a general OO idiom; the second is a
Java-specific implementation detail. I considered it more likely that
we were talking about the first.
> I'm bit confused so let's do an example:
>
> RiskCalculator -> your interface with method calculateRisk()
> CustomerRiskCalculator -> your concrete class which accept a customer
> in the constructor.
>
> Ideally, which come first in your opinion?
>
> And when you create RiskCalculator the first time?
>
>
Incidentally I now program in Ruby, so that changes my perspective and
my Java is very rusty. I remember that in Java it was difficult to
test interfaces and I used to write interfaces first which I think is
a mistake. I would suggest you initially make a unit test for a
RiskCalculator concrete class and then implement the calculateRisk()
method. I'll do my code examples in RSpec
describe RiskCalculator do
  describe "#calculateRisk" do
    it "should ..." do
      RiskCalculator.new.calculateRisk.should wibble
    end
  end
end
Already this exposes flaws with the example. Why does a RiskCalculator
have a calculateRisk method - what else is it going to calculate? Why
should I put up with this repetitive interface! What about
RiskCalculator.calculate or Risk.new.calculate? How do I put the risk
into the calculator etc.
All of the above is the design part of TDD, and if you miss it out
then you'll end up with poor designs like
RiskCalculator.calculateRisk().
However, assuming the example is lovely: once you have done a unit test
and a concrete implementation that passes, then you could stop doing
TDD and do another activity that involves pulling out an interface if
that's appropriate. This could be pertinent refactoring, or it could be
pointless twiddling about with unnecessary interfaces; it all depends
on context and timing. However, it's not TDD (under the definition I'm
promoting).
>> One point I'm trying to highlight is that the core activity of TDD is
>> writing a unit test that exercises the interface of an object 'before'
>> writing the code that implements that functionality. This activity is
>> as much a design activity as a test activity.
>
> ok but how do you write code to "exercise" the functionality?
>
By not thinking about java 'interfaces' when doing TDD. Exercise a
concrete class first.
> you can create the RiskCalculator interface for the first time when you're
> writing some class (in TDD) which needs it, or you can start from
> scratch with the test TestRiskCalculator
>
>> My final, perhaps more
>> controversial point, is that when you are doing this you shouldn't be
>> doing anything else, in particular thinking about layers, other
>> objects, hierarchies, exposing interfaces etc. etc. save that for
>> later
>
> later when? once you finalized the interface?
No, after you have written at least one unit test and one concrete
implementation that passes.
> the whole point I think is to let your design emerge by strictly
> testing the behaviour and the expected state changes of your objects
> using TDD.
>
Yes but java interfaces are not objects. So they have no place in my
narrowly defined TDD. Pausing TDD and doing a refactor to extract an
interface is perfectly valid. Writing an interface first isn't!
Writing interfaces before you have tested concrete implementations is
contrary to a TDD workflow.
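To illustrate the workflow being argued for here, a small Java sketch (mine, not code from this thread): drive out a concrete class with a unit test first, then extract the interface later as a refactoring step. The Customer class, the exposure field and the 1000.0 scaling rule are all hypothetical stand-ins.

```java
// Hypothetical value object the calculator works on.
class Customer {
    private final double exposure;
    Customer(double exposure) { this.exposure = exposure; }
    double exposure() { return exposure; }
}

// Step 2 (later, as a refactoring): the role extracted *after* the
// concrete class had a passing test.
interface RiskCalculator {
    double calculateRisk();
}

// Step 1: the concrete class, written test-first. The
// "implements RiskCalculator" clause only appeared in step 2.
class CustomerRiskCalculator implements RiskCalculator {
    private final Customer customer;
    CustomerRiskCalculator(Customer customer) { this.customer = customer; }
    public double calculateRisk() {
        // toy rule: risk grows with exposure, capped at 1.0
        return Math.min(customer.exposure() / 1000.0, 1.0);
    }
}
```

The point of the ordering is that the method signature gets exercised by a real test before anyone commits to an interface.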
>
> Uberto
> Pausing TDD and doing a refactor to extract an
> interface is perfectly valid.
I come down on the side that has refactoring. I see it this way:
test-first programming: tests first, then code
test-driven development: test-first programming + refactoring + choose
tests based on current situation
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work :: http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me at http://nta.gs/jbrains
for whatever it is worth, have you seen DCI a la Coplien and Trygve?
Yes, and I'm still struggling to see quite what point they're making.
Looks like this is the nub of the disagreement. It's true that interfaces can be implicit or explicit, but the interface construct is one of the few things that Java really got right (that and GC). Of course they're not full-on protocols, but they give you a way of talking about them.
> Yes but java interfaces are not objects. So they have no place in my
> narrowly defined TDD. Pausing TDD and doing a refactor to extract an
> interface is perfectly valid. Writing an interface first isn't!
> Writing interfaces before you have tested concrete implementations is
> contrary to a TDD workflow.
Interestingly, Kent Beck likes classes too.
Personally, I couldn't disagree more, and I don't see where we can take the discussion from here.
S.
Would you clarify whether you've read GOOS? The purpose of this list is discussion around the book and its ideas. I have no problem with disagreement as long as I feel confident that we have enough common understanding to know what we're disagreeing about.
my take on it is that they are trying to get people to "put down the
polymorphism and walk away" a little bit. i think they feel there are
different ideas of what OO means, just like there are for TDD or any
term on earth, and that the class-based inheritance-using style is not
so good. then they go into their own particular favourite approach to
dialing in the architecture and end up with their DCI stuff. i was
just pointing it out as possible food for thought for Matteo; pointing
out yet another set of people who agree that interfaces and roles are
context sensitive.
sincerely.
Quite the opposite I think. Most statically typed OO languages
conflate the two concerns of inheritance and subtyping into a single
mechanism. That's a kludge. Interfaces decouple the two concerns.
Even in dynamically typed languages you have the *design* concept of a
role that an object plays, comprising a subset of the messages it can
respond to, and those roles are unrelated to where the class sits in
the inheritance hierarchy. There's no explicit representation of
roles in the language. Smalltalk organises the methods of a class
into protocols, but those protocols are just categories in the
browser, not in the type system. Interfaces make those roles explicit
in statically typed languages.
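A small Java sketch of that point (my illustration; the names are hypothetical): one class plays two unrelated roles, each made explicit as an interface, and neither role says anything about where the class sits in an inheritance hierarchy.

```java
// Two roles, unrelated to each other and to any superclass.
interface PriceListener {                  // role: reacts to price updates
    void priceChanged(int newPrice);
}

interface Auditable {                      // role: reports state for auditing
    String auditRecord();
}

// One object can play both roles; each caller sees only the role it needs.
class SniperModel implements PriceListener, Auditable {
    private int lastPrice;

    public void priceChanged(int newPrice) { lastPrice = newPrice; }
    public String auditRecord() { return "last price: " + lastPrice; }
}
```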
--Nat
I had a very interesting discussion with Rickard Öberg (of qi4j fame) about DCI.
Now the idea, from what he explained to me, is kind of appealing;
Usecase Driven Design, I would call it. :)
You define a Context object for every UC (or group of similar UCs).
At the start of the Context's life (constructor or a bind method) you inject
into it the Data (Value Objects in GOOS).
The Context will wrap them in their Roles for the UC, and the business
logic is contained fully in those Roles (mostly) and in Context
methods (one method per UC outcome).
For example (in pseudocode) for a move balance UC:
public class MoveBalance implements Context {
    private Giver giver;
    private Receiver receiver;

    public MoveBalance(Customer giver, Customer receiver) {
        this.giver = giver as Giver;          // role binding (pseudocode)
        this.receiver = receiver as Receiver;
    }

    public void moveAmount(MoneyAmount amount) {
        giver.decreaseBalance(amount + commissionFee);
        receiver.increaseBalance(amount);
    }

    // etc.
}
---
The main benefit is that as long as you move your Context around, you
have all the data and the functions (aka Roles) needed to finish the UC and
nothing more. Moreover, the Context offers you a nice way to control the
flow of the application (aka Interactions).
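In plain Java, one way to express that role binding (this is my sketch, not qi4j's mechanism; all names are made up) is to have the Context wrap the data objects in role objects at construction time:

```java
// Data: a dumb value holder with no use-case logic of its own.
class Customer {
    private long balance;
    Customer(long balance) { this.balance = balance; }
    long balance() { return balance; }
    void setBalance(long b) { balance = b; }
}

// Roles: the behaviour a Customer takes on within this one use case.
class Giver {
    private final Customer self;
    Giver(Customer self) { this.self = self; }
    void decreaseBalance(long amount) { self.setBalance(self.balance() - amount); }
}

class Receiver {
    private final Customer self;
    Receiver(Customer self) { this.self = self; }
    void increaseBalance(long amount) { self.setBalance(self.balance() + amount); }
}

// Context: binds Data to Roles and runs the use case.
class MoveBalance {
    private static final long COMMISSION_FEE = 2;  // arbitrary example value
    private final Giver giver;
    private final Receiver receiver;

    MoveBalance(Customer giver, Customer receiver) {
        this.giver = new Giver(giver);
        this.receiver = new Receiver(receiver);
    }

    void moveAmount(long amount) {
        giver.decreaseBalance(amount + COMMISSION_FEE);
        receiver.increaseBalance(amount);
    }
}
```

Note that wrapping loses object identity, which DCI purists care about; in dynamic languages the role methods are usually mixed into the object itself.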
For sure, qi4j's REST architecture is quite a tour de force.
I see two problems here: a technical one, about the weird syntax needed in
static languages to express this paradigm.
But the main one is the idea that you need some "magical" framework
to make an application.
Anyway qi4j is very interesting and I think I will steal an idea or two. :)
Uberto
- No, I haven't read GOOS
- It's sort of on my reading list, but not that high up due to its Java-ness
- I programmed in Java for a long time, but now I program in Ruby
- I wasn't disagreeing with anything in GOOS; read the OP
- I read the GOOS mailing list because it has some very interesting
posts on OO. If not reading the book means I'm not welcome, please let
me know
My points were about TDD and in reply to what I perceived as a
misunderstanding about TDD. I hoped to gain insight and maybe add a
little too - there is no flaming going on here.
All best
Andrew
>
> S.
>
>
> Steve Freeman
>
> Winner of the Agile Alliance Gordon Pask award 2006
> Book: http://www.growing-object-oriented-software.com
>
> +44 797 179 4105
> Twitter: @sf105
> Higher Order Logic Limited
> Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
> Company registered in England & Wales. Number 7522677
>
>
>
>
--
------------------------
Andrew Premdas
blog.andrew.premdas.org
Well, maybe you misunderstood me - I wasn't quite precise enough; I
should have said "Writing an interface first isn't TDD". However, if
you think that writing interfaces first is TDD, then yes, we have a
semantic disagreement about what TDD is. But hey, it's only semantics,
and I did state I was using a very narrow definition of the term.
All best
Andrew
------------------------
Andrew Premdas
blog.andrew.premdas.org
> Even in dynamically typed languages you have the *design* concept of a
> role that an object plays, comprising subset of the messages it can
> respond to, and those roles are unrelated to where the class sits in
> the inheritance hierarchy. There's no explicit representation of
> roles in the language. Smalltalk organises the methods of a class
> into protocols, but those protocols are just categories in the
> browser, not in the type system. Interfaces make those roles explicit
> in statically typed languages.
>
So you would agree then that interfaces are
1. Specific to a small number of OO languages
2. Particular to static OO languages
3. A technical implementation of a more general OO concept of role/protocol
Because I used "kludge" you probably think I'm slagging off interfaces -
my fault for a bad choice of words. What I'm saying is that the
concept of role is more fundamental than the implementation. I think
also that you can apply the concept of role when doing TDD in Java before
you specifically write an interface, and that this is beneficial when
doing TDD as you can exercise the test earlier.
all best
Andrew
> --Nat
>
> --
> http://www.natpryce.com
Java is our implementation language; we'd be interested in hearing how other languages affect its ideas.
> - I programmed in Java for a long time but now I program in Ruby
> - I wasn't disagreeing with anything in GOOS, read the OP
I think you are...
> - I read the GOOS mailing list because it has some very interesting
> posts on OO. If not reading the book means I'm not welcome please let
> me know
Glad you find the list useful.
Of course you're welcome. That said, we could move on to the next level in the discussion faster if you were familiar with our material.
> My points were about TDD and in reply to what I perceived as a
> misunderstanding about TDD. I hoped to gain insight and maybe add a
> little too - there is no flaming going on here.
understood. Likewise.
S.
yes, we have a disagreement, which is why I suggested reading the book before going on with this discussion, so at least you know where we stand.
S.
You unit test a class and it has collaborators. These are described as interfaces and mocked/stubbed in the test, because you are interested in the behavior of the class under test. Eventually these interfaces will be implemented by classes, and some integration code will bind the collaborators to some of these implementations, but nothing says the implementations and interfaces will be one-to-one. In the end you may realize a class satisfies many related interfaces, all described as collaborators when you were testing other classes.
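To make that concrete, here is a minimal sketch (hypothetical names, with a hand-rolled stub rather than a mocking library): the class under test talks to its collaborator through an interface, and the test substitutes a stub, so no concrete implementation of the role needs to exist yet.

```java
// The collaborator's role, described as an interface.
interface RiskPolicy {
    boolean isHighRisk(String customerId);
}

// The class under test depends only on the role, never on a concrete class.
class LoanApprover {
    private final RiskPolicy policy;
    LoanApprover(RiskPolicy policy) { this.policy = policy; }
    boolean approve(String customerId) { return !policy.isHighRisk(customerId); }
}

// A hand-rolled stub standing in for whatever will implement the role later.
class StubRiskPolicy implements RiskPolicy {
    private final boolean answer;
    StubRiskPolicy(boolean answer) { this.answer = answer; }
    public boolean isHighRisk(String customerId) { return answer; }
}
```

In the test you only need to exercise the true and false answers from the stub; the eventual real implementation gets its own tests.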
> As to part b) a fundamental part of TDD is design the interface
> through the mechanism of writing unit tests. The unit test doesn't
> refer to the code that will call the interface, it is the "first" code
> to call the interface.
>
> One point I'm trying to highlight is that the core activity of TDD is
> writing a unit test that exercises the interface of an object 'before'
> writing the code that implements that functionality. This activity is
> as much a design activity as a test activity. My final, perhaps more
> controversial point, is that when you are doing this you shouldn't be
> doing anything else, in particular thinking about layers, other
> objects, hierarchies, exposing interfaces etc. etc. save that for
> later
>
> All best
>
> Andrew
>
> --
> ------------------------
> Andrew Premdas
> blog.andrew.premdas.org
Best regards,
Daniel
> On 3 Oct 2011, at 18:36, Raoul Duke wrote:
>> On Sun, Oct 2, 2011 at 10:28 PM, Matteo Vaccari <vac...@pobox.com> wrote:
>>> a) a class often implements different interfaces for different roles,
>>
>> for whatever it is worth, have you seen DCI a la Coplien and Trygve?
>
> Yes, and I'm still struggling to see quite what point they're making.
>
> S.
>
> Steve Freeman
I'm glad to hear I'm not the only one who doesn't get DCI. I thought it was me :)
Dion
What you talk about here sounds so complicated! Perhaps we are talking
at different levels of granularity. Personally I never unit test a
class; I work in much smaller chunks, so I will unit test one method
in a class. Once I have a test, and the simplest possible
implementation, then I'll stop doing TDD and refactor for a bit. After
that I'll come back to TDD and work on the method a bit more. The best
illustration of this I've seen is Robert Martin's prime factors kata.
This is definitely worth doing at least 10 times. Another good resource
is the beginning of the RSpec book. The thing that I think happens
a lot in Java is that people spend too much time thinking at too high a
level, and doing too much at once, and that stops you getting the
fundamentals correct. I have nothing against pulling an interface out
of a class when a second implementation comes along, but think most of
the time YAGNI applies and you should save that thinking for later.
I'd be worried by a class satisfying many related interfaces, as that
suggests it doesn't really know what it's doing (SRP?), so maybe some
refactoring and higher-level thought would be appropriate.
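For anyone who hasn't seen it, the prime factors kata typically converges, after many tiny red-green-refactor steps, on something like this (my compact rendering, not Robert Martin's exact final code):

```java
import java.util.ArrayList;
import java.util.List;

class PrimeFactors {
    static List<Integer> of(int n) {
        List<Integer> factors = new ArrayList<>();
        // Try candidates in order; each factor is divided out completely,
        // so only primes ever divide the remaining n.
        for (int candidate = 2; n > 1; candidate++) {
            while (n % candidate == 0) {
                factors.add(candidate);
                n /= candidate;
            }
        }
        return factors;
    }
}
```

The interest of the kata is less the final code than the sequence of tests (1, 2, 3, 4, ...) that drives it out one tiny step at a time.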
Overall I think programming should be a series of simple activities
done with precision and care - with occasional breaks to think a
little higher, review, and sort out messes. That simplicity, which Bob
Martin so beautifully demonstrates in his kata, is something we can
easily lose.
all best
Andrew
Philip,
Thank you for pointing out that post. What that post does for me is
explain why a lot of the stuff I see in this group seems over my head -
too hard and too complicated. But without reading GOOS I can't really
go much further than that. It does strike me, though, that perhaps from
my point of view London School TDD isn't really TDD at all - it's more
in the area of Object-Oriented Design. I must point out that this is
a semantic argument about the use of the TDD moniker, not a judgement
about the value of the London School approach.
I think part of the art of programming is doing the right thing at the
right time and I would argue that there are many times when what you
should be doing is classic TDD focused on a single method and test.
Then there are other times when you should be doing different
activities e.g. refactoring, OOD etc. Realising that these activities
are separate different things is useful as it allows you to think
about less and consequently do each task better. I would hope GOOS has
room for doing this classic TDD as one of the many activities that
make up its flavour.
All best
Andrew
Many of your posts have indicated that you see refactoring as a separate activity from TDD. How can we square this with the fundamental TDD cycle of RED-GREEN-REFACTOR, described by Kent Beck when he invented the term 'TDD'?
It feels that what you are talking about is 'unit test first' rather than TDD.
Regards,
Lance
Philip, I think you've taken my comments a bit out of context. The
kludge comment, which I later apologised for, was about the historical
development of Java and the way the technical implementation of Java
interfaces came about, and how the idea of an object's interface is more
fundamental to OO than this particular implementation. As I previously
stated, I have no problem with using interfaces, but suggest that maybe
it would be better if they came out of a refactoring of a concrete
object developed by TDD. In this way the method signature would have
been used in a concrete test earlier, and the benefits that TDD brings
to this simple but fundamental task of designing a method signature
would not be lost.
Thanks for all the quotes
All best
Andrew
Touché, Lance. First of all, I subscribe very strongly to
RED-GREEN-REFACTOR, and it's at the core of how I program, which is
pretty much following the two red-green circles described in the
beginning of the RSpec book. I believe that there is a clear
separation between the red-green part and the refactoring part.
Refactoring is a fundamentally different activity which requires a
context shift.
My memories of the earlier days of TDD, say 1998-1999, are that it was "write
a test and implement it", and that RED-GREEN-REFACTOR is a later
refinement of this. Martin Fowler's refactoring book came out in 1999
and kickstarted the refactoring movement. Kent Beck's
Extreme Programming (2000) treats refactoring as one of its 12 engineering
practices. I can't remember when I first heard red-green-refactor, but
I'm sure it was several years later. Are you sure Kent Beck invented
the term TDD and described RED-GREEN-REFACTOR at the same time?
A lot of this started off with a comment about what TDD is, and one
poster saying that TDD could be defined as pretty much anything. So is
TDD red/green/refactor, or GOOS-style development, or ...?
Kent Beck came up with Extreme Programming to tie together a series of
engineering practices into a coherent whole. Is XP TDD?
I was suggesting we could think of TDD as what you've called unit-test-first.
In that way the term would get back a precise definition. Then
we could see this as a core part of a programming iteration. One such
iteration would be TDD then refactor (equivalent to red/green/refactor, or
unit test/refactor). The motivation behind this was not to lose the
design benefits gained by using a method signature in an applicable
context (the test) before writing the method itself.
But as you've pointed out most people think of TDD as being a term
describing the whole iteration. This is problematic as there are a
wide variety of iterations and a danger of losing the narrow focus
that is required to write a good method signature.
All best
Andrew