The underlying issue, which I only really understood while writing the book, is that there are different approaches to OO. Ours comes under what Ralph Johnson calls the "mystical" school of OO, which is that it's all about messages. If you don't follow this, then much of what we talk about doesn't work well.
That said, I would also expect the balance of practices to be different in a dynamic language where we can't rely on the language to keep interfaces in sync--although the original ideas came from a(nother) bunch of Smalltalkers.
S.
My approach is, I hope, much more akin to what you'd pick up from reading GOOS. I'd say a significant part of Brian's scepticism about mocks comes from never having worked with anyone who knows how to use them properly. He certainly seemed intrigued, watching my session, by the idea that you didn't need to keep scurrying off down a rabbit warren each time you needed a collaborator class.
If you watch the videos, you'll also see we ended up with quite different designs, even for such a simple problem.
cheers,
Matt
ma...@mattwynne.net
07974 430184
Looking at the presentation, it reminds me of a classic dialectic
technique: make your opponent's ideas look idiotic and then have fun
demolishing them.
It's used by politicians everywhere; I hope this is not the start of a
new trend in tech conferences.
I think it's completely OK to dislike mocks, and it's also perfectly OK
to ignore them, but if you present a session about mocks and you
misrepresent them in this way, either you did it on purpose or you
didn't do any preparation for the session.
That said, I agree that it's sometimes difficult (at least for me) to
choose how much mocking to use and where to cut the layers... but you
can always refactor later.
cheers
Uberto
1. Where exactly are stubs and spies better than mocks, and why?
2. (More important) Everything can be misused, so do you see anything
wrong with the "correct" way to use mocks (let's take GOOS as the
example)?
> style much of the mock advice won't work well. What I find in practice
> is that many teams don't recognise this and use mocks inappropriately.
In my experience, most teams don't use TDD at all. Most of the
misuses of mocks I've seen were in teams who wrote the tests afterwards,
or in teams who were forced to use overcomplex architectures.
That said, taking extreme positions is fine by me, but then what else
could you expect if not extreme reactions? ;)
cheers
Uberto
"Hammers suck. They strip the threads from my screws."
Yes, a lack of skill will lead to brittle code and tests. And there are plenty of broken TDD codebases that don't use mocks. And even more broken codebases that don't use TDD. No technique can survive inadequately trained developers.
I only had the slides to go on, so I don't know if the words altered the position. Personally, I'm very, very tired of "mocks are bad because X", "Yes, that's why we never do X" arguments. There's a lot of misunderstanding in the world, much of it perpetrated by Big Names, which leads to exactly the sort of mess you've been seeing. If the teams that you were working with had been Listening to the Tests, rather than just typing furiously, then they wouldn't have gone down that hole.
It might have been more productive (if less noticeable) to have done a symptoms and cures talk. To me, taking an extreme anti-mocks position is getting a bit old.
S.
personally, I don't mind that much whether interactions are specified before or after. I prefer before because that's what I'm used to and it looks a bit like a poor man's Design by Contract.
What does concern me is that all the interactions should be verified unless I turn them off explicitly. If the behaviour of the object under test changes, then I want my test to have changed too (preferably beforehand). If I need to default or ignore a significant proportion of the interactions then that's a clue that something's too big.
I would guess that there's an interesting project in the dynamic languages to write a tool that reads the expectations and cross-checks the signatures with candidate classes. In Smalltalk these could be defined as protocols. Ruby's less regular, so it would be harder to be consistent.
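As a very rough sketch of what such a tool might do (all names here are hypothetical), the heart of it is just comparing the messages a test sets up on a double against the methods a candidate class actually defines:

# Hypothetical sketch: check that a candidate class defines every
# message a test has stubbed or expected on a double standing in for it.
def check_double_against(klass, stubbed_messages)
  missing = stubbed_messages.reject { |m| klass.method_defined?(m) }
  raise "#{klass} does not define: #{missing.join(', ')}" unless missing.empty?
end

check_double_against(Auction, [:bid, :join])  # fails fast if the interface drifts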
S.
I'd be happy to discuss why I dislike using mocks and prefer to use
stubs and spies in the cases where I want to verify that specific
messages were sent to collaborating objects.
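For concreteness, a minimal sketch of the two styles, in current rspec-mocks syntax (hypothetical Checkout and Mailer; the thread itself predates this API):

# Mock style: the expectation is set before the action, and a strict
# double will also fail on any call that wasn't specified.
mailer = double("mailer")
expect(mailer).to receive(:send_receipt).with(order)
Checkout.new(mailer).complete(order)

# Spy style: act first, then assert only on the messages you care about.
mailer = spy("mailer")
Checkout.new(mailer).complete(order)
expect(mailer).to have_received(:send_receipt).with(order)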
I've been doing TDD for over 10 years and have seen numerous
situations where overusing mocks leads to brittle, hard to read test
cases.
Having moved from Java to Ruby over the last couple of years,
I'm seeing a lot of the problems I saw in Java code bases reappearing
in Ruby code bases. The goal of the presentation was to highlight
these problems, albeit by taking an extreme position.
I don't believe I particularly misrepresented mockist TDD in the
presentation but certainly emphasised the aspects that I dislike.
I completely agree with Steve's comment above that there are different
styles of OO (and TDD) and that if you don't follow the GOOS OO design
style much of the mock advice won't work well. What I find in practice
is that many teams don't recognise this and use mocks inappropriately.
By taking an extreme position in my presentation I hope that people
will discuss both views and as a result refine the advice on when to
use (or not use) mocks.
That's what we set out to do in the book.
When writing the book we were very careful to avoid any absolute
statements and to describe *our experience* of using a variety of TDD
tools (including mocks), and the context in which we found the tools
useful. We deliberately did not write in absolute terms ("mocks are
great" or "mocks suck", for example), because any tool is applicable
in some context. Outside that context any tool becomes less helpful
and, far enough away from the sweet spot, a hindrance. We wanted to
let the readers understand enough about the contexts we have
experienced that they could hopefully map them to their own contexts
and make informed decisions as to which tools are appropriate for
them. Anything else is, I think, a disservice to the audience and just
continues the parlous state of debate and education in the industry, in
which technologies are described almost like fundamentalist religious
beliefs instead of in terms of engineering trade-offs. No technology
is bad, it's all context dependent -- even Spring might be useful
somewhere, maybe*...
--Nat
* only kidding, Spring fans!
That, I think, is the key difference in thinking. You're designing in
terms of inputs and outputs; mock objects (and jMock in particular)
aid designing in terms of state machines that communicate by
message-passing protocols.
an event (incoming message) and causes some observable effects
(outgoing messages). The transition rule -- trigger, effects, new
state -- is a single "chunk" of behaviour.
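A sketch of one such chunk, with a hypothetical Sniper and auction (rspec-mocks here; jMock would express the same rule in an expectations block):

it "bids the price plus the minimum increment when outbid" do
  auction = double("auction")
  sniper  = Sniper.new(auction)

  expect(auction).to receive(:bid).with(110)       # observable effect (outgoing message)
  sniper.price_updated(price: 100, increment: 10)  # trigger (incoming message)
end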
> (although I notice that JMock has added 'allowing' to add more distinction).
JMock has always had that feature, although it was called "stubs" in jMock 1.
--Nat
Sorry to repeat myself again (and again (and again ...))
THERE IS NO SUCH THING AS MOCKIST TDD!
There are different ways of designing how a system is organised into
modules and the interfaces between those modules. In one design, you
might split the system into modules that communicate by leaving data
in shared data structures. In another, you might split the system
into modules that communicate by synchronous message-passing. In
another, modules might communicate by lazily calculated streams, or by
async messaging between concurrent actors, or by content-based pub/sub
events, or...
Depending on how you design inter-module interfaces, you'll need
different tools to unit-test modules (whether TDD or not).
Mock Objects are designed for test-driving code that is modularised
into objects that communicate by "tell, don't ask" style message
passing. In that style, there is little visible state to assert about
because state is an implementation detail -- housekeeping used to
coordinate the message-passing protocols.
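The contrast in two lines, with hypothetical names:

# Ask: pull state out and decide here; there is no outgoing message
# for a test to listen to, only state to query afterwards.
mailer.send_reminder(user) if user.dormant?

# Tell: the object decides for itself; the outgoing message is exactly
# what a mock naturally specifies.
user.remind_if_dormant(mailer)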
In other styles of code, procedural or functional, assertions on
queried state work fine. (In asynchronous code they do not, because
of race conditions.)
Even in a system that uses Mock Objects appropriately in its
unit-tests, a significant part is purely functional in my experience,
and so will not be tested with mock objects.
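For those parts, a plain assertion on the result is all that's needed; a hypothetical example:

# No collaborators, so no doubles of any kind: assert on the value.
it "applies a 10% discount to orders over 100" do
  expect(Pricing.discounted(200)).to eq(180)
end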
Talking about Mockist TDD vs Statist TDD leads people to think in
terms of adopting a "TDD style" and rigidly following a process,
rather than understanding how to select from a wide array of tools
those that are appropriate for the job at hand.
Rant over (this time!).
--Nat
--
http://www.natpryce.com
> I find where the expectations are set affects the readability of the test, and asserting on a spy after the action is more natural and readable.
> In addition, a mock will fail if the implementation invokes a method on the collaborator that isn't specified in the test, whether or not I care about checking it.
> This is an aspect of the use of mocks that I genuinely don't "get": the influence on the design that comes from using mocks.
> What I'd like to find is the qualified advice on when to use mocks, which lies somewhere between always and never.
> I have found that using verifying test doubles (spies and mocks both) as opposed to non-verifying test doubles (stubs) creates positive pressure on your design by highlighting unhealthy dependencies.
Then that might be a symptom of unbalanced relationships between the object and its collaborators, or that you should be using real objects in the test. One clue is if you're mocking out setters and getters, which suggests there isn't really an action in the collaboration.
>> If I need to default or ignore a significant proportion of the interactions then that's a clue that something's too big.
>
> I agree that people don't "listen to the tests" enough. In response to
> problem tests the options to consider also include changing the
> testing style to a more stateful style.
That might be an answer, or perhaps you're looking at the wrong interactions. I often find that tests that involve mocks are actually exercising a small cluster of objects: a small container and some helper and value objects. The focus should be on interactions that trigger changes in the environment.
S.
Steve Freeman
Winner of the Agile Alliance Gordon Pask award 2006
Book: http://www.growing-object-oriented-software.com
+44 (0) 797 179 4105
M3P Limited. http://www.m3p.co.uk
Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
Company registered in England & Wales. Number 03689627
That's a reasonable position. I prefer to think in terms of protocols between objects, so I like to keep the interactions together. Not a show stopper either way.
> (although I notice that JMock has added 'allowing' to add more distinction).
allowing has been there since the beginning. It was called stubbing in jMock 1, but Nat changed the terminology. I'm not sure where he took it from, but I know that it's used in Syntropy in event specifications. We've been banging on about "Stub queries, mock actions" for some years.
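In rspec-mocks terms, "stub queries, mock actions" comes out something like this (hypothetical Till example):

catalog  = double("catalog")
register = double("register")

allow(catalog).to receive(:price_of).with("apple").and_return(10)  # stub the query
expect(register).to receive(:record_sale).with(10)                 # mock the action

Till.new(catalog, register).scan("apple")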
I think this is also why EasyMock has had such a pernicious effect, since it makes that distinction at the end of the line where it's hard to see.
S.
Still not sure what you mean by "disadvantages of mocks".
Also, there are differences between (for example) Mockito and jMock in
practical use.
> I also prefer that spies allow me to put asserts after the invocation
> of the method being tested, I consider it to be much more readable and
> again makes a clearer distinction between input and output.
I need to see the code here to understand what you mean.
Generally speaking, I'm not very concerned by the "distinction between
input and output"; I'm interested in testing the behaviour as a whole.
> I don't see anything wrong with the GOOS 'correct' way of using mocks
> but I think what people generally don't appreciate is the relationship
> between using mocks and the GOOS design style. What I find is that if
> your design style differs, or is constrained by using a framework,
> then mocking can lead to the brittle, unreadable test problems that I
> have often seen.
As Steve put it: "No technique can survive inadequately trained developers."
Anyway, my aim is to improve my team's skills so that we get close to
the GOOS design-style model.
So I want them to learn to use mocks in the correct way, nothing less. ;)
Cheers
Uberto
Technically speaking yes, but given its name and all its method names,
I thought it was included in the "mocks suck".
So in the end, it's all about jMock vs. Mockito? How boring...
>> I need to see the code here to understand what you mean.
>> Generally speaking I'm not very concerned by the "distinction between
>> input and output", I'm interested in testing the behaviour as a whole.
>
> Here is an example:
> https://github.com/orfjackal/tdd-tetris-tutorial/blob/beyond/src/test/java/tetris/RemovingFullRowsTest.java
I don't see the point. Why would using jMock instead of Mockito in that
class result in less brittle tests and a better design?
cheers
Uberto
You didn't. The discussion has been going on for over ten years. You
could have just joined in.
--Nat
I'm really struggling to see how a team that doesn't have the skills to write mocks properly will do better with spies. I can understand that they will get away with inadequate design for longer, but is that really a good thing?
S.
Is it really? I'm pretty sure that 90% of programmers cannot tell the
difference.
I saw your examples. In the first example's first case, I think you're
also testing RoR, which could be a good thing or a bad thing depending
on the context.
In the second example I really don't understand why the second case is
less brittle than the first one.
>
>>
>> So at the end it's all about JMocks vs. Mockito? how boring...
>>
>
> Not at all, it's about the misuse of mocks and making people aware of
> test spies as an alternative which may be more suitable depending on
> your design style.
More suitable => yes. More solid => no.
I can provide you with lots of tests using Mockito in the wrong way,
but what's the point?
cheers
Uberto
This one has a singleton and it's expecting on a query, rather than stubbing, which we don't recommend. Also, I might factor out the book params to show that a value is being passed through.
> Perhaps instead of input and output I should have said setup and
> asserts. So when I read a test I like to be able to clearly see what
> is setup code and what are asserts and I find that spies make that
> clearer. Here is another example that I think demonstrates that.
>
> https://gist.github.com/761110
Here the expectation is buried in the setup; it should come afterwards. One of the advantages of the jMock initialiser hack is that it makes the protocol stand out. And the problem with the spy version is that I haven't made explicit that I'm ignoring other calls, rather than having forgotten about them.
S.
That's because a bunch of well-known names mischaracterised the technique from the very beginning, and we were too busy to write it up properly (and too unimportant to get heard). Dave Astels was the only author who took the trouble to understand. The damage continues to propagate, especially in the .Net world. That's why I'm so tired of these arguments.
> Would you say that using mock objects leads to
> the design style you describe or that you favour that design style and
> use mocks because they enable testing that design style?
There's a postscript in the book, copied on the mockobjects.com website, that describes how we got there. In the spirit of Nat's recent rant, mocks arise naturally from doing responsibility-driven OO. All these mock/not-mock arguments are completely missing the point. If you're not writing that kind of code, people, please don't give me a hard time.
S.
In some frameworks, you can specify never(), which is the same as not specifying but makes the intention clear.
I think it's reasonable for a set of tests to break because an expectation fails, especially if you can have it pop up the debugger at the right time. Controlling for single failures at this granularity seems a bit too precise for me.
Again, what's missing from this is making explicit which unspecified calls can be ignored and which have just been forgotten about.
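In rspec-mocks, for instance, the distinction can be spelled out per call (hypothetical mailer and order):

mailer = double("mailer")
allow(mailer).to receive(:log_delivery)               # explicitly ignored
expect(mailer).to receive(:send_receipt).with(order)  # the interaction under test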
S.
perhaps saying that explicitly at the time would have been more helpful...
Here's to the New Year.
S.
if the unit tests are getting large enough that they have to rely on defaults and implicit ignores, then that seems wrong to me. Not least because I can't see the relationship between all the inputs and outputs. One of the frustrations is that Mockito was written to help with a codebase that was going sour; it has many fine qualities, but it was fixing the wrong problem.
> What I think I'm getting at is that there is a relationship between
> design style and use of mocks that most people aren't aware of. When
> people try to apply mocks to a different type of design they have
> problems. Basically what you and Nat have said earlier about using a
> tool outside of its sweet spot, just that people don't know what the
> sweet spot is.
So, perhaps you could help to spread the word? None of our design practices are new.
S
On Fri, Dec 31, 2010 at 19:39, Brian Swan <bgs...@gmail.com> wrote:
>
> What I think I'm getting at is that there is a relationship between
> design style and use of mocks that most people aren't aware of. When
> people try to apply mocks to a different type of design they have
> problems. Basically what you and Nat have said earlier about using a
> tool outside of its sweet spot, just that people don't know what the
> sweet spot is.
Also, I think there's a relationship between the type of system and
suitable design style -- I think Nat commented at some point that the
GOOS style works well for behavior-heavy reactive systems and not so
well for more data-oriented batch systems. I'd love to hear Steve and
Nat expand more on that, especially if I'm mischaracterizing...
I was really enthusiastic about the book, but I've found it hard to
use some of the ideas. I wonder if that's because I work in a
different family of systems or because I haven't tried hard enough.
Thanks,
- Kim
What got in the way of things working for you?
S.
Steve Freeman
On Sat, Jan 1, 2011 at 18:13, Steve Freeman <st...@m3p.co.uk> wrote:
> I know of at least one batch system it's been used for. And, after all, a batch can be viewed as one really large event :)
>
> What got in the way of things working for you?
Nothing specific. Old habits, I think, and the relative lo-fi-ness of
C++, both with respect to test and mock frameworks and in the
constraints the language puts on design, which make things like value
objects, role interfaces, etc., less idiomatic and more costly.
Thanks,
- Kim
S.
Ugh!
Well, folks. I have to claim full responsibility for this
transgression. I wrote the rspec-rails generator that creates rspec
code examples (ok, some people like to call them tests) that are built
around code generated by Rails. I recently changed the generator so
that instead of this:
mock_book = mock(:book)
Book.should_receive(:find).with("37") { mock_book }
mock_book.should_receive(:update_attributes).with({:these => 'params'})
put :update, :id => "37", :book => {:these => 'params'}
it now does this:
mock_book = mock(:book)
Book.stub(:find).with("37") { mock_book }
mock_book.should_receive(:update_attributes).with({:these => 'params'})
put :update, :id => "37", :book => {:these => 'params'}
The difference is that the query against the singleton becomes a stub,
but there is still a message expectation that the instance receives a
command. I think this is closer to "goosher" (pronounced GO-sher to
rhyme with closer, and means kosher in a GOOS world. With any luck,
this won't stick), but I still have a question:
The stub still constrains the arguments ("37") on the query, so it
will fail if the implementation fails to call Book.find("37"). The
resulting binding between the test and the implementation is no
different, except now a failure will not point the developer to the
right problem. If the implementation uses a different argument, the
message expectation will fail but there won't be a direct indication
as to why. Similarly, if the implementation changes the method it uses
to find the instance, this will fail, but again, the developer won't
know exactly why. Leaving the message expectation on the query makes
it such that either of these changes will result in a test failure
pointing directly to the change that was just made, and that strikes
me as a good thing. Two questions come from that:
1. do you agree that is a good thing?
2. if so, how much weight does it carry considering all the other
context here?
re: the binding to the implementation, if we loosen the stub so that
it doesn't constrain the argument(s):
Book.stub(:find) { mock_book }
then we're in the same boat. A failure will not tell the developer
what changed to cause the failure.
Now a completely different approach would be to not use stubs or
message expectations at all:
book = Book.create!(:title => "Mocks suck!")
put :update, :id => book.id, :book => {:title => "Mocks rock!"}
book.reload.title.should eq("Mocks rock!")
That's all fine and dandy, but presents some interesting problems for
a code generator because it has to know something about the
attributes. It also presents new problems because now it's bound
tightly to the validation rules (and other behavior) of the Book,
whereas using a test double for the book isolates you from all of
that. Now this is all solvable, but it starts to get quite convoluted.
Imagine:
book = Book.create!(valid_attributes)
put :update, :id => book.id, :book => valid_attributes.merge(valid_attributes.first => variant_of(valid_attributes.first))
book.reload.title.should eq(variant_of(valid_attributes.first))
Eeeek, I say! This has the benefit of being very black-boxy re:
implementation, but it also has the drawback of being black-boxy re:
the test itself.
Another common solution is to use any_instance:
book = Book.create!(valid_attributes)
Book.any_instance.expects(:update_attributes).with({'these' => 'params'})
put :update, :id => "37", :book => {'these' => 'params'}
The thing that strikes me as odd about this one is that we create an
instance of book, and then say "any instance" but we _really mean this
instance_. Unfortunately we can't put that expectation on the instance
in the test, because ActiveRecord does not use an identity map, so the
object returned by Book.find("37") is not the same object as the one
in the test.
I could go on and on here, but I'll stop at this point. I welcome any
thoughts, suggestions, criticisms, praise, or money, if that suits
you. My goal here is for the generator to generate code that does a
few different things:
* demonstrates "tests as documentation"
* demonstrates isolation between layers
* demonstrates the difference between stubs and message expectations
* completely covers the code generated by Rails
* is readable
* points developers in the right direction when the wheels fall off
* makes Nat and Steve smile
Not necessarily in that order.
Thanks for listening, and I look forward to ensuing lambasting.
I'm struggling a little with this because I'm not sure what this is testing. It looks like it's exercising persistence, in which case it might be best to write integration tests that persist and retrieve objects for real.
Alternatively, if 'book' is really just a struct with a bit of updating, then perhaps it would be better to just use a real one and check its contents afterwards.
Further, I'm not sure that I understand the point of code-genning unit tests. What are they trying to prove? Wouldn't it be better to test the hell out of some examples (Nat calls these Guinea Pig objects) and then assume that the framework works? Also, there's no opportunity for the tests to influence the design since that's fixed already.
So, what failures are common in this world?
S.
Steve Freeman
Hi Joe,

> I have found that using verifying test doubles (spies and mocks both) as
> opposed to non-verifying test doubles (stubs) creates positive pressure on
> your design by highlighting unhealthy dependencies.

Can you give an example or two?
Yeah, Brian, maybe if you'd called it "Mocks are a scam" it would have been better... :)
How about we all agree to make the title say what we mean?
Hi Lance,

> How about we all agree to make the title say what we mean?

Or append each deliberately provocative title with "ha ha j/k".

Dale
ha ha j/k
> Spies and mocks do the same thing, so I don't understand this statement.
> Spies only differ from mocks in when you set the expectations: spies after
> the action, mocks before.

Spies and mocks do the same thing but in different ways. I find where
the expectations are set affects the readability of the test, and
asserting on a spy after the action is more natural and readable. In
addition, a mock will fail if the implementation invokes a method on
the collaborator that isn't specified in the test, whether or not I
care about checking it. With a spy I can choose what I want to assert
in the test, making the test less brittle in the face of refactoring
the implementation.
> Overusing spies results in the same problems.

I find less so, as a result of the asserts being limited to those
methods on the collaborator that I care about checking, rather than
having to specify (or explicitly ignore) all the methods invoked by
the implementation on the mock.
> I have found that using verifying test doubles (spies and mocks both) as
> opposed to non-verifying test doubles (stubs) creates positive pressure on
> your design by highlighting unhealthy dependencies.
This is an aspect of the use of mocks that I genuinely don't "get":
the influence on the design that comes from using mocks. I've always
had an interest in different techniques for software design and would
like to understand this aspect more. What I'm not sure about is how
much mocks influence the design, versus how much they are simply
useful for testing code written in a certain style.
> As a result, I posit two central inappropriate uses of verifying test
> doubles:
>
> 1. When trying to change code with existing, deep design problems,
> 2. When modeling queries, rather than commands

I like this advice. I remember something from Bertrand Meyer's
Object-Oriented Software Construction, his 'Advice to the advisors',
about giving 'always' or 'never' or qualified 'always' or 'never'
advice. What I'd like to find is the qualified advice on when to use
mocks, which lies somewhere between always and never.
> > By taking an extreme position in my presentation I hope that people
> > will discuss both views and as a result refine the advice on when to
> > use (or not use) mocks.
>
> How clear have you made that intent in your presentation? I haven't read it,
> so don't interpret that as a passive aggressive question, because I
> genuinely don't know.

It's not made clear in the presentation.
I'm sure you can appreciate the value of a provocative title to a
presentation. I don't think there was anything in the presentation
itself that misrepresented mock objects. I was essentially presenting
the poor use of mocks that I've seen many times in practice.
without seeing this in depth, my only observation is that perhaps the generated code should be more extensible so that it doesn't need to be changed. Then the generated code could Just Work, and the devs could put their effort into their extensions.
S.
There are extension libraries that do that, but I'm dealing with the
code generated by the Rails framework itself, and doubt that I'd have
any influence on that code.
>
> I don't really have an answer, but it looks like the tests are revealing tensions in the design :)
Agreed. I'm actually playing with an idea to improve this situation.
If it goes well, I'll post back w/ info.
>
> S.
>
>
> On Tue, Jan 4, 2011 at 7:46 AM, Steve Freeman <st...@m3p.co.uk> wrote:
>> On 2 Jan 2011, at 23:05, David Chelimsky wrote:
>>>> So, what failures are common in this world?
>>>
>>> The main problem is that there is a fair amount of code that gets
>>> generated. Without thorough coverage, refactoring can lead to
>>> functional failures that are not exposed by tests. So I think the
>>> generator has a responsibility to generate tests sufficient to cover
>>> the generated code.
>>>
>>> Beyond that responsibility, we want the generated tests to demonstrate
>>> how to use the tools. And this is where it gets sticky, because it's
>>> very likely that if we were using the tools to write actual code, that
>>> the resulting code would look nothing like the code that is actually
>>> generated :)
>>>
>>> OK - maybe that's enough for the 2nd day of the year.
>>
>> without seeing this in depth, my only observation is that perhaps the generated code should be more extensible so that it doesn't need to be changed. Then the generated code could Just Work, and the devs could put their effort into their extensions.
>
> There are extension libraries that do that, but I'm dealing with the
> code generated by the Rails framework itself, and doubt that I'd have
> any influence on that code.
Steve has hit the nail on the head here, I think. There are at least 5 different ways to ask ActiveRecord to fetch a given row of data from the database, and you can't easily write a stub that could simulate all of them at the same time.
The way I've solved this before was to write an abstraction in the tests, delegating the query stub to a single method. As long as all the controllers use a consistent protocol for fetching a single row from the database, then you have a single place to change the tests if you want to change that protocol. So my code would look something like this instead:
https://gist.github.com/762553
I've also extracted the params which were noise, IMO.
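Without reproducing the gist, the shape is presumably something along these lines (a sketch, not the gist's actual code):

# One test-side helper owns the "fetch a single row" protocol, so all
# the controller specs change in one place if that protocol changes.
def stub_find(model_class, id, record)
  model_class.stub(:find).with(id) { record }
end

stub_find(Book, "37", mock_book)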
>
>>
>> I don't really have an answer, but it looks like the tests are revealing tensions in the design :)
>
> Agreed. I'm actually playing with an idea to improve this situation.
> If it goes well, I'll post back w/ info.
Pray tell?
cheers,
Matt
ma...@mattwynne.net
07974 430184
Not until I have something useful :)
Not sure it was me, just listen to the tests :)
S.
I agree. I am leaning more and more towards using InheritedResources for my controllers in Rails, which means they have almost no bespoke code in them at all, and I can then concentrate on the domain model objects having the logic to process the parameters they might receive from web requests. The problem then is that you end up with Obese Models which ignore the SRP. Next step is to factor out those responsibilities from the main model objects themselves and leave them as facades that delegate work to other classes.
I haven't had a chance to try them in earnest, but the newer ORMs like MongoMapper do use composition instead of inheritance, and I'd imagine they help enormously with building a proper domain model in app/models, rather than just a database persistence layer.
> I'm sure you all know Jamis Buck's Fat Model, Skinny Controller blog
> post. I think you have to do this in rails, and forget about unit
> testing controller actions, and exercise the controllers through
> features or maybe integration tests. If you try and test controllers
> by mocking find you will inevitably end up with brittle code that is
> intruding into areas that are none of its business. But this is not
> the fault of mocking it is a fault of Rails.
I still find some value in simple controller unit tests to test, for example, what to do when there's an XHR request vs what to do when there's an HTML request, or testing a flash message. But I think you're right about the overall root cause.
>
> All best
>
> Andrew
Listen to the tests! (again) :)
S.