[rspec-users] respond_to? check in rspec-mocks

Myron Marston

Aug 27, 2010, 8:18:08 PM
to rspec...@rubyforge.org
One of the primary dangers of using mocks is that your unit tests may
be testing against an interface that is different from that of your
production objects. You may simply have misspelled the method (e.g.
object.should_receive(:methd_name) rather than method_name), or you
may have changed the interface of your production object without
updating your tests.
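
For concreteness, here's a minimal sketch of the kind of drift in
question (the Account and Ledger classes are hypothetical, written
with the current rspec-mocks syntax):

class Ledger
  def record(amount)  # renamed at some point from record_deposit
  end
end

class Account
  def initialize(ledger)
    @ledger = ledger
  end

  def deposit(amount)
    @ledger.record_deposit(amount)  # still calls the old name
  end
end

describe Account do
  it "records deposits in the ledger" do
    ledger = Ledger.new
    ledger.should_receive(:record_deposit)  # mocks a method Ledger no longer defines
    Account.new(ledger).deposit(50)         # passes, but raises NoMethodError in production
  end
end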

Obviously, you should have some integration coverage that will catch
these kinds of errors (and I do), but it's nice when they're caught by
your unit tests since they're so much faster than integration tests.
I've been using a pattern to help with this for a while:

it "safely mocks a method" do
object.should respond_to(:foo)
object.should_receive(:foo).and_return(:bar)
object.do_something_that_calls_foo
end

Basically, I add a respond_to? check before mocking or stubbing a
concrete object (obviously, I don't do this for a pure mock object).
If/when I rename the mocked method, I'll get a test failure. I think
it'd be nice to add this to rspec-mocks itself. A few additional
thoughts about this potential feature:

* This would only apply when you're mocking/stubbing concrete objects;
on a pure mock or stub it wouldn't do the check.
* Should this print a warning or raise an error so the test fails?
* Should it be configurable? I could see some people not wanting this
feature, particularly if you're strictly following the outside-in BDD
process where the specs on the outer layers (say, a controller in a
rails app) mock methods that have not yet been defined on the inner
layers (say, a model in a rails app).
* This feature could potentially take things a step further: when you
specify mock arguments using `with`, it could also check the arity of
the method to be sure it accepts that number of arguments (rough
sketch below).
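
For the arity idea in the last bullet, here's a rough sketch of the
check (this is not rspec-mocks code; the helper and the Calculator
class are made up for illustration):

# Method#arity returns the required argument count, or -(required + 1)
# when optional/splat arguments are involved.
def arity_compatible?(object, method_name, args)
  return true unless object.respond_to?(method_name)
  arity = object.method(method_name).arity
  arity >= 0 ? args.size == arity : args.size >= arity.abs - 1
end

class Calculator
  def add(a, b)
    a + b
  end
end

calc = Calculator.new
arity_compatible?(calc, :add, [1, 2])  # => true
arity_compatible?(calc, :add, [1])     # => false, so `with(1)` would be flagged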

What do people think about this idea?

Myron
_______________________________________________
rspec-users mailing list
rspec...@rubyforge.org
http://rubyforge.org/mailman/listinfo/rspec-users

Justin Ko

Aug 27, 2010, 8:24:25 PM
to rspec...@rubyforge.org


What defines a "concrete object"?

Justin Ko

Aug 27, 2010, 8:42:32 PM
to rspec...@rubyforge.org


Anything that is not an RSpec stub object?

Myron Marston

Aug 27, 2010, 9:07:36 PM
to rspec...@rubyforge.org
> What defines a "concrete object"?
> Anything that is not an RSpec stub object?

Yep, that's how I'm using the term. It doesn't make sense to do a
respond_to? check on a pure mock or stub object, since it doesn't
initially have a defined interface (outside of the interface of
Object).

Justin Ko

Aug 27, 2010, 11:52:49 PM
to rspec...@rubyforge.org


I'm confused about why you need to use respond_to. The purpose of a
stub is to change the behavior of an object. Therefore, if the
behavior is *not* being changed, your specs should fail. If the spec
does not fail, then the stub was worthless in the first place.

I suspect that your specs are not doing their job correctly. Could you
provide a real-world example and maybe I can provide some pointers?

David Chelimsky

Aug 28, 2010, 10:36:25 AM
to rspec-users

This idea has come up numerous times on this list over the years, but I have yet to see a suggestion or patch that makes it work for me. It's definitely not something I've ever felt RSpec was missing, probably because I tend to write specs at multiple levels, and I don't recall ever deploying something to production that failed due to an API getting misaligned. I'm not saying it's never come up in the development process, but in my view the restrictions imposed by such a feature would cost me more than the safety net it provides.

My other objection is that we're dealing with a dynamic language here, and there are going to be cases in which methods are defined dynamically. For average users, this is likely not a problem (as long as the check is done at the time the stub is invoked rather than when the stub is defined), but for anyone starting to explore meta-programming this would make things more confusing, IMO.

I've also seen plenty of cases where respond_to fails to handle a case that method_missing handles. In these cases, users would get misleading information back, making things more confusing.

With all this, there is one idea that I've floated that I'd be open to: an audit flag which could be used to generate a report separate from the spec run. Specs would not fail due to any misalignment, but you'd simply get a report saying something like:

###############################
Spec: Account#deposit adds deposited amount to its balance # ./spec/bank/account_spec.rb:37
Stub: ledger.record_deposit # ./lib/bank/account.rb:42
- ledger object did not respond to #record_deposit
- ledger.methods => [ ... list of public instance methods that are not already part of Object ... ]
###############################

This would be disconnected enough from the examples that it would stay out of the way, and it would make misalignments (the common case) easy to see.

Thoughts?

David Chelimsky

Aug 28, 2010, 10:56:27 AM
to rspec-users

Not necessarily. In the case of a stub on a real object, the purpose is to control the environment in which the example runs. Consider a method on an object that returns one value before noon and a different value at noon and after. In an example for another object that depends on the time-dependent object, we might stub that method to respond as though it were before noon in one example, and after noon in another. This means that when you run this spec in the morning, the example that stubs morning-like behaviour is not changing anything or needing to fail. Same for the other example in the afternoon.
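
A minimal sketch of that scenario (the Clock and Greeter classes are made up for illustration):

class Clock
  def before_noon?
    Time.now.hour < 12
  end
end

class Greeter
  def initialize(clock)
    @clock = clock
  end

  def greeting
    @clock.before_noon? ? "Good morning" : "Good afternoon"
  end
end

describe Greeter do
  let(:clock) { Clock.new }

  it "greets with 'Good morning' before noon" do
    clock.stub(:before_noon?).and_return(true)   # pins the environment regardless of the actual time
    Greeter.new(clock).greeting.should == "Good morning"
  end

  it "greets with 'Good afternoon' at noon and after" do
    clock.stub(:before_noon?).and_return(false)
    Greeter.new(clock).greeting.should == "Good afternoon"
  end
end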

Cheers,
David

Myron Marston

Aug 28, 2010, 12:32:59 PM
to rspec...@rubyforge.org
> My other objection is that we're dealing with a dynamic
> language here, and there are going to be cases in which methods
> are defined dynamically. For average users, this is likely not
> a problem (as long as the check is done at the time the stub is
> invoked rather than when the stub is defined)

I was originally thinking the checking would happen when the stub is
defined, but I think you're right about it being better to check when
the stub is invoked.

> I've also seen plenty of cases where respond_to fails to handle
> a case that method_missing handles. In these cases, users would
> get misleading information back, making things more confusing.

That's a valid point, but IMHO an object that overrides method_missing
but not respond_to? is a pretty poorly behaved object. You're
essentially breaking the semantics of how objects are expected to work
in Ruby. I'd personally want an error or warning when I did this, so
that I'm alerted to the problem and can go fix it by properly defining
respond_to?.
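
For example, an object that answers messages via method_missing can
teach respond_to? about them too, so a respond_to?-based check isn't
misled (the DynamicSettings class is made up for illustration):

class DynamicSettings
  def initialize(values)
    @values = values
  end

  def method_missing(name, *args, &block)
    @values.key?(name) ? @values[name] : super
  end

  def respond_to?(name, include_private = false)
    @values.key?(name) || super
  end
end

settings = DynamicSettings.new(:timeout => 30)
settings.timeout                # => 30
settings.respond_to?(:timeout)  # => true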

> With all this, there is one idea that I've floated that I'd be
> open to: an audit flag which could be used to generate a report
> separate from the spec run.

Would this report wind up in a separate file that I'd have to open and
look at separately? That would reduce the usefulness of this a lot, I
think.

Instead, could we make it configurable? That way people can set it up
to fit their development workflow. I've been thinking that this
should be configurable, since people have a variety of development
styles. Here are some thoughts about how that configuration could work
(rough usage sketch after the list):

* In the RSpec.configure block, you set a default. Something like
config.undefined_method_stubs = :error/:warn/nil.
* In a spec, you can disable this checking for any object, using
something like object.allow_undefined_method_stubs!. This would work
well for an object that overrides method_missing, if you really don't
want to also override respond_to?.
* In a before(:each) block, you can modify the configuration for that
example group using something like RSpec::Mocks.undefined_method_stubs
= :error/:warn/nil.
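
To make that concrete, here's roughly what using it might look like
(this API is entirely hypothetical; none of these methods exist in
rspec-mocks, and LegacyWrapper is a made-up class):

RSpec.configure do |config|
  config.undefined_method_stubs = :warn           # project-wide default
end

describe "a legacy wrapper" do
  before(:each) do
    RSpec::Mocks.undefined_method_stubs = :error  # stricter for this group only
  end

  it "can opt a single object out of the check" do
    wrapper = LegacyWrapper.new                   # relies on method_missing, say
    wrapper.allow_undefined_method_stubs!
    wrapper.stub(:dynamic_lookup).and_return(:value)
  end
end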

Note that your separate report idea could easily be accommodated here;
it could be an additional allowed value to undefined_method_stubs
(maybe :separate_report ?).

Myron

Justin Ko

Aug 28, 2010, 12:59:18 PM
to rspec...@rubyforge.org

> Not necessarily. In the case of a stub on a real object, the purpose is to control the environment in which the example runs. Consider a method on an object that returns one value before noon and a different value at noon and after. In an example for another object that depends on the time-dependent object, we might stub that method to respond as though it were before noon in one example, and after noon in another. This means that when you run this spec in the morning, the example that stubs morning-like behaviour is not changing anything or needing to fail. Same for the other example in the afternoon.
>
> Cheers,
> David

This is true, but the "morning" spec would fail in the afternoon
(and/or the afternoon spec would fail in the morning).

If the stub is not being touched, it should fail *at some point in
time*.

If the stub is not being touched, yet your specs are passing, then
technically there is nothing wrong with your app.

David Chelimsky

Aug 28, 2010, 2:01:59 PM
to rspec-users

On Aug 28, 2010, at 11:32 AM, Myron Marston wrote:

>> My other objection is that we're dealing with a dynamic
>> language here, and there are going to be cases in which methods
>> are defined dynamically. For average users, this is likely not
>> a problem (as long as the check is done at the time the stub is
>> invoked rather than when the stub is defined)
>
> I was originally thinking the checking would happen when the stub is
> defined, but I think you're right about it being better to check when
> the stub is invoked.
>
>> I've also seen plenty of cases where respond_to fails to handle
>> a case that method_missing handles. In these cases, users would
>> get misleading information back, making things more confusing.
>
> That's a valid point, but IMHO an object that overrides method_missing
> but not respond_to? is a pretty poorly behaved object. You're
> essentially breaking the semantics of how objects are expected to work
> in Ruby.

Agreed, but in pretty much every case where I've seen this, it's been in a 3rd-party library that I'm not in control of.

> I'd personally want an error or warning when I did this, so
> that I'm alerted to the problem and can go fix it by properly defining
> respond_to?.

Interesting side effect. I can see how this could help nudge a developer in the right direction.

>> With all this, there is one idea that I've floated that I'd be
>> open to: an audit flag which could be used to generate a report
>> separate from the spec run.
>
> Would this report wind up in a separate file that I'd have to open and
> look at separately?

I think "separate from the spec run" mislead you as to my intention here. What I mean is that I don't want this to raise errors, but rather it would be part of the output, just like pending and failed examples.

> That would reduce the usefulness of this a lot, I
> think.
>
> Instead, could we make it configurable? That way people can set it up
> to fit their development workflow. I've been thinking that this
> should be configurable, since people have a variety of development
> styles. Here are some thoughts about how that configuration could work:
>
> * In the RSpec.configure block, you set a default. Something like
> config.undefined_method_stubs = :error/:warn/nil.
> * In a spec, you can disable this checking for any object, using
> something like object.allow_undefined_method_stubs!. This would work
> well for an object that overrides method_missing, if you really don't
> want to also override respond_to?.
> * In a before(:each) block, you can modify the configuration for that
> example group using something like RSpec::Mocks.undefined_method_stubs
> = :error/:warn/nil.

I was resistant to the idea when it was simpler, but this additional complexity makes me even more so :)

Without getting into a debate about its relative merits, here's what I'd really like to see: an API in the rspec-mocks framework that would allow you to extend it to do all this in a separate gem. Then you could build this, release it, refine it, etc.

What do you think would be necessary in rspec-mocks to support that?

> Note that your separate report idea could easily be accommodated here;
> it could be an additional allowed value to undefined_method_stubs
> (maybe :separate_report ?).

No need for a separate report - again - poor choice of words on my part.

Myron Marston

Aug 29, 2010, 2:30:55 AM
to rspec...@rubyforge.org
> I think "separate from the spec run" mislead you as to my intention here.
> What I mean is that I don't want this to raise errors, but rather it
> would be part of the output, just like pending and failed examples.

I'm OK with this idea. I just didn't want to have a separate file to
read :).

> I was resistant to the idea when it was simpler, but this additional
> complexity makes me even moreso :)

Fair enough.

> Without getting into a debate about its relative merits, here's what
> I'd really like to see: an API in the rspec-mocks framework that would
> allow you to extend it to do all this in a separate gem. Then you
> could build this, release it, refine it, etc.
>
> What do you think would be necessary in rspec-mocks to support that?

I appreciate your willingness to make changes to RSpec to support 3rd
party libraries...but I honestly think that the necessary changes to
rspec-mocks for the API, plus the separate gem, would be far more work
than just implementing a simple version of this (as you've suggested)
in rspec-mocks itself. Plus I doubt that a separate gem that did this
one simple thing would get much use by other developers.

Now that I understand that you just meant to have this print out a
report as part of the main spec output, I'm completely satisfied with
your suggestion. I'll start working on something in a branch and I'll
see what I can come up with.

David Chelimsky

Aug 29, 2010, 6:51:19 PM
to rspec-users

Cool. Thanks.

Cheers,
David
