Nice. An argument in favour of checked exceptions :)
> I hope I have described the problem clearly, but to sum up: when mocking an interface we make assumptions about its behavior. When the interface changes, those assumptions are no longer true.
>
> I understand that the UNIT tests for object B theoretically shouldn't fail because even though interface A changes object B still works isolated. However, I also consider the UNIT tests for object B a kind of integration test between object B and the mock. The whole reason the unit tests have any value to us, is that this "integration test" is based on a mock that actually does what the interface says it should. But when we change the interface it doesn't, so the unit test theoretically is completely wrong.
>
> The question is whether we can do something about this. I would like my tests to alert me if something like this happens. Is this not possible? What do you guys do about it? I'm sure it must be a problem for most test practitioners, and especially TDD practitioners.
Yes. This is a possible problem. In practice, however, it just doesn't seem to be a problem. It's not what I find myself being caught by on real systems. First, we will have been writing at least some higher level tests to drive out the unit-level requirements. These will catch gross errors when objects just don't fit together. Second, my style, at least, now tends to more, smaller domain objects so that some changes will get caught by the type system. Third, I find that rigorously TDD'd code, with a strong emphasis on expressiveness and simplicity, is just easier to work with, so I'm more likely to catch such problems.
What do other people find?
S.
Steve Freeman
Winner of the Agile Alliance Gordon Pask award 2006
Book: http://www.growing-object-oriented-software.com
+44 797 179 4105
Twitter: @sf105
Higher Order Logic Limited
Registered office. 2 Church Street, Burnham, Bucks, SL1 7HZ.
Company registered in England & Wales. Number 7522677
What happens if you use checked exceptions, rather than unchecked exceptions? I suspect it would solve the problem (though I'm not in front of my computer at the moment so I can't experiment). If the exceptions are meant to be handled (i.e. are an official part of the protocol) then I'd really recommend making them checked exceptions, and only throwing RuntimeExceptions to report bugs (i.e. incorrect coding). I know that checked exceptions can make code verbose when you're not interested in handling the exceptions, but in my view it's a price worth paying in most situations.
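A minimal sketch of the point, with hypothetical names (AException, AInterface, FailingStub are inventions for illustration): once the exception is checked, it becomes part of the compile-time contract, so changing it breaks every implementer, caller, and hand-written stub at compile time rather than at test time.

```java
// Hypothetical names throughout; a sketch, not a recommended design.
class AException extends Exception {}

interface AInterface {
    // The checked exception is part of the signature: every implementer,
    // caller, and hand-written stub must acknowledge it.
    String func() throws AException;
}

class FailingStub implements AInterface {
    @Override
    public String func() throws AException {
        throw new AException();  // simulate the failure case
    }
}

public class CheckedContractDemo {
    public static void main(String[] args) {
        AInterface a = new FailingStub();
        try {
            System.out.println(a.func());
        } catch (AException e) {
            // If func() later declared a different checked exception,
            // this handler would no longer compile.
            System.out.println("caught AException");
        }
    }
}
```

If func() were changed to throw, say, CException instead, the compiler would flag this catch clause and every other site that handles AException, which is exactly the alerting behaviour the original question asks for.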
Joe Rainsberger talks about using Contract tests here, I think.
The tests define the semantics of the interface, and every implementer
of the interface must pass them. I think he has the implementer's
tests inherit the contract tests, but I can't quickly find a
reference. Joe...?
> At this point in time, even though all tests are green there is a
> substantial flaw in our code; namely that object B doesn't work. The unit
> test for object B is based on the earlier version of interface A that worked
> differently than what it does now. The core of the problem is that the
> assumptions/simulations we made in the mock for interface A at the time we
> wrote the unit test for object B aren't necessarily true anymore. The
> surrounding world has changed.
Yes, so follow this rule:
* A stub in a collaboration test must correspond to an expected result
in a contract test
* An expectation in a collaboration test must correspond to an action
in a contract test
This provides a /systematic/ way to check that B remains in sync with
the implementations of A. In your situation, I do this:
1. Change the implementation A1 of A, noticing a change in the contract of A.
2. For each change in the contract of A:
2.1 If an action in A has changed (parameter changed, method name
changed), then look for expectations of that action in collaboration
tests, and change them to match the new action.
2.2 If a response from A has changed (return type, value, what the
value means), then look for stubs of that action in collaboration
tests, and change them to match the new response.
Done.
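The rule above can be sketched in plain Java. This is a hand-rolled illustration with hypothetical names (AInterface, A1, B, ARuntimeException, BException); real tests would use JUnit and a mocking library such as Mockito rather than explicit AssertionErrors.

```java
// Hypothetical names; a sketch of the stub/expected-result correspondence.
class ARuntimeException extends RuntimeException {}
class BException extends RuntimeException {}

interface AInterface {
    String func();
}

// A1 is a real implementation of A whose func() can fail.
class A1 implements AInterface {
    @Override
    public String func() {
        throw new ARuntimeException();
    }
}

// B collaborates with A and translates A's failure into its own.
class B {
    private final AInterface a;
    B(AInterface a) { this.a = a; }
    String doSomething() {
        try {
            return a.func();
        } catch (ARuntimeException e) {
            throw new BException();
        }
    }
}

public class CorrespondenceDemo {
    // Contract test: func() on a real implementation can throw
    // ARuntimeException -- this is the expected result that justifies...
    static void contractTestFuncThrows(AInterface realA) {
        try {
            realA.func();
            throw new AssertionError("expected ARuntimeException");
        } catch (ARuntimeException expected) { }
    }

    // ...the stub in this collaboration test: when A throws, B throws
    // BException. If A's contract changes, rule 2.2 says to revisit this stub.
    static void collaborationTestBTranslates() {
        AInterface aStub = () -> { throw new ARuntimeException(); };
        B b = new B(aStub);
        try {
            b.doSomething();
            throw new AssertionError("expected BException");
        } catch (BException expected) { }
    }

    public static void main(String[] args) {
        contractTestFuncThrows(new A1());
        collaborationTestBTranslates();
        System.out.println("contract and collaboration tests pass");
    }
}
```

The stub in collaborationTestBTranslates simulates exactly the expected result that contractTestFuncThrows verifies against a real implementation; that pairing is the correspondence the rule demands.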
> For a concrete example take this Java code:
<snip />
When I change func() to return CRuntimeException, I'm changing a
return type. This means that I look for all tests that stub func(),
and I find the BTest.
doSomethingThrowsBExceptionWhenCollaboratorThrowsAException test. I
change the stub to throw CRuntimeException, then decide whether I have
to change the test or fix the implementation of B.
Done.
When people in my training classes tell me that they worry about doing
this correctly, I point out that the rule of correspondence between
stub and expected result or expectation and action tells us exactly
what to look for when we change the contract of any interface. It
takes discipline, but not more discipline than the rest of TDD or
good, modular design takes.
> The question is whether we can do something about this. I would like my tests to
> alert me if something like this happens. Is this not possible? What do you
> guys do about it? I'm sure it must be a problem for most test practitioners,
> and especially TDD practitioners.
Someone's working on this as a PhD dissertation. Follow @t_crayford on
the Twitter.
--
J. B. (Joe) Rainsberger :: http://www.jbrains.ca ::
http://blog.thecodewhisperer.com
Author, JUnit Recipes
Free Your Mind to Do Great Work :: http://www.freeyourmind-dogreatwork.com
Find out what others have to say about me at http://nta.gs/jbrains
> Definitely found this in ruby. It's easy to test-drive against a particular
> role and have implementations get entirely out of sync.
Equivalent statement: It's easy to write the wrong code with sloppy thinking.
Keeping collaboration and contract tests in correspondence with each
other requires attention and discipline. It can be tedious, but I
prefer that tedium to chasing down mistakes between two objects that
disagree on the contract between them. Others might swing the tradeoff
in the other direction.
> I have found integration/acceptance tests excellent protection against these
> types of errors.
Scam. :)
> * A stub in a collaboration test must correspond to an expected result
> in a contract test
> * An expectation in a collaboration test must correspond to an action
> in a contract test
>
> This provides a /systematic/ way to check that B remains in sync with
> the implementations of A. In your situation, I do this:
>
> 1. Change the implementation A1 of A, noticing a change in the contract of A.
> 2. For each change in the contract of A:
> 2.1 If an action in A has changed (parameter changed, method name
> changed), then look for expectations of that action in collaboration
> tests, and change them to match the new action.
> 2.2 If a response from A has changed (return type, value, what the
> value means), then look for stubs of that action in collaboration
> tests, and change them to match the new response.
Cheers. I teach it this way quite regularly. So far, no stories of
disastrous results.
> However I'm not sure I understand exactly what you mean by "contract test"?
> I take it that the "collaboration tests" are the unit tests of an object and
> not integration tests (unless you view the unit tests as integration tests
> with mocks:))?
B uses mock A correctly => collaboration tests
A1 implements A correctly => contract tests
A contract test is a test that checks an aspect of the contract of A.
You can start by writing a test for implementation A1, then abstract
away from that test the knowledge of A1 so that the test only knows
about A. It's now a contract test.
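The abstraction step described above can be sketched in plain Java, with hypothetical names (AInterface, A1, AContract); real code would inherit JUnit @Test methods instead of calling the checks from main.

```java
// Hypothetical names; a sketch of abstracting a test from A1 to A.
interface AInterface {
    int count();
}

class A1 implements AInterface {
    @Override
    public int count() { return 0; }  // a trivially conforming implementation
}

// The contract test knows only about AInterface; knowledge of A1 has
// been abstracted away into the factory method.
abstract class AContract {
    abstract AInterface createA();

    void countIsNeverNegative() {
        if (createA().count() < 0)
            throw new AssertionError("contract violated: negative count");
    }
}

class A1RespectsAContract extends AContract {
    @Override
    AInterface createA() { return new A1(); }
}

public class ContractAbstractionDemo {
    public static void main(String[] args) {
        new A1RespectsAContract().countIsNeverNegative();
        System.out.println("A1 respects the contract of A");
    }
}
```

Any further implementation A2 gets its own one-line subclass of AContract and inherits the same checks for free.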
More reading:
http://link.jbrains.ca/zziQcE
http://link.jbrains.ca/yG9kqS
Enjoy.
Hi Ole,
I'm a bit late to reply, but we had the same problem and this is how we solved it.
I'm easily confused by A-B examples, so let me tell you about a real case we
have here:
interface DocumentFetcher {
    DocumentFetcherResponse getDeviceList(Url url);
}

class DocumentFetcherHttp implements DocumentFetcher {
    // with an HttpClient inside and the logic to retrieve
    // and validate XML from the external URL
}

class DocumentFetcherResponse {  // value object
    boolean isOk;
    Document xmlDocument;
    String errorMessage;
    int statusCode;
}
This is used in the business logic to differentiate content according
to the device.
When we test the business logic, we mock DocumentFetcher, and as
expectations we have a set of stubbed DocumentFetcherResponses that
covers all the corner cases from the use cases.
Then when we test DocumentFetcherHttp, we check that for all the
possible XML and conditions (HTTP errors, misconfigurations, etc.) it
returns exactly the same set of DocumentFetcherResponses.
Finally we check the test coverage to be sure all the ifs and
methods in the code are covered by tests. Of course this is not a
guarantee by itself, but it helps to determine if we need more tests.
In this way, if there is a change in the code or in the requirements, we are
pretty sure to catch it all along the line.
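One way to read this approach: keep the canonical set of responses in one place, stub from it in the business-logic tests, and have the DocumentFetcherHttp tests assert their results against the same set. A rough sketch, with hypothetical names and values (the canonical list below is invented for illustration):

```java
// Hypothetical names and values; a sketch of sharing one response set
// between the business-logic tests and the fetcher tests.
import java.util.List;

class DocumentFetcherResponse {
    final boolean isOk;
    final int statusCode;
    final String errorMessage;
    DocumentFetcherResponse(boolean isOk, int statusCode, String errorMessage) {
        this.isOk = isOk;
        this.statusCode = statusCode;
        this.errorMessage = errorMessage;
    }
}

public class SharedResponsesDemo {
    // The canonical corner cases, shared by both test suites.
    static final List<DocumentFetcherResponse> CANONICAL_RESPONSES = List.of(
            new DocumentFetcherResponse(true, 200, ""),
            new DocumentFetcherResponse(false, 404, "document not found"),
            new DocumentFetcherResponse(false, 500, "server error"));

    // Stand-in for a DocumentFetcherHttp test: whatever the real fetcher
    // returns must have the shape of one of the canonical responses.
    static boolean matchesCanonicalShape(DocumentFetcherResponse actual) {
        return CANONICAL_RESPONSES.stream()
                .anyMatch(r -> r.isOk == actual.isOk
                            && r.statusCode == actual.statusCode);
    }

    public static void main(String[] args) {
        DocumentFetcherResponse fromFetcher =
                new DocumentFetcherResponse(false, 404, "document not found");
        System.out.println(matchesCanonicalShape(fromFetcher)
                ? "response is in the canonical set"
                : "response drifted from the canonical set");
    }
}
```

If the fetcher starts producing a response shape that no business-logic test ever stubbed, the check fails and points at the drift.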
I hope this can help.
cheers
Uberto
As Rainsberger said, many other things require discipline. TDD requires
discipline. But I see many people claiming that they do TDD who don't
bother to do the refactoring step properly. I see many who don't
eliminate the duplication. The elimination of duplication is so
important that the original definition of TDD in Kent Beck's book is:
write a failing test; write the code to make the test pass; eliminate
duplication. But they "forget" to do that. My point is, many things
require discipline. And this is one more of them.
But I like your idea of automatically verifying that mocks respect the
interface contract. Today I rely on acceptance tests to guarantee that
everything works together.
--
Abraços,
Josué
http://twitter.com/josuesantos
> If I understand you correctly, the usefulness of contract tests is extremely
> dependent on the fact that when you change an interface something will tell
> you "Hey, I'm changing an interface, I should also change the contract tests
> for it to match!".
A contract test is simply a test for the expected behavior from an
interface, so it's not substantially different from a test for any
object, except that no-one can instantiate an interface.
> That "something" might be sheer experience or just
> intuition or maybe even a third thing.
It's a rule, and no different from changing any other object: if I
want to change the behavior, then I start by changing the tests.
> The point is if you don't update the
> contract tests when updating the interfaces all is lost.
Again, this is true of all objects: if we don't change the tests when
changing the behavior, then all is lost.
> Therefore the
> question is whether the technique encourages people to do this actively or
> if it's more intuitive than before or if you in reality just pushed the
> problem further up the test hierarchy.
I don't see any difference whether we change interfaces/protocols or
classes/implementations.
> My second thought is that I'm not sure I follow how you practically utilize
> the contract tests. From your description it sounds like you never actually
> run the contract tests.
On the contrary: when you implement ArrayList, you have to pass the
contract tests for List, so you create class
ArrayListRespectsListContract extends ListContract and inherit the
tests that ArrayList must pass to be considered a correct
implementation of List, respecting the Liskov Substitution Principle.
You might also have some implementation details that need testing, in
which case, I recommend test-driving those details in other test
classes.
Of course, ListContract must be abstract, because it has abstract
methods to create Lists in various states: empty, with one item, with
a few items, and so on.
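The List example above might look roughly like this in plain Java (no JUnit here; in real code emptyListHasSizeZero and addedItemCanBeRetrieved would be inherited @Test methods, and the contract would cover far more of List's behaviour):

```java
// A sketch: an abstract ListContract with abstract factory methods for
// lists in various states, and a concrete subclass that runs the
// inherited checks against ArrayList.
import java.util.ArrayList;
import java.util.List;

abstract class ListContract {
    abstract List<String> createEmptyList();
    abstract List<String> createListWithOneItem(String item);

    void emptyListHasSizeZero() {
        if (createEmptyList().size() != 0)
            throw new AssertionError("empty list must have size 0");
    }

    void addedItemCanBeRetrieved() {
        if (!createListWithOneItem("x").get(0).equals("x"))
            throw new AssertionError("item must be retrievable by index");
    }
}

class ArrayListRespectsListContract extends ListContract {
    @Override
    List<String> createEmptyList() { return new ArrayList<>(); }

    @Override
    List<String> createListWithOneItem(String item) {
        List<String> list = new ArrayList<>();
        list.add(item);
        return list;
    }
}

public class ListContractDemo {
    public static void main(String[] args) {
        ArrayListRespectsListContract tests = new ArrayListRespectsListContract();
        tests.emptyListHasSizeZero();
        tests.addedItemCanBeRetrieved();
        System.out.println("ArrayList passes the List contract tests");
    }
}
```

A LinkedListRespectsListContract would differ only in its factory methods, which is what makes the contract tests a statement of the Liskov Substitution Principle.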
> All they are present for is for you to get that "aha,
> I am changing a contract test, so I must do one of the following now to stubs
> or mocks: [insert your practical guide here]". After changing the contract
> test you manually go through the touched stubs/mocks. This seems OK, but
> couldn't we do better?
As I say above, the tests actually run.
> When I think about contract tests, I actually want them to tell me at test
> time if any stub or mock has the wrong assumptions. I think this is
> possible, but again please bear with me if I haven't grasped all
> consequences of the technique yet.
I don't see how to do this with dynamic mock objects, because there is
no compile-time class to test, and I set different stubs and
expectations on the interface methods from test to test.
> Let's try a concrete example derived from my previous one:
> public abstract class AContractTest {
> abstract AInterface createAObject();
>
> @Test(expected = ARuntimeException.class)
> public void testFuncThrowsAException() {
> AInterface aObject = createAObject();
> aObject.func();
> }
> }
This contract test doesn't describe the conditions in which func()
throws ARuntimeException. Does func() always throw that exception? If
it does, then what use is it?
> public class BTest extends AContractTest {
> .... // Same earlier example
> @Override
> AInterface createAObject() {
> AInterface aStub = org.mockito.Mockito.mock(AInterface.class);
> when(aStub.func()).thenThrow(new ARuntimeException());
> return aStub;
> }
> }
I can't see the use of this test. This test doesn't stop me from
stubbing func() differently in tests for clients of A.
BTest would not extend AContractTest; instead, when you implement A
with class C, you'll have this:
class CContractTest extends AContractTest {
@Override
AInterface createAObject() {
return new C(); // C implements A
}
}
Now CContractTest inherits the tests from AContractTest, so those
tests execute on implementation C to verify that C implements A
correctly.
> What do you think? I certainly hope I haven't missed something important
> (which I should have caught) that makes the idea irrelevant or useless, but
> it happens from time to time, I guess that's why I like discussing my ideas
> with others :)
What you've written here misses the mark by a long way. I don't have
the energy to type out my entire demo here; it looks like I need to do
it as a video. I don't know when I'll do that, but I will do it
eventually. :)
> BTest would not extend AContractTest; instead, when you implement A
> with class C, you'll have this:
>
> class CContractTest extends AContractTest {
> @Override
> AInterface createAObject() {
> return new C(); // C implements A
> }
> }
>
> Now CContractTest inherits the tests from AContractTest, so those
> tests execute on implementation C to verify that C implements A
> correctly.
S
Perhaps I'm missing something, but in the Ruby world, do shared examples in e.g. RSpec perform the same task as contract tests? If I can say that a new thing behaves the same way as another, and the examples are sufficiently duck-typed, am I safe?
That's right, I do. These contract tests have saved our big old rails
project numerous times -- or at least saved loads of debugging time.
For example, our views have breadcrumbs that show a route into deep
content. Each "crumb" can in fact be any one of a growing number of
different kinds of domain object, so we have a contract test that
defines the behaviour we expect from anything that can appear in a
breadcrumb. The shared example group expects there to be a 'subject'
already set up, and then it performs a variety of checks, ranging from
verifying that the subject implements every method on the interface to
actual behaviour contracts.
Cheers,
Kevin
I often hear this debate when discussing "unit tests", in which you test a
class and mock its collaborators, against "integration tests", in
which you plug all your classes together. Well, there is no right
answer, IMHO. Both techniques have pros and cons.
When you do integration testing, you can rapidly notice any break in a
class contract. However, it can also make your tests really hard to
write. A class "A" collaborates with another class "B" which, in its
turn, collaborates with another class "C", and so on. Usually, class
"A" doesn't care about "B" using "C". Now you have to instantiate all
of them in a test and, if some dependency changes in the graph, you
would have to change all your tests. Maintainability gets harder
when you do it that way.
When using mocks, your tests are less coupled, but less effective. You
don't perceive a possible break in a contract. However, in practice, I
tend to use the mocking style of writing tests. When I change a
contract of something, I press "Ctrl+Shift+G" in Eclipse, and it shows
me all tests that are using that interface, so I can review and change
the behavior of the class accordingly (which will change, right? After
all, you changed a contract!).
Regards,
Mauricio Aniche
--
Mauricio Aniche
www.aniche.com.br
@mauricioaniche
Ah, thanks guys. I've used this approach in the past and had also been struggling with translating the definition of a contract test to RSpec.
This is especially useful for writing gems that are intended to be extended with new adapters etc. - just tell the user who wants to write an adapter to stick a line in their spec file.
> Perhaps I'm missing something, but in the Ruby world, do shared examples in e.g. RSpec perform the same task as contract tests? If I can say that a new thing behaves the same way as another, and the examples are sufficiently duck-typed, am I safe?
I think so; I don't have enough varied experience to feel comfortable
with the details, but generally, I think so.
> But the conditions are also expressed in the collaboration tests. If a
> collaboration test misstates the conditions, then A1 can't help but
> violate the LSP: while it satisfies the contract as stated by the
> contract tests, it violates the contract as expressed by the collaboration
> tests.
Collaboration tests make assumptions about the contract; contract
tests try to justify those assumptions. For over 10 years, we've
written good collaboration tests, but few people write contract tests.
> The contract is articulated in more than one place.
I disagree. The contract tests articulate the contract; the
collaboration tests use the contract. I see exactly the same
"duplication" between production code and tests: more like
double-entry book-keeping than worrisome duplication.
> When using mocks, your tests are less coupled, but less effective.
…as change detectors, but they provide stronger feedback about the design.
> You
> don't perceive a possible break in a contract. However, in practice, I
> tend to use the mocking style of writing tests. When I change a
> contract of something, I press "Ctrl+Shift+G" in Eclipse, and it shows
> me all tests that are using that interface, so I can review and change
> the behavior of the class accordingly (which will change, right? After
> all, you changed a contract!).
Exactly.
Yeah. It would be awesome if we found a way to have both effective
internal and external feedback, wouldn't it!? Am I dreaming too much?
:)
Regards,
Mauricio Aniche
> Yeah. It would be awesome if we found a way to have both effective
> internal and external feedback, wouldn't it!? Am I dreaming too much? :)
Unit tests (internal), some acceptance tests (external). No?
--
Abraços,
Josué
http://twitter.com/josuesantos