Avoiding Acceptance Test bloat, when using the Hexagonal architecture


Coran Hoskin

Dec 28, 2013, 2:45:53 PM
to growing-object-o...@googlegroups.com
I've been making my first attempt at implementing a development process as close to the one outlined in GOOS as possible. So far so good, but one issue I keep hitting is this: I begin a feature with a failing Acceptance Test, then immediately follow the TDD style of doing the simplest thing possible to make the test pass, which means simply returning whatever the assertion expects. Obviously this doesn't mean the feature is complete. But I struggle to see any way of getting meaningful Acceptance Tests other than writing two (or more) Acceptance Tests for each feature, so that there is not just one predictable output, but an output that depends on a given input, forcing the correct solution.
Unless it's advisable to break the "one assertion per test" guideline some suggest, and put two asserts per Acceptance Test, to exercise two possible outputs?

I mention the Hexagonal Architecture, as that is what I am following, and my Acceptance Tests work on the boundaries, so I am unable to look into the inner components/objects to verify the solution.

Any help greatly appreciated!
Thanks,
Coran

Gishu Pillai

Dec 29, 2013, 4:48:34 AM
to growing-object-o...@googlegroups.com
One xUnit assertion per test is a good rule of thumb for unit tests... not for acceptance tests.

Try acceptance tests that are scenario based: each tests one scenario, but may employ many intermediate assertions.
The only way to avoid acceptance test bloat is to *NOT* write tests, or to stash them in a low-frequency regression suite (or delete them) once they have served their purpose. Someone has to periodically prune the acceptance test suite.

If this suite is minimal, you may even be able to make them true end-to-end tests.

User-story based acceptance tests will lead to a test maintenance nightmare that you may not wake up from.
Incremental growth of a feature should be reflected as incremental growth of the corresponding acceptance tests, not as more new tests.
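A scenario-based acceptance test along these lines might look like the following sketch, written in plain Java so it runs standalone. The VendingMachine class and all its methods are hypothetical, invented purely for illustration:

```java
// Sketch of a scenario-based acceptance test: one scenario,
// several intermediate assertions along the way.
public class VendingScenarioTest {

    // Minimal fake system-under-test so the example is self-contained.
    static class VendingMachine {
        private int credit;
        private String dispensed;
        void insertCoins(int cents) { credit += cents; }
        int credit() { return credit; }
        boolean select(String item, int priceCents) {
            if (credit < priceCents) return false;
            credit -= priceCents;
            dispensed = item;
            return true;
        }
        String dispensed() { return dispensed; }
    }

    static void check(boolean condition, String description) {
        if (!condition) throw new AssertionError("failed: " + description);
    }

    public static void main(String[] args) {
        VendingMachine machine = new VendingMachine();

        machine.insertCoins(100);
        check(machine.credit() == 100, "credit registered");

        check(machine.select("cola", 80), "item vended when credit suffices");
        check("cola".equals(machine.dispensed()), "correct item dispensed");
        check(machine.credit() == 20, "change remains as credit");

        System.out.println("scenario passed");
    }
}
```

One scenario, one test, but each step of the walk-through is verified before the next step builds on it.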

Gishu

Gishu Pillai

Dec 29, 2013, 4:50:47 AM
to growing-object-o...@googlegroups.com
Apologies for the typo: I meant *NOT* write granular acceptance tests...

Coran Hoskin

Dec 29, 2013, 6:19:05 AM
to growing-object-o...@googlegroups.com
Thanks for the reply.
Yes, I'm starting to think the one-assertion-per-test rule applies to unit tests, but less so to Acceptance Tests.
Maybe a better guideline is one assertion per use case within the Acceptance Test. So what would otherwise be multiple use cases, such as invalid input or error handling, each become one assertion within the encompassing Acceptance Test for the related feature.

Nat Pryce

Dec 29, 2013, 8:47:32 AM
to growing-object-o...@googlegroups.com
On 29 December 2013 11:19, Coran Hoskin <cor...@gmail.com> wrote:
I'm starting to think the one assertion per test applies to unit tests, but less so Acceptance Tests. 

I find it unhelpful in unit tests as well. A unit test should demonstrate a single aspect of behaviour to the reader, and it may take more than one assertion to express that behaviour.

To borrow an over-used example: transferring money between accounts.  The behaviour is that money is moved from one account and to the other. Therefore I'd assert in a unit test that the source account was reduced by the transferred amount and the destination account increased by the transferred amount.  

I'd not write one test that shows that the source account was reduced by a transfer and another test that shows that the destination account is increased by a transfer, because after writing one of those tests and making it pass I'd have produced code that is incorrect. That is, it's not a useful test to write in the first place. 

Neither would I want the reader of my tests to have to piece the behaviour together from multiple itty-bitty tests.  Think of the reader!
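Nat's transfer example might be sketched in plain Java as follows (the Account class and its methods are invented here for illustration, not taken from any real codebase): one test, one behaviour, two assertions.

```java
// One behaviour, "money moves between accounts", expressed by
// asserting on both sides of the transfer in the same test.
public class TransferTest {

    static class Account {
        private int balance;
        Account(int openingBalance) { balance = openingBalance; }
        int balance() { return balance; }
        void transferTo(Account destination, int amount) {
            balance -= amount;
            destination.balance += amount;
        }
    }

    public static void main(String[] args) {
        Account source = new Account(100);
        Account destination = new Account(50);

        source.transferTo(destination, 30);

        // Both assertions together express the single behaviour.
        if (source.balance() != 70)
            throw new AssertionError("source account was not debited");
        if (destination.balance() != 80)
            throw new AssertionError("destination account was not credited");

        System.out.println("transfer test passed");
    }
}
```

Splitting these two checks into separate tests would let a transfer that only debits (or only credits) pass one of them, which is exactly the incorrect-but-green code Nat describes.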


--Nat

Steve Freeman

Dec 29, 2013, 9:42:23 AM
to growing-object-o...@googlegroups.com
What Nat said. I like to think in terms of Alistair Cockburn's formulation of use cases: the core success case followed by the exceptions.

There are a lot of "training wheel" rules out there ("6% of unit tests should have mocks") that aren't very interesting. What matters is thinking in terms of what you want people to understand, then finding the best way to say that.

One more point: with the outside-in approach, the idea is to test appropriately at each level, assuming that what's underneath it works. At the acceptance level, that means showing that the major paths through the system hang together, and that failures are handled. It doesn't have to cover every last possibility, which would get into a combinatorial explosion.

S

Dale Emery

Dec 29, 2013, 1:56:24 PM
to growing-object-o...@googlegroups.com
Hi Coran,


Unless it's advisable to break the "one assertion per test" guideline some suggest, and put two asserts per Acceptance Test, to exercise two possible outputs?

When I see rules like these, I like to dig beneath them to discover interesting distinctions and effects. Once I understand the effects, I can decide case by case whether I want those effects.

A key effect of an assertion is to terminate the test if the assertion is not satisfied. None of the code after a failing assertion is executed.

So: When would I want to execute the subsequent code, and when would I want to avoid it? For me, the key distinction is whether the subsequent code would give me useful diagnostic information, or only noise.

If the subsequent code might help me diagnose what went wrong, I might break my test into multiple tests, each with one assertion. That way, even if one test fails, I can get the useful information from the other tests.

On the other hand, if the code that follows a failed assertion would generate only noise, I might leave the code as a single test with multiple assertions, so that a failing assertion can terminate the test and avoid generating the noise.
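The trade-off can be sketched with a tiny hand-rolled runner (all names here are hypothetical, invented for illustration): the combined test stops at its first failing assertion, while the split tests each report independently.

```java
// Demonstrates assertion termination: a failing assertion hides
// everything after it, while split tests report separately.
public class AssertionTerminationDemo {

    static void check(boolean condition, String message) {
        if (!condition) throw new AssertionError(message);
    }

    // One test, two assertions: if the first fails we never learn
    // whether the second would have passed.
    static void combinedTest(int value) {
        check(value > 0, "value should be positive");
        check(value % 2 == 0, "value should be even");
    }

    // The same checks as two tests: each outcome is reported on its own.
    static void positiveTest(int value) { check(value > 0, "value should be positive"); }
    static void evenTest(int value) { check(value % 2 == 0, "value should be even"); }

    static String run(String name, Runnable test) {
        try { test.run(); return name + ": passed"; }
        catch (AssertionError e) { return name + ": FAILED (" + e.getMessage() + ")"; }
    }

    public static void main(String[] args) {
        int broken = -2;  // fails the positive check, passes the even check
        System.out.println(run("combined", () -> combinedTest(broken)));
        System.out.println(run("positive", () -> positiveTest(broken)));
        System.out.println(run("even", () -> evenTest(broken)));
        // The combined run reports only the first failure; the split runs
        // reveal that the even check would actually have passed.
    }
}
```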

So that’s how I treat “training wheel” rules in any domain. Look for the important effects and distinctions. Then consider those effects and distinctions in light of what I want to accomplish.

Dale

Steve Freeman

Dec 30, 2013, 7:01:26 AM
to growing-object-o...@googlegroups.com
Damn, Dale nails it again.

My only difference would be that I find myself writing more complex matchers these days, so that I can hit the point of interest in one assertion (at least as far as the test code goes).
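A hand-rolled sketch of that idea, without depending on Hamcrest or JMock (all class and method names are invented for illustration): a composite matcher checks several properties of the point of interest while the test reads as a single assertion, and the mismatch message says which part failed.

```java
// A composite matcher: one assertion in the test, several checks
// inside the matcher, each with its own diagnostic message.
public class MatcherSketch {

    static class Transfer {
        final int sourceBalance, destinationBalance;
        Transfer(int source, int destination) {
            sourceBalance = source;
            destinationBalance = destination;
        }
    }

    static class TransferMatcher {
        private final int expectedSource, expectedDestination;
        TransferMatcher(int source, int destination) {
            expectedSource = source;
            expectedDestination = destination;
        }
        // Returns null on a match, otherwise a description of the mismatch.
        String mismatch(Transfer actual) {
            if (actual.sourceBalance != expectedSource)
                return "source balance was " + actual.sourceBalance
                        + ", expected " + expectedSource;
            if (actual.destinationBalance != expectedDestination)
                return "destination balance was " + actual.destinationBalance
                        + ", expected " + expectedDestination;
            return null;
        }
    }

    static void assertThat(Transfer actual, TransferMatcher matcher) {
        String mismatch = matcher.mismatch(actual);
        if (mismatch != null) throw new AssertionError(mismatch);
    }

    public static void main(String[] args) {
        // The test body is a single assertion on the whole behaviour.
        assertThat(new Transfer(70, 80), new TransferMatcher(70, 80));
        System.out.println("matcher test passed");
    }
}
```

In real code the same shape would typically be a Hamcrest `TypeSafeMatcher`, so the mismatch description shows up in the test runner's failure output.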

S

Gregory Moeck

Jan 1, 2014, 12:54:10 PM
to growing-object-o...@googlegroups.com
I agree with Steve and Nat's sentiment on the idea that a unit test should express a single aspect of behavior to the reader, but I would want to phrase the goal a slightly different way. In my opinion unit tests should be written in such a way that they document the use cases while being optimized to make clear what specifically went wrong when they fail.

The single-assertion "rule" attempts to aid the second part of that statement. If there is only one assertion, then knowing what specifically failed should be immediately obvious: either the value returned was different from what was expected, or an incorrect message was sent to a collaborator. When a test has multiple assertions you often have to reason about the test code before you can reason about the production code (which often requires keeping both paths in your head at the same time, a feat my poor brain cannot accommodate). To give another example, a test with multiple assertions will sometimes fail before it has finished executing the totality of its use case, giving you incomplete information about what went wrong.

If you can mitigate the problems around making clear what specifically went wrong when the test fails, using libraries like JMock, then the one-assertion "rule" becomes superfluous, since you're accomplishing the intent of the rule even though you're breaking it. I would heartily agree with Nat when he says "think of the reader". Just make sure you think of the two primary use cases in which the reader will be reading the test: (1) as documentation, and (2) when they've just (likely inadvertently) broken the functionality of the system and need to know why. When there is a conflict between the two, I personally think one ought to prefer clarity in the second use case over the first, but certainly try for both.



Caio Fernando Bertoldi Paes de Andrade

Jan 1, 2014, 1:53:02 PM
to growing-object-o...@googlegroups.com
My two cents:

I understand the single assertion rule as enforcing the triple A test structure:

Arrange, Act, Assert

The single assertion rule guides us away from the mistake of having multiple arrangements, actions and assertions interleaved in the same unit test.

It doesn't matter how many "physical" assertions you have, since together they represent only one "logical" assertion: that your system behaved correctly when acted upon with a given setup.
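A minimal Arrange/Act/Assert sketch in plain Java (the ShoppingCart class is invented for illustration): several physical assertions, but one logical assertion about how the system behaved for a single action.

```java
// Arrange, Act, Assert: one action, one logical assertion made of
// several physical checks on the resulting state.
public class CheckoutTest {

    static class ShoppingCart {
        private int totalCents;
        private int items;
        void add(int priceCents) { items++; totalCents += priceCents; }
        void applyDiscountPercent(int percent) {
            totalCents = totalCents * (100 - percent) / 100;
        }
        int totalCents() { return totalCents; }
        int itemCount() { return items; }
    }

    public static void main(String[] args) {
        // Arrange
        ShoppingCart cart = new ShoppingCart();
        cart.add(500);
        cart.add(500);

        // Act
        cart.applyDiscountPercent(10);

        // Assert: one logical assertion, two physical checks.
        if (cart.totalCents() != 900)
            throw new AssertionError("discounted total wrong");
        if (cart.itemCount() != 2)
            throw new AssertionError("item count changed");

        System.out.println("checkout test passed");
    }
}
```

The mistake the rule warns against would be a second `applyDiscountPercent` call followed by more assertions in the same test: that interleaves a second Act/Assert cycle into one test body.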

Caio
