TDD of the Interactor-Entity-Gateway architecture


Frederik Krautwald

Jul 11, 2014, 8:55:46 AM
to clean-code...@googlegroups.com
I’ve been struggling for the past couple of weeks with TDD of the Interactor-Entity-Gateway architecture. What I mean is that I don’t know where to begin. Do you begin with test-driving the use cases, the entities, or the gateways?

If I begin with a use case (interactor), I face the problem that the architecture proposes an interface (boundary) that follows the command pattern with no return value, i.e., its execute method takes a request model and a response recipient as arguments, and the response recipient gets called on either its success or failure method. This leads to mocking, but since the boundary interface has no implementation yet, it is also mocked.
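
To make sure we are talking about the same thing, this is roughly how I picture that boundary (just a sketch; the names are my own):

interface RequestModel {}                     // marker for request models
interface ResponseModel {}                    // marker for response models

interface ResponseRecipient {
    void success(ResponseModel response);     // called on the happy path
    void failure(String reason);              // called when the use case fails
}

// Command-style boundary: execute() returns nothing; the recipient is notified.
interface Boundary {
    void execute(RequestModel request, ResponseRecipient recipient);
}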

Is it wrong to test-drive the interface and should I only test implementation?

The same goes for the entities. They have abstract methods and no implementation on the interactor side. How do we test-drive their structure?

I end up writing unit tests for the implementation of entities first, which happens to be outside the interactor side, usually in the repository.

I would really appreciate some guidance with test-driving the Interactor-Entity-Gateway architecture. I have watched Uncle Bob’s clean-coder videos on TDD, so please don’t refer to them, as they don’t provide answers to my problems.

Many thanks.

daniphp

Jul 11, 2014, 9:29:01 AM
to clean-code...@googlegroups.com
I start with the Interactors.
I use concrete InMemoryGateways that implement the interfaces.
I don't use abstract entities unless they're required.
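
A concrete in-memory gateway is roughly something like this (just a sketch; the gateway and entity names are only an example):

import java.util.ArrayList;
import java.util.List;

class Person {}                               // placeholder entity for the sketch

interface PersonGateway {                     // interface owned by the interactor side
    void save(Person person);
    List<Person> findAll();
}

class InMemoryPersonGateway implements PersonGateway {
    private final List<Person> people = new ArrayList<>();

    public void save(Person person) { people.add(person); }

    public List<Person> findAll() { return new ArrayList<>(people); }
}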

Tests need to have value; don't test interfaces blindly just to get good coverage.

Caio Fernando Bertoldi Paes de Andrade

Jul 11, 2014, 10:33:51 AM
to clean-code...@googlegroups.com
Frederik,

> Do you begin with test-driving the use cases, the entities, or the gateways?

I unit-test-drive use cases. I integration-test-drive gateways. :-)


> Is it wrong to test-drive the interface and should I only test implementation?

It’s ok to refer to the interactor class directly in your tests.

> The same goes for the entities. […] How do we test-drive their structure?

I don't write tests for entities. Entities appear in my designs as refactorings extracted from use cases, so they are already covered indirectly by the unit tests. Since I don’t have tests coupled to entities, I can refactor them very easily, and I love that.

> I have watched Uncle Bob’s clean-coder videos on TDD, so please don’t refer to them, as they don’t provide answers to my problems.

I’m sure Uncle Bob has answered those questions somewhere: maybe here in the discussion group, maybe in the cleancoders videos, maybe in a blog post or a talk he gave at some conference. I’m sure he has talked about these issues, just not sure where or when.

Caio


Jakob Holderbaum

Aug 20, 2014, 12:26:12 AM
to clean-code...@googlegroups.com
Nice answer, I'd have gone the same way!

I always start by test-driving the concrete interactors, and very often I use other interactors to inspect system state. With this approach, the acceptance layer of your test suite (the unit tests for your concrete use cases) stays independent of the entities.

I also use in-memory implementations for the gateways, which makes my tests blazingly fast and simple to implement.
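
In code, such a test looks roughly like this for me (a self-contained sketch; all names are invented, JUnit 4 assumed):

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.Test;

public class RegisterUserTest {

    // Tiny stand-ins for the real system, kept inside the sketch so it compiles.
    interface UserGateway { void save(String name); List<String> all(); }

    static class InMemoryUserGateway implements UserGateway {
        private final List<String> names = new ArrayList<>();
        public void save(String name) { names.add(name); }
        public List<String> all() { return new ArrayList<>(names); }
    }

    static class RegisterUser {                       // interactor under test
        private final UserGateway gateway;
        RegisterUser(UserGateway gateway) { this.gateway = gateway; }
        void execute(String name) { gateway.save(name); }
    }

    static class ListUsers {                          // interactor used to observe state
        private final UserGateway gateway;
        ListUsers(UserGateway gateway) { this.gateway = gateway; }
        List<String> execute() { return gateway.all(); }
    }

    @Test
    public void registeredUserShowsUpInTheListing() {
        UserGateway gateway = new InMemoryUserGateway();
        new RegisterUser(gateway).execute("Frederik");

        List<String> users = new ListUsers(gateway).execute();

        assertEquals(1, users.size());
        assertEquals("Frederik", users.get(0));
    }
}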

But I have a question, Frederik.

You say that you don't specifically unit-test your entities because they emerge from the use cases. I had the same urge about a year ago and tried it on smaller projects, not on large ones; I didn't want to take the risk at that point.

My observation was that this approach to testing feels nice at first (as you said, entity refactorings for free) but leads to a problematic state.

If your interactors use several entities, or an aggregate of several entities, you quickly reach a state explosion, and testing special edge cases becomes really expensive.

What are your experiences?


--
Jakob Holderbaum, B.Sc
Systems Engineer

0176 637 297 71
http://jakob.io
h...@jakob.io
#hldrbm

Frederik Krautwald

Aug 20, 2014, 12:58:53 AM
to clean-code...@googlegroups.com, mail...@jakob.io
I believe it was Caio that said his entities emerged and were refactored from his use cases.

Jakob Holderbaum

Aug 20, 2014, 2:47:29 AM
to Frederik Krautwald, clean-code...@googlegroups.com
That is true, I apologize and redirect the question to Caio! :)

--
Jakob Holderbaum, M.Sc.

Sebastian Gozin

Aug 20, 2014, 6:31:55 AM
to clean-code...@googlegroups.com, fkrau...@gmail.com, mail...@jakob.io
I don't seem to test individual entities either. For the most part my use cases don't know much about them anyway: just that they have an id and maybe one or two properties, if that (e.g. a status).

The actual details of the entity tend to flow from the acceptance tests, and sometimes they just are... hard to explain, but I sometimes leave room for a dynamic shape.

In practice that means I can very often just throw a simple structure into an interactor and test that it works properly, while at runtime a fully constructed entity dances through the interactions and the details are covered by production-level plugins.
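
Roughly like this (a quick sketch; everything here is invented):

// The interactor only sees a narrow view of the entity (an id and a status),
// so in a test I can throw any simple structure at it.
interface Shippable {
    String id();
    String status();
}

class ShipOrder {
    // Decides whether the order can be shipped; it never sees the full
    // production entity, only the narrow Shippable view.
    boolean execute(Shippable order) {
        return "PAID".equals(order.status());
    }
}

class ShipOrderExample {
    public static void main(String[] args) {
        Shippable paidOrder = new Shippable() {   // throw-away structure for the test
            public String id() { return "42"; }
            public String status() { return "PAID"; }
        };
        System.out.println(new ShipOrder().execute(paidOrder));   // prints true
    }
}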

Caio Fernando Bertoldi Paes de Andrade

Aug 20, 2014, 1:08:04 PM
to clean-code...@googlegroups.com
I have been giving this issue a lot of thought lately.
Extracting entities from use cases is very interesting and I have been following that path, but I end up with some strange duplication in the use case tests, and I am starting to think that’s very bad. Let me give an example:

CreatePerson needs an address.
EditPerson needs an address too.
Both validate the addresses in the same way, so I extract that duplicate validation into the Address entity.

Imagine that there are N reasons for an address to be invalid, and there are tests for each of those reasons in both use case test suites, i.e., duplicate tests.

If I want to change the address validation rules, I have to change the tests in both use cases. This can scale up to a hideous number of test modifications just to change one validation, which may be what you meant by the explosion of edge case tests.

So I began extracting the validation tests into an AddressTest class, and the use case tests then know only about a generic valid or invalid address, without knowing all the rules being applied. The downside is that it feels like I’m sort of mocking the entity, so the use case tests end up with a little knowledge about the entities after all.

So I start by extracting the entities from the use cases like I said before, but I’m coming to the conclusion that I should also extract their tests, which would result in entity tests and less knowledge in use case tests.
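
To make it concrete, I mean something in this direction (a sketch; the names are invented):

// The N validation reasons live in the Address entity, in one place.
class Address {
    private final String street;
    private final String zipCode;

    Address(String street, String zipCode) {
        this.street = street;
        this.zipCode = zipCode;
    }

    boolean isValid() {
        return street != null && !street.isEmpty()
            && zipCode != null && zipCode.matches("\\d{5}");
    }
}

An AddressTest then enumerates the N invalid reasons once, and CreatePersonTest/EditPersonTest only need one valid and one invalid Address fixture.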

Any thoughts?

Caio





Michel Henrich

Aug 20, 2014, 4:21:18 PM
to clean-code...@googlegroups.com, caio...@icloud.com
Caio,

I've been through that too, and the answer for me was to create a test hierarchy.
For instance, if both the Create and Edit use cases need to run the same validations, then the validation tests go into an abstract test class that is inherited by both CreateTest and EditTest.
Since I tend to use the command pattern for the use cases, it's pretty easy to define abstract methods that each concrete test class implements to return the actual use-case command.

In theory this is problematic if you have multiple "common behaviours" shared across multiple use cases, because (at least in Java) you can't inherit from more than one class. But so far this hasn't happened to me.
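
Here is the shape of that hierarchy, as a sketch (JUnit 4, all names invented, one class per file):

// PersonCommand.java -- the command interface the use cases share in this sketch.
public interface PersonCommand { boolean execute(); }

// CreatePersonCommand.java
public class CreatePersonCommand implements PersonCommand {
    private final String address;
    public CreatePersonCommand(String address) { this.address = address; }
    public boolean execute() { return address != null && !address.isEmpty(); }
}

// EditPersonCommand.java
public class EditPersonCommand implements PersonCommand {
    private final String address;
    public EditPersonCommand(String address) { this.address = address; }
    public boolean execute() { return address != null && !address.isEmpty(); }
}

// AbstractPersonValidationTest.java -- the shared validation tests live here.
import static org.junit.Assert.assertFalse;
import org.junit.Test;

public abstract class AbstractPersonValidationTest {

    // Each concrete test class says which use-case command it exercises.
    protected abstract PersonCommand commandWithAddress(String address);

    @Test
    public void rejectsEmptyAddress() {
        assertFalse(commandWithAddress("").execute());
    }
}

// CreatePersonTest.java
public class CreatePersonTest extends AbstractPersonValidationTest {
    protected PersonCommand commandWithAddress(String address) {
        return new CreatePersonCommand(address);
    }
}

// EditPersonTest.java
public class EditPersonTest extends AbstractPersonValidationTest {
    protected PersonCommand commandWithAddress(String address) {
        return new EditPersonCommand(address);
    }
}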

Jakob Holderbaum

Aug 21, 2014, 11:31:53 PM
to clean-code...@googlegroups.com
Yeah, I was at exactly the same decision point. After all, I don't see "duplication" (I don't even want to say the word) between unit and acceptance tests as a problem at all.

The different test layers serve different purposes. Acceptance tests are written at the beginning of an iteration/sprint/week/"other fancy word" and are used to measure your development progress. They specify the behavior of your system from a feature perspective and try to answer the question: "Does the system do the right, agreed-upon things?"

You can go several ways here. You can duplicate the tests, which can get painful pretty fast. You could extract a testing object that executes tests against another object and use it throughout your tests. You can do as Michel Henrich does and build up a hierarchy of tests. Or you can make the entire validation an explicit, injectable service (which falls back to its default implementation) and inject a null implementation as a spy-like object in your tests; then you test the default implementation separately, once.
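
That last option looks roughly like this (a sketch; the names are invented):

// The interactor takes a validator; production wires in the default
// implementation, tests can inject a spy-like null implementation.
interface AddressValidator {
    boolean isValid(String address);
}

class DefaultAddressValidator implements AddressValidator {
    public boolean isValid(String address) {
        return address != null && !address.trim().isEmpty();   // tested separately, once
    }
}

class SpyAddressValidator implements AddressValidator {
    boolean wasCalled = false;
    public boolean isValid(String address) {
        wasCalled = true;                                       // records the call
        return true;                                            // always "valid" in tests
    }
}

class CreatePersonInteractor {
    private final AddressValidator validator;
    CreatePersonInteractor(AddressValidator validator) { this.validator = validator; }

    boolean execute(String address) {
        return validator.isValid(address);                      // rest of the use case omitted
    }
}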


On the other hand, the unit test level serves the developer and answers the question "Am I doing this thing right?" Hard edge cases like null values, and of course semantic edge cases, can be tested here. And if you keep a strict 1-to-1 relation between your unit tests and your units, and the tests stay free of couplings, refactoring them is not a big deal.

So basically I came to the conclusion that having unit and acceptance tests in place leads to very robust and yet changeable systems.

WDYT?

Caio Fernando Bertoldi Paes de Andrade

Aug 22, 2014, 10:42:40 AM
to clean-code...@googlegroups.com
@Michel, that’s a very interesting approach, I’ll try it out. :-)

@Jakob, that’s a very interesting comment, but where did acceptance tests come into my example? Do you consider use case tests to be acceptance tests because they test from the user’s point of view, even though they are at the unit level and haven’t been written by a business person?

Caio


Jakob Holderbaum

Aug 22, 2014, 12:35:11 PM
to clean-code...@googlegroups.com
Ah, not necessarily!

I unit test my interactors/use cases as well as my entities and other
services.

But I like to keep a very strict acceptance-first test approach, where I use the interactors almost exclusively to test the acceptance of different features.

Does this answer your question?

Caio Fernando Bertoldi Paes de Andrade

Aug 22, 2014, 1:12:43 PM
to clean-code...@googlegroups.com
It answers my question but raises another (the original) one: do you think it's ok, then, to have duplication in those (acceptance) tests?

Caio


Jakob Holderbaum

Aug 23, 2014, 9:40:46 AM
to clean-code...@googlegroups.com
Yes, indeed. Especially on larger projects with several developers, this kind of duplication has an ROI point very close to the project kickoff.

If you stick to the idea (promoted in GOOS) that every feature should be introduced by high-level acceptance tests (I interpret "high-level" here as the use case level; I'm not a big fan of UI involvement in testing), this suite of tests quickly grows into a very readable description of your actual system. You could use something like Cucumber or FitNesse. I use plain code, but tend to stick to high-level, DSL-like methods inside the test methods so they stay very semantic and highly readable.
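
For example, a feature test of mine reads roughly like this (a sketch; names are invented, and the helpers work on a tiny stand-in here so the sketch runs, whereas in my real suites they drive the interactors):

import static org.junit.Assert.assertTrue;

import java.util.HashSet;
import java.util.Set;

import org.junit.Test;

public class CheckoutFeatureTest {

    // Stand-in state; in a real suite these helpers would call the interactors.
    private final Set<String> paidOrders = new HashSet<String>();
    private final Set<String> shippedOrders = new HashSet<String>();

    @Test
    public void aPaidOrderCanBeShipped() {
        givenAPaidOrder("order-42");
        whenTheOrderIsShipped("order-42");
        thenTheOrderIsMarkedAsShipped("order-42");
    }

    private void givenAPaidOrder(String id) { paidOrders.add(id); }

    private void whenTheOrderIsShipped(String id) {
        if (paidOrders.contains(id)) { shippedOrders.add(id); }
    }

    private void thenTheOrderIsMarkedAsShipped(String id) {
        assertTrue(shippedOrders.contains(id));
    }
}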

The partial duplication between the library of acceptance tests and some of your unit tests is probably not even produced by the same programmer on a larger team, so it can be seen as double bookkeeping.

From my own experience, this duplication is no hindrance to serious refactorings or behavioral modifications. It is probably even more supportive than your unit tests alone.

If you have to make serious changes to a feature, you first change the affected feature tests in your acceptance suite or add some new ones.

The changes to your domain that follow can then be made very freely. You can delete units or move code between them (and of course modify the unit tests). Especially when a feature touches more than one unit, such high-level acceptance testing is a blessing!

I really hope I've made myself clear. :)

Any thoughts?

Cheers
Jakob

Caio Fernando Bertoldi Paes de Andrade

Aug 23, 2014, 10:17:52 AM
to clean-code...@googlegroups.com
Jakob,

I think you didn’t understand my point. I’m not talking about the overlap between acceptance and unit tests; I don’t see that as duplication either. I’m going to try to be clearer, so let's assume they are all acceptance tests:

CreatePersonUsecase needs an address to create a person. Its acceptance tests have cases to guarantee that address validation is performed by the use case.

EditPersonUsecase also needs an address to edit the person’s address. The address should be validated in the exact same way as in CreatePerson. Should we duplicate the same address validation test cases from the CreatePersonUsecase in EditPersonUsecase’s acceptance test suite?

That would be duplication in the acceptance tests, not overlap. If we want to change that rule later on, we would have to edit both acceptance test suites, and if I use addresses heavily throughout my system, the number of test suites affected could be huge.

So the point is that maybe we should extract a separate acceptance test suite to hold the criteria about address validation, and sort of hide that knowledge from the use cases' acceptance tests. Use case tests would know only about what’s relevant to them (if the address is valid or not), but not about all the rules involved in the validation.

Still, those tests could be affected if some fundamental change happens in the address validation rules. It’s not really change-proof, because we are not mocking the entities, and this idea of "almost" mocking entities is what feels weird to me.

What do you think?
Caio




Jakob Holderbaum

Aug 23, 2014, 1:01:49 PM
to clean-code...@googlegroups.com
Interesting question! Now I see what you mean.

Let me think. Normally, tests should be used as an indicator for concepts and architecture. Whenever something is awkward to test, there is probably a concept missing.

The different interactors normally accept a request object. In this case there is obviously the concept of an address, shared between different interactors. Why not think about an address-factory-like interactor that produces a by-definition-valid address request object, which can then be passed into different interactors?

By adjusting your system that way, you eliminate the test duplication and create a new and probably useful abstraction.

I tend to build such objects here and there. They are like presenters, but from the opposite direction: request or boundary objects that can be passed into one or several interactors.
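
A sketch of what I mean (names invented):

// One place produces a validated address request object; CreatePerson and
// EditPerson both just accept it.
class AddressRequest {
    final String street;
    final String zipCode;

    private AddressRequest(String street, String zipCode) {
        this.street = street;
        this.zipCode = zipCode;
    }

    // The only way to obtain an AddressRequest is through this factory,
    // so every instance is valid by definition.
    static AddressRequest of(String street, String zipCode) {
        if (street == null || street.isEmpty()
                || zipCode == null || !zipCode.matches("\\d{5}")) {
            throw new IllegalArgumentException("invalid address");
        }
        return new AddressRequest(street, zipCode);
    }
}

The validation rules then get tested once, against the factory, and the use case tests just call AddressRequest.of(...).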

I don't know if this is applicable to other duplication scenarios.

WDYT?

Rodrigo Baron

Aug 25, 2014, 7:46:20 AM
to clean-code...@googlegroups.com
@Caio,

I don't see duplication in your example; I normally write tests in the same context. In this case that would be a PersonValidationTest for the validation tests.

If there is another validation that references Address, such as for Subsidiary, I don't see it as duplication either, because that validation is in another context (SubsidiaryValidationTest). MAYBE the address validation for a person is different from the address validation for a subsidiary.





--
Regards,
Rodrigo Baron.

unclebob

Aug 25, 2014, 11:06:10 AM
to clean-code...@googlegroups.com
If you follow the Java Case Study, you will see Micah and I address this problem. We start with a failing acceptance test, wire up lots of infrastructure, and finally (finally) start writing unit tests against the first use case. But if you think that means we start with use cases first, you'd be wrong, because it's not long before we start writing unit tests against the gateways and the presenters. In fact, it's hard to tell where one begins and another ends in terms of sequence.