Acceptance tests with front end and back end system


Ludwig Magnusson

Jul 11, 2014, 3:49:22 AM
to clean-code...@googlegroups.com
At my company we have developed a single-page web application with a REST back end. We have not yet used automated acceptance tests, but we are considering introducing them. I have heard Uncle Bob say that acceptance tests should test the business rules. I think it is fair to say that the business rules live in the back end and that the front end just presents the result. However, in a single-page application the front end can become quite complex and pretty much a system of its own. So the question is: does the front end also need acceptance tests? (Note that when I say front end, I do not mean testing through the GUI.)

We are discussing how to create our acceptance tests and are considering using Cucumber, with these three alternatives:

1. Only write acceptance tests running against the back end.
2. Write a common set of Gherkin specs and implement the tests for both the front end and the back end.
3. Write separate specs for the front end and the back end.

What we are asking ourselves is this:
* Will alternative 1 be enough to make us trust that a passing suite is good enough for release, even though we don't run acceptance tests on the front end?
* Is alternative 2 viable, or is there too much of a difference between the two systems?
* Will alternative 3 be too cumbersome, given that there is surely some (and probably a lot of) overlap between the specifications of the front end and the back end?

I would be very happy with feedback from someone with experience of a similar situation.

Jakob Holderbaum

Jul 11, 2014, 4:27:16 AM
to clean-code...@googlegroups.com
Hey Ludwig,

I faced the same question in my last project (a rather big single-page
application with a huge backend in Ruby and a frontend in JS,
communicating over an HTTP JSON API).

Our approach was absolute statelessness in the frontend. Every possible
operation was sent to the server, which returned the entire state of the
affected components. It was a big tradeoff, because it generated a lot
of network traffic. But that's what hardware is for: we used more app
servers behind load balancers, and the network was no longer an issue.

With this approach, we could confidently "ignore" the frontend for
acceptance testing. There was only functional unit testing in the JS
frontend, which ensured correct dispatching from JS event to API call
and vice versa.

We had a dozen or so integrative acceptance tests that booted the entire
stack and drove some spikes through the JS frontend into the backend.
Our approach here was more of a smoke test for FE/BE integration.

This worked out fine; we had almost no bugs in the application resulting
from integration problems between these two parts.

Another story, leading to a different way:

I recently finished my master's thesis on TDD and ATDD in embedded
systems development. I developed an interesting approach based on
FitNesse, which I hope to try out soon at a larger scale in a real
project.

The acceptance tests were written in Slim in a project wiki in
FitNesse. Of course, you can substitute Cucumber here and write it all
in the familiar Gherkin syntax. The important part is to gather abstract
requirements in a language understandable to all relevant
participants/stakeholders.

In the next step, I implemented degenerate cases with a specific test
driver. This driver compiled a firmware image and booted an emulator
with it in order to stimulate the actual system over its actual (but
emulated) bus interfaces. So to speak, this was a real full-stack
integration test through the "UI".

This first test driver enforced an outside-in approach: you have to
implement the simplest possible test case by spiking your entire
infrastructure. Early integration keeps your pain level manageable; take
a look at "Growing Object-Oriented Software, Guided by Tests" by
Freeman and Pryce.

After getting this to run, I extracted a software module out of the
firmware. Then I implemented an additional test driver that inspects
this module directly at code level (your direct call to the use
case/interactor, so to speak).

Then, successively, I added more of the abstract acceptance tests to the
code-level driver, eventually reaching the point of a working and
presentable prototype (first iteration nearly done).

From there, I could successively add some of the gathered and already
implemented acceptance tests to the integration test driver as well.
This lets me validate the functionality directly through the actual
interface, for some critical cases or for some very simple and fast ones.

To give a "picture", imagine something like this magnificent piece of
ASCII art:


+-------------------------------+
| Abstract Tests / Requirements |
+-------------------------------+
        / \                / \
         |                  |
+----------------+   +----------------------+
|   Code Level   |   |  Integration Level   |
|   Testdriver   |   |      Testdriver      |
|   (all tests   |   |     (some tests      |
|   implemented) |   |     implemented)     |
+----------------+   +----------------------+


So, to make it short: basically, you write your acceptance tests
agnostic to the implementation, and implement them in a *separate* suite
at code level by calling your actual use cases. This gives you a fast
and solid validation of your application. Then, you can add some of the
test cases to a second *separate* suite of tests which drives them
through the full stack. These are your slow tests: you run them after
merging and before pushing to the main line, in addition to the faster
code-level suite.
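The two-suite idea can be sketched in plain Ruby. Everything below is hypothetical (CodeLevelDriver, the create-team scenario); the point is only that the abstract test stays agnostic to the driver that runs it:

```ruby
# Sketch only: CodeLevelDriver and the create-team scenario are invented
# to illustrate one abstract test behind two interchangeable drivers.
class CodeLevelDriver
  # Talks to the use case directly at code level (here: an in-memory fake).
  def initialize
    @teams = []
  end

  def create_team(org, name)
    @teams << [org, name]
  end

  def team_exists?(org, name)
    @teams.include?([org, name])
  end
end

# The abstract acceptance test does not know which driver it exercises;
# a slow IntegrationDriver booting the full stack could be passed instead.
def check_create_team(driver)
  driver.create_team("Organization X", "Team Y")
  raise "missing team" unless driver.team_exists?("Organization X", "Team Y")
end

check_create_team(CodeLevelDriver.new)
puts "create-team scenario passed on the code-level driver"
```

An integration-level driver would implement the same two methods by going through the real interfaces, which is why only some scenarios need to be wired to it.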

Currently, that is my impression of efficient and solid acceptance testing.

Sorry for the rather long elaboration.

WDYT?

Cheers
Jakob
--
Jakob Holderbaum, B.Sc
Systems Engineer

0176 637 297 71
http://jakob.io
h...@jakob.io
#hldrbm

Ludwig Magnusson

Jul 11, 2014, 7:49:35 AM
to clean-code...@googlegroups.com
Thanks very much for the input, and it wasn't too long =)
I understand the idea of keeping the frontend as simple as possible and thus not needing acceptance testing there; however, I would not like to choose that route just because it would be hard to test otherwise (I am not saying that was your reasoning).

So my question is: is it possible to create truly implementation-agnostic tests when there are conceptual differences between the two systems? For instance, imagine a system where you create organizations, and each organization can also create teams. To specify the create-team feature for a stateless back end, I would have to put all the information in the same action, i.e. the name of the new team and the organization it should belong to. In a front-end environment I would probably first navigate to the "Organization" section and then go to the "Create team" subsection to enter the data, which means that what is one single action on the back end is split into two pieces on the front end. With that in mind, is it still possible to keep the specs agnostic?

Sebastian Gozin

Jul 11, 2014, 9:50:35 AM
to clean-code...@googlegroups.com
Does it really matter that in the frontend you first navigated to the organization and then to the create-team subsection?

The net result, it seems to me, is that this behavior simply prefilled the organisation the team belongs to.
You could alternatively implement a frontend that lets you navigate to a create-team page immediately, where you would then select the organisation from a dropdown.

In either case, nothing changed in the format of the create-team request message. I would argue you don't want to lock down the GUI with acceptance tests, because it's sometimes incredible what unexpected but useful ways UI/UX designers can come up with for assembling and finally sending that request message.

Caio Fernando Bertoldi Paes de Andrade

Jul 11, 2014, 10:46:19 AM
to clean-code...@googlegroups.com
Ludwig,

Since you are considering Cucumber:

Given that I have Organization X
When I create a Team Y
Then I expect the system state to have Team Y under Organization X

Now you implement two different step_definitions files: back_end_step_definitions and full_stack_step_definitions.
The back_end_step_definitions would implement the fixtures by going directly to the use case interactor in your backend and inspecting the presenter spy.
The full_stack_step_definitions would implement the fixtures by navigating the UI, inputting the correct data, clicking submit buttons, and inspecting the resulting HTML.
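A minimal sketch of the back-end flavour in plain Ruby, with the Cucumber DSL wiring omitted so it stands alone; CreateTeamInteractor and PresenterSpy are invented names, not anything from a real codebase:

```ruby
# Invented stand-ins for the real back-end use case and its presenter.
class PresenterSpy
  attr_reader :created_team

  def team_created(org, name)
    @created_team = { org: org, name: name }
  end
end

class CreateTeamInteractor
  def initialize(presenter)
    @presenter = presenter
  end

  # A real interactor would persist the team; the spy only records the call.
  def create(org:, name:)
    @presenter.team_created(org, name)
  end
end

# What the body of a back_end step definition would do for
# "When I create a Team Y" / "Then ... Team Y under Organization X":
presenter = PresenterSpy.new
CreateTeamInteractor.new(presenter).create(org: "Organization X", name: "Team Y")
raise "scenario failed" unless
  presenter.created_team == { org: "Organization X", name: "Team Y" }
puts "back-end step definitions green"
```

The full-stack version of the same steps would instead drive a browser (e.g. via Capybara) and assert on the rendered HTML; the Gherkin text stays unchanged.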

Voilà, abstract requirements, different test implementations. ;-)

Caio

--
The only way to go fast is to go well.

Sebastian Gozin

Jul 11, 2014, 10:55:52 AM
to clean-code...@googlegroups.com
Caio is right, though I would dislike it if those full-stack step definitions were used as an argument for not changing the GUI on the grounds that we'd otherwise have to spend expensive developer time making sure those steps still work.

Caio Fernando Bertoldi Paes de Andrade

Jul 11, 2014, 11:00:07 AM
to clean-code...@googlegroups.com
Agreed.

Choose carefully which and how many tests to implement at the full-stack (system) level.

They sit at the top of the test pyramid, so they should be very few and very valuable.

Caio


Ludwig Magnusson

Jul 11, 2014, 12:11:41 PM
to clean-code...@googlegroups.com
You are right. Note that in my question I said that I do not intend to test through the GUI. However, when I envisioned my front-end tests, I was still thinking in GUI terms. Reading your post, I realize that I don't need to do that, and that if I target the interactor layer in the front end as well, the tests can be very similar.

Ludwig Magnusson

Jul 11, 2014, 12:14:53 PM
to clean-code...@googlegroups.com
Caio, my question is then:
If I use the same definitions for the front and back end, will all the features/scenarios be relevant for both systems? Or will I have some scenarios that just have dummy implementations?

Caio Fernando Bertoldi Paes de Andrade

Jul 11, 2014, 4:16:01 PM
to clean-code...@googlegroups.com
They will be relevant if both systems implement the same business rules. But then you could theoretically throw one of the systems away and use only the other, since they would be identical in logic.

If you have only some of the business rules duplicated on the front end, only their respective tests would be relevant for the front end, and you would have to mock out the backend when running them to guarantee their accuracy.

Caio

