Hey Ludwig,
I faced the same question in my last project (a rather big single-page
application with a huge Ruby backend and a JS frontend, communicating
over an HTTP JSON API).
Our approach was absolute statelessness in the frontend: every possible
operation was sent to the server, and the server returned the entire
state of the affected components. That was a big tradeoff, because it
generated a lot of network traffic. But that's what hardware is for. We
added more app servers behind load balancers, and network was no longer
an issue.
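To make the idea concrete, here is a minimal sketch of such a stateless operation, with a hypothetical todo-list service (all names are illustrative, not from the actual project): every call mutates server-side state and answers with the full state of the affected components, so the frontend just re-renders what it receives.

```ruby
require 'json'

# Hypothetical sketch of a "stateless frontend" operation: the server owns
# all state, and every operation's response carries the complete state of
# the affected components rather than a delta.
class TodoListService
  def initialize
    @items = {}      # id => { "title" => ..., "done" => ... }
    @next_id = 1
  end

  def add_item(title)
    id = @next_id
    @next_id += 1
    @items[id] = { "title" => title, "done" => false }
    full_state
  end

  def toggle_item(id)
    @items[id]["done"] = !@items[id]["done"]
    full_state
  end

  private

  # Serialized exactly as it would travel over the HTTP JSON API; the
  # frontend replaces its rendering with this, keeping no state of its own.
  def full_state
    JSON.generate("items" => @items, "count" => @items.size)
  end
end
```

The price is obvious in the sketch: even toggling one flag ships the whole component state back, which is exactly the network tradeoff described above.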
With this approach, we could confidently "ignore" the frontend in
acceptance testing. The JS frontend had only functional unit tests, which
ensured correct dispatching from JS events to API calls and vice versa.
We had a dozen or so integrative acceptance tests that booted the entire
stack and drove some spikes through the JS frontend into the backend.
That was more of a smoke test for frontend/backend integration. It worked
out fine: we had nearly no bugs in the application resulting from
integration problems between these two parts.
Another story, leading to a different approach:
I recently finished my master's thesis on TDD and ATDD in embedded
systems development. I developed an interesting approach based on
FitNesse, which I hope to try out soon at a larger scale in a real
project.
The acceptance tests were written in SLIM in a project wiki in FitNesse.
Of course, you could substitute Cucumber here and write it all in the
familiar Gherkin syntax. The important part is to gather abstract
requirements in a language understandable to all relevant
participants/stakeholders.
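For illustration, such an abstract requirement in Gherkin might read like this (a made-up feature, not one from the thesis) -- note that it says nothing about how the system will be driven:

```gherkin
Feature: Overtemperature shutdown
  As a plant operator
  I want the controller to cut power above a temperature limit
  So that the hardware is protected

  Scenario: Sensor exceeds the configured limit
    Given the shutdown limit is configured to 80 degrees
    When the temperature sensor reports 85 degrees
    Then the power output is switched off
    And an overtemperature fault is recorded
```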
In the next step, I implemented degenerate cases with a specific test
driver. This driver compiled a firmware image and booted an emulator with
it, to stimulate the actual system over its actual (but emulated) bus
interfaces. So to speak, this was a real full-stack integration test
through the "UI".
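The skeleton of such a driver could look like the sketch below. The toolchain and emulator invocations (arm-none-eabi-gcc, qemu-system-arm, the machine name) are placeholder assumptions for a Cortex-M target, not the thesis setup; a real driver would execute these commands and then talk to the emulated bus.

```ruby
# Hypothetical integration-level test driver: cross-compile a firmware
# image, then boot it in an emulator so tests can stimulate the system
# over its emulated bus interfaces. Commands are built as strings here;
# a real driver would run them with system() and check the exit status.
class FirmwareTestDriver
  def initialize(sources, image: "firmware.elf")
    @sources = sources
    @image = image
  end

  # Cross-compile command (placeholder toolchain and CPU flags).
  def compile_command
    ["arm-none-eabi-gcc", "-mcpu=cortex-m3", "-o", @image, *@sources].join(" ")
  end

  # Emulator invocation exposing the serial interface on stdio
  # (placeholder machine name).
  def boot_command
    ["qemu-system-arm", "-machine", "lm3s6965evb",
     "-nographic", "-kernel", @image].join(" ")
  end
end
```

In the FitNesse setup, the SLIM fixtures would delegate to a driver like this, so the wiki tables never mention compilers or emulators.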
This first test driver enforced an outside-in approach: you have to
implement the simplest possible test case by spiking through your entire
infrastructure. Early integration keeps your pain level manageable; take
a look at "Growing Object-Oriented Software, Guided by Tests" by Freeman
and Pryce.
After getting this to run, I extracted a software module out of the
firmware. Then I implemented an additional test driver that inspects this
module directly at the code level (your direct call to the use
case/interactor, so to speak).
Then, successively, I added more of the abstract acceptance tests to the
code-level driver, eventually reaching the point of a working and
presentable prototype (first iteration nearly done).
From there, I can successively add some of the gathered and already
implemented acceptance tests to the integration test driver as well.
This way, I can validate the functionality on some critical cases, or
some very simple and fast ones, directly through the actual interface.
To give you a "picture", imagine something like this magnificent piece of
ASCII art:
+-------------------------------+
| Abstract Tests / Requirements |
+-------------------------------+
/ \ / \
| |
+----------------+ +----------------------+
| Code Level | | Integration Level |
| Testdriver | | Testdriver |
| (all tests | | (some tests |
| implemented) | | implemented) |
+----------------+ +----------------------+
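In code, the picture could map to something like this sketch (all names invented for illustration): the acceptance check is written once against an abstract driver interface, and only the drivers differ.

```ruby
# The abstract acceptance test: it knows nothing about how the system
# under test is driven, only about the driver interface.
def check_addition(driver)
  driver.enter(2)
  driver.enter(3)
  driver.add
  driver.result == 5
end

# The use case, extracted from the firmware as a plain software module.
class Calculator
  def initialize; @stack = []; end
  def push(n); @stack << n; end
  def add; @stack << @stack.pop + @stack.pop; end
  def top; @stack.last; end
end

# Code-level test driver: calls the use case/interactor directly.
# Fast, so ALL acceptance tests are implemented against it.
class CodeLevelDriver
  def initialize; @calc = Calculator.new; end
  def enter(n); @calc.push(n); end
  def add; @calc.add; end
  def result; @calc.top; end
end

# An IntegrationLevelDriver would implement the same interface but drive
# the real firmware through the emulated bus; only SOME tests run there.
```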
So, to make it short: basically, you write your acceptance tests
agnostic to the implementation, and implement them in a *separate* suite
at the code level by calling your actual use cases. This gives you fast
and solid validation of your application. Then you can add some of the
test cases to a second *separate* suite that drives them through the
full stack. These are your slow tests; you run them after merging and
before pushing to the main line, in addition to the faster code-level suite.
Currently, that is my impression of efficient and solid acceptance testing.
Sorry for the rather long elaboration.
WDYT?
Cheers
Jakob
--
Jakob Holderbaum, B.Sc
Systems Engineer
0176 637 297 71
http://jakob.io
h...@jakob.io
#hldrbm