Acceptance Testing for Continuous Delivery

Thierry de Pauw

Aug 25, 2017, 3:56:13 AM8/25/17
to Continuous Delivery

A typical enterprise system looks something like this:
System A -> System B -> System C

Resulting in a chain of dependencies.

The usual mode of testing in large enterprise environments is to perform end-to-end integration tests of the whole system. We know this way of testing is problematic: you have very little control over the inputs of System B and System C, and it assumes people have in-depth knowledge of the full system, which is rarely the case.

From the CD book and Dave Farley's presentation Acceptance Testing for Continuous Delivery, I've understood that you should test every sub-system in isolation, where you can better control the inputs. That way you get predictable, deterministic tests.

The real reason people run these crazy integration tests is that they worry about interface changes between the sub-systems, which is a valid concern. But this can be covered by testing the interfaces between the sub-systems using practices like Consumer-Driven Contracts or Assume Verify.
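The idea behind a consumer-driven contract can be sketched in a few lines: the consumer publishes the shape of the response it actually relies on, and the provider's build verifies it still honours that shape. A minimal sketch in plain Python, where `CONSUMER_CONTRACT`, `provider_response`, and the field names are all illustrative, not from any real system or contract-testing tool:

```python
# Minimal consumer-driven contract check: the consumer records the fields
# it depends on; the provider's test suite asserts its responses satisfy them.

# Contract published by the consumer (System A) for the provider (System B).
CONSUMER_CONTRACT = {
    "endpoint": "/orders/{id}",
    "required_fields": {"id": int, "status": str, "total_cents": int},
}

def provider_response(order_id):
    # Stand-in for System B's real handler.
    return {"id": order_id, "status": "SHIPPED", "total_cents": 1499, "extra": "ok"}

def verify_contract(contract, response):
    """Fail if the provider dropped or re-typed a field the consumer needs.
    Extra fields are fine -- the contract only pins what consumers use."""
    for field, expected_type in contract["required_fields"].items():
        assert field in response, f"missing field: {field}"
        assert isinstance(response[field], expected_type), f"wrong type for {field}"

verify_contract(CONSUMER_CONTRACT, provider_response(42))
```

Tools like Pact automate exactly this loop across repositories, but the essential point is that the provider only has to keep the promises its consumers actually depend on.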

My question is now:
Am I then right in saying that the real integration of the sub-systems only happens in production, verified through minimal smoke tests?

Thank you for clarifying this for me.


David Farley

Aug 26, 2017, 5:43:47 PM8/26/17
to Continuous Delivery
Yes, I think you have it right. My preference is to use a combination of fairly thorough unit tests, created via TDD, combined with a bunch of "whole-system" executable specifications for the behaviour of the system, captured as "Acceptance Tests".

For this second set of acceptance tests, I want my whole system to be deployed and tested as though it was in production, but I also want to isolate it by faking any external dependencies. That way I get realistic testing of the system that I am responsible for, and I get enough control to make that testing good quality. End-to-end testing gets in the way of that: it prevents me from really testing interesting cases in my system.
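One way to get that isolation is to deploy the whole system against in-process fakes of its external dependencies; the fake records calls and returns canned results, so the acceptance test fully controls the "external" world. A sketch under invented names (`FakePaymentGateway` and `Checkout` are illustrative examples, not from the book or talk):

```python
# Acceptance-test isolation sketch: the system under test runs against a fake
# in place of a real external dependency, so tests stay deterministic.

class FakePaymentGateway:
    """Stands in for an external payment provider. Records every request and
    returns canned results, so the test controls the external behaviour."""
    def __init__(self, will_succeed=True):
        self.will_succeed = will_succeed
        self.charges = []   # captured for later assertions

    def charge(self, amount_cents):
        self.charges.append(amount_cents)
        return {"ok": self.will_succeed}

class Checkout:
    """Part of the system under test; depends only on the gateway interface."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount_cents):
        result = self.gateway.charge(amount_cents)
        return "CONFIRMED" if result["ok"] else "PAYMENT_FAILED"

# Executable specification: behaviour when the external dependency fails --
# a case that is hard to trigger on demand in a true end-to-end test.
gateway = FakePaymentGateway(will_succeed=False)
assert Checkout(gateway).place_order(1000) == "PAYMENT_FAILED"
assert gateway.charges == [1000]
```

The payoff is in the last three lines: a payment failure is trivial to provoke against a fake, while in an end-to-end environment that scenario is often impossible to stage reliably.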

I back these tests up with contract testing, as you describe: consumer-driven contract testing where I can. That has been enough for me on the projects I have worked on.

The only time that I can imagine that not working well enough is if the contract between your system and the external system is tightly-coupled and is changing too much. In which case maybe the external system isn't really "external" ;-)


Thierry de Pauw

Sep 2, 2017, 4:37:46 PM9/2/17
Thank you Dave for confirming my thought!

That's my preference too: having lots and lots of very fast unit tests grown using TDD, together with acceptance tests exercising the whole system. Any time an acceptance test fails, additional unit tests are added to reproduce the failure, so the problem is detected faster next time.

Regarding the smoke tests: I can imagine less mature organisations would be more comfortable running the smoke tests first in a staging or QA environment and only later running them again in production. More mature organisations will probably run them only in production.
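Such a smoke test of the integrated chain can stay minimal: after deployment, check that each sub-system answers its health check, and nothing more. A sketch, assuming hypothetical health-check URLs and an injected `fetch` function so the same suite can be pointed at staging first and production later:

```python
# Minimal smoke-test sketch: after deployment, verify each sub-system in the
# A -> B -> C chain answers its health check. 'fetch' is injected so the same
# code can run against staging first and production afterwards.

SYSTEMS = {
    "system-a": "https://a.example.com/health",   # illustrative URLs
    "system-b": "https://b.example.com/health",
    "system-c": "https://c.example.com/health",
}

def run_smoke_tests(fetch, systems=SYSTEMS):
    """Return the names of systems whose health check did not return HTTP 200."""
    return [name for name, url in systems.items() if fetch(url) != 200]

# In a real pipeline 'fetch' would wrap an HTTP client; a stub shows the shape.
def stub_fetch(url):
    return 200 if "b.example" not in url else 503

assert run_smoke_tests(stub_fetch) == ["system-b"]
```

Because the checks are cheap and read-only, running them in production after every release carries little risk, which is what makes the "only in production" option viable for mature organisations.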

