TDD vs BDD

Bogdan Bujdea

unread,
Feb 26, 2014, 7:18:32 PM2/26/14
to clean-code...@googlegroups.com
Hi! I'm a beginner in Test Driven Development, and I recently started doing it at work, for a Windows Phone app. It's all working great; however, two months into the project I'm realizing that I'm doing TDD wrong. My tests cover almost everything in the code, but this makes them hard to maintain, and keeping the code coverage up takes a lot of time. So I did some more research on TDD and found a presentation by Ian Cooper called "TDD: Where Did It All Go Wrong?". There I learned that tests should not cover every line of production code; instead they should cover behaviors. I want to know how this is different from BDD. I'm also doing BDD for this project, but those tests don't use the API or interact with the local storage; they use mocks just like the unit tests. If I start doing TDD only for behaviors, then it will be the same thing as my BDD tests, the only difference being that I use SpecFlow for BDD while the unit tests are written with a plain testing framework.

Uncle Bob

unread,
Feb 27, 2014, 1:58:02 PM2/27/14
to clean-code...@googlegroups.com
Bogdan, BDD is a subset of TDD. TDD is the broad category of driving development through tests. BDD is the narrower subcategory of driving development with high-level, behavior-oriented tests.

Ian's talk is a good one. Behavior-oriented tests are generally better than implementation-oriented tests because behaviors are a lot less coupled to the implementation. Tests that are coupled to the implementation tend to be fragile; those that are coupled to behavior tend not to be.

The choice of tool is irrelevant to Ian's point. You can write BDD tests in SpecFlow or in plain code. The reason you'd choose SpecFlow is that the tests are being written (or at least read) by business folks. If the business people aren't using those tests, write them in code; it's faster and easier for you.
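For illustration, here is the same behavior written both ways; the Cart and Item classes and the step wording are invented for the example, not taken from Bogdan's project:

    // Gherkin scenario (SpecFlow feature file):
    //   Scenario: Adding an item increases the cart total
    //     Given an empty cart
    //     When I add an item costing 10
    //     Then the cart total should be 10

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;
    using TechTalk.SpecFlow;

    public class Item
    {
        public decimal Price;
        public Item(decimal price) { Price = price; }
    }

    public class Cart
    {
        private readonly List<Item> _items = new List<Item>();
        public void Add(Item item) { _items.Add(item); }
        public decimal Total { get { return _items.Sum(i => i.Price); } }
    }

    // SpecFlow flavour: step definitions bound to the Gherkin the business can read.
    [Binding]
    public class CartSteps
    {
        private Cart _cart;

        [Given(@"an empty cart")]
        public void GivenAnEmptyCart() { _cart = new Cart(); }

        [When(@"I add an item costing (.*)")]
        public void WhenIAddAnItemCosting(decimal price) { _cart.Add(new Item(price)); }

        [Then(@"the cart total should be (.*)")]
        public void ThenTheCartTotalShouldBe(decimal total) { Assert.AreEqual(total, _cart.Total); }
    }

    // Plain-code flavour: the same behavior as an ordinary unit test.
    [TestFixture]
    public class CartBehaviorTests
    {
        [Test]
        public void Adding_an_item_increases_the_cart_total()
        {
            var cart = new Cart();
            cart.Add(new Item(10m));
            Assert.AreEqual(10m, cart.Total);
        }
    }

Either way the test pins the behavior (the total), not the cart's internal structure.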

Phil Markgraf

unread,
Feb 27, 2014, 4:25:44 PM2/27/14
to clean-code...@googlegroups.com
The question I ask about testing behaviors versus coverage is... what are the lines of code not covered by behavior tests doing, if they aren't providing testable behavior? Lines of code not covered by tests are clues that you either have undocumented behavior (i.e. missing test cases) or dead code.

Ian's presentation does a good job of pushing us to test at the right layers (interface versus internals), and with the right attitude (behavior versus implementation). But I don't think we should use BDD as an excuse to absolve ourselves from striving for maximal test coverage.





Michel Henrich

unread,
Feb 28, 2014, 9:36:53 AM2/28/14
to clean-code...@googlegroups.com
I agree with Uncle Bob and Philip, but just to add my recent experience:
In my latest projects we've been writing most of our tests against the outer interface of the application, with only a few internal tests for the most complex pieces of code (things like tax calculation).
Compared to the old me, who pretty much loved mocks and mocked absolutely EVERYTHING (internals and externals), the current approach has proven significantly better in almost every way.
Tests are simpler - easier to read, easier to write, and they hardly ever require modification when we refactor parts of the application. The production code is free to change its structure, as long as the outer interface is kept. This is what "testing behavior, not implementation" means to me.
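To make that concrete, here is a rough sketch of the two styles; OrderService, ITaxCalculator and the 20% rate are invented for the example, and the mock-heavy version uses Moq:

    using Moq;
    using NUnit.Framework;

    public interface ITaxCalculator { decimal TaxFor(decimal net); }

    public class TwentyPercentTaxCalculator : ITaxCalculator
    {
        public decimal TaxFor(decimal net) { return net * 0.2m; }
    }

    public class Order
    {
        public decimal Net;
        public decimal Total;
        public Order(decimal net) { Net = net; }
    }

    public class OrderService
    {
        private readonly ITaxCalculator _tax;
        public OrderService(ITaxCalculator tax) { _tax = tax; }
        public Order PriceOrder(Order order)
        {
            order.Total = order.Net + _tax.TaxFor(order.Net);
            return order;
        }
    }

    [TestFixture]
    public class OrderPricingTests
    {
        // "Old me" style: mocks the collaborator and verifies the call graph,
        // so the test breaks whenever the internal wiring changes.
        [Test]
        public void Mock_heavy_version()
        {
            var taxCalc = new Mock<ITaxCalculator>();
            taxCalc.Setup(t => t.TaxFor(100m)).Returns(20m);
            var service = new OrderService(taxCalc.Object);

            var priced = service.PriceOrder(new Order(100m));

            Assert.AreEqual(120m, priced.Total);
            taxCalc.Verify(t => t.TaxFor(100m), Times.Once()); // coupled to implementation
        }

        // Outer-interface style: drives the service and checks the result only,
        // leaving the internals free to change.
        [Test]
        public void Behavior_version()
        {
            var service = new OrderService(new TwentyPercentTaxCalculator());

            var priced = service.PriceOrder(new Order(100m));

            Assert.AreEqual(120m, priced.Total);
        }
    }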

As for coverage, as Philip said: whether you test behavior or implementation, you must always strive for 100% coverage. It's likely, however, that a single test file will not be able to cover an entire component. We measure coverage by running the complete test suite; in our current project, LOC coverage is 98% and branch coverage is 94%. My team is working on raising these (low) percentages to 100% right now (and some of that work is actually deleting dead production code!)

James Green

unread,
Feb 28, 2014, 9:41:49 AM2/28/14
to clean-code...@googlegroups.com
Michel,

The LOC coverage - is that restricted to the application core, or does it include infrastructure dependencies too?



Philip Markgraf

unread,
Mar 1, 2014, 3:36:58 PM3/1/14
to clean-code...@googlegroups.com
Just the code you need to work!

In my world, the infrastructure code is its own thing and has its own requirements and acceptance tests. The application test coverage tries to achieve 100% coverage over the application code, while the infrastructure test coverage tries to achieve 100% coverage over the infrastructure code. (Actually, we have a handful of infrastructure elements that each have their own requirements and tests.)

Breaking the system into individually tested subsystems is key to making this whole affair manageable. You aren't going to test the entire system with only the outermost edge's behaviors (unless the system is rather small), as it becomes too complex and very difficult to come up with meaningful test cases. Having the system broken into subsystems gives you components that are small enough to reason about and that also can be developed and deployed independently. Infrastructure is just another component!
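A minimal sketch of what that separation can look like; the IOrderStore port and the stores below are invented names for the example:

    using System.Collections.Generic;

    public class OrderRecord
    {
        public int Id;
        public decimal Total;
    }

    // The port the application core depends on. The application's behavior
    // tests get their coverage against this interface, usually via an
    // in-memory implementation.
    public interface IOrderStore
    {
        void Save(OrderRecord order);
        OrderRecord Load(int id);
    }

    public class InMemoryOrderStore : IOrderStore
    {
        private readonly Dictionary<int, OrderRecord> _orders = new Dictionary<int, OrderRecord>();
        public void Save(OrderRecord order) { _orders[order.Id] = order; }
        public OrderRecord Load(int id) { return _orders[id]; }
    }

    // The real adapter (say, a SqlOrderStore talking to the database) lives in
    // the infrastructure component, with its own requirements, its own
    // acceptance tests, and its own coverage goal.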

Tiago Moraes

unread,
Feb 28, 2014, 11:13:00 AM2/28/14
to clean-code...@googlegroups.com
Interesting read about this subject: "The secret for 100% test coverage: remove code"

Tiago Moraes
Development
Fortes Informática

Phone: (85) 4005.1111
ti...@grupofortes.com.br

www.fortesinformatica.com.br

Dave Schinkel

unread,
Apr 24, 2014, 2:41:02 PM4/24/14
to clean-code...@googlegroups.com
I'm not a guru at TDD yet, but it's true.

I was listening in on a conversation with a good friend who has been doing TDD well for a while now. I won't mention the company, but they are known as a shop that is awesome at TDD, XP and the like; in fact Amazon and other places try to poach people from them. My point is, it's a good shop. Lessons have been learned there many times, they push the envelope, and they're a model that a lot of companies come in to see when they want to learn how an Agile/XP/TDD shop is run.

What he said to some of his colleagues (myself and some of the guys from his team and org all chat on Ventrilo while playing video games), and what he is trying to convey to the rest of the devs, is that yes, you CAN go overboard with test coverage. He means: test units, not every line of code. He has seen shops go overboard writing too many tests for production code, even with TDD. What results is a situation where developers become hesitant to make any kind of refactoring change: they skip the blue (refactor) part of TDD, because with that over-saturation of tests any little refactoring breaks a TON of them, and maintaining the tests becomes a nightmare. There are so many tests at the lowest level of the production code that changes which shouldn't break many tests do, because the tests are so microscopic and so numerous.

However, this makes me wonder: if you are driving code via test-driven development, wouldn't you automatically have a bunch of tests at the lowest level no matter what?

When I write a test, it's for a very small scenario and it ends up being maybe 1-5 lines, if that. Then I write another and another until the feature is done per the story. So how can I NOT have a lot of tests? That's what I find confusing in this conversation; the two positions almost seem to contradict each other. Maybe I don't know TDD well enough yet, or haven't seen enough tests, but I don't see how you could have fewer tests (focused more on behavior) when tests drive everything you do.

Can someone who has done TDD for a long time clear this up?


swkane

unread,
Apr 24, 2014, 2:52:30 PM4/24/14
to clean-code...@googlegroups.com
You might want to read James Coplien's "Why Most Unit Testing is Waste": http://www.rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf. James makes some excellent points. The idea of '100% code coverage' doesn't make sense, because it still falls well short of proving that your code is 100% correct. And what you get with that 100% coverage is a lot of tests that will never fail and that make refactoring much more difficult, providing not just zero value to your application but possibly negative value due to the required upkeep. As an alternative, you can potentially replace much unit testing with assertions that get updated with the code itself, and target your unit tests at the most critical parts of your application.
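As a rough sketch of the "assertions that live with the code" idea (the Ledger class below is invented for the example):

    using System.Diagnostics;

    public class Ledger
    {
        private decimal _balance;

        public decimal Balance { get { return _balance; } }

        public void Post(decimal amount)
        {
            decimal before = _balance;
            _balance += amount;

            // The check travels with the code and runs on every (debug) execution,
            // rather than only when a dedicated unit test happens to exercise it.
            Debug.Assert(_balance == before + amount,
                         "Posting must change the balance by exactly the posted amount");
        }
    }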

I believe James parts ways with Uncle Bob on these points, and I'm new to TDD myself, so I'm not sure which side of the fence I fall on. However, one of the reasons Uncle Bob says we need unit tests is that integration and functional tests are too slow, since they call out to external resources. Could that problem be alleviated by running functional and integration tests in parallel? In theory, I should be able to spin up a bunch of virtual machines and run all my functional/integration tests in parallel.

Steven 


Sebastian Gozin

unread,
Apr 24, 2014, 6:41:06 PM4/24/14
to clean-code...@googlegroups.com
Just off the top of my head, but I've seen a lot of tests where the assertions know a lot about the payload going through the system. More than necessary, anyway.

As you adjust these payloads for functional reasons, the tests tend to break a lot; I guess you could call them brittle. But the basic algorithm hasn't changed. Perhaps I'm talking about the ISP (Interface Segregation Principle) here.

You can see it in various forms: assertions on lots of specific fields, spies which depend on positional arguments, the use of specific data structures when a generic one would suffice.

I've found myself test-driving with obviously wrong primitives instead, in order to force the component not to depend on more than it actually needs. Sometimes it's enough to treat it as a duck without having to know exactly what a duck is.

Something I've picked up from Brian Marick's Robozzle kata.
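Here is a small sketch of the over-specified assertion problem; the Notification payload and its fields are invented for the example:

    using NUnit.Framework;

    public class Notification
    {
        public int OrderId;
        public string Status;
        public string ShippedOn;
        public string Warehouse;
    }

    [TestFixture]
    public class ShippingNotificationTests
    {
        // Stand-in for whatever produces the payload in a real system.
        private Notification Ship()
        {
            return new Notification
            {
                OrderId = 42,
                Status = "SHIPPED",
                ShippedOn = "2014-02-28",
                Warehouse = "warehouse-7"
            };
        }

        // Over-specified: pins every field of the payload, so an unrelated
        // change (a new warehouse code, an extra field in the data structure)
        // breaks the test even though the behavior under test is unchanged.
        [Test]
        public void Overspecified_assertion()
        {
            var n = Ship();
            Assert.AreEqual(42, n.OrderId);
            Assert.AreEqual("SHIPPED", n.Status);
            Assert.AreEqual("2014-02-28", n.ShippedOn);
            Assert.AreEqual("warehouse-7", n.Warehouse);
        }

        // Focused: asserts only the fields this behavior is actually about.
        [Test]
        public void Focused_assertion()
        {
            var n = Ship();
            Assert.AreEqual(42, n.OrderId);
            Assert.AreEqual("SHIPPED", n.Status);
        }
    }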

George Dinwiddie

unread,
Apr 24, 2014, 10:40:17 PM4/24/14
to clean-code...@googlegroups.com
Hi, Dave,

On 4/24/14, 2:41 PM, Dave Schinkel wrote:
> I'm not a guru at TDD yet, but it's true.
>
> I was listening in on a conversation with a good friend who is and has
> been doing TDD well for a while now. I won't mention the company but
> they are known for being a shop that is awesome at TDD, XP and the like
> and in fact Amazon and other places try to steal people from this place.
> My point is it's a good shop. Lessons have been learned there many
> times, and they also push the envelop and are a model for a lot of
> companies to come in and see how they run their Agile/XP/TDD shop.
>
> What he said to some of his collegues (myself as well as some of the
> guys on his team or from his org all chat on Ventrilo when playing video
> games). What my friend is trying to convey to the rest of the devs is
> that yes, you CAN go overboard with test coverage. Meaning he is saying
> to test Units, not every line of code.

Yes, test the behavior you want, and the responses to edge conditions.
Don't worry about the number produced by coverage metrics, but do pay
attention to the shape over time. If it changes, think about why.

> He said he's seen shops go
> overboard with writing too many tests for production code even with TDD.
> And then what results is a situation where developers are hesitant to
> make any kind of refactoring change...they skip the blue part of TDD
> because then since you have an over-saturation of # of tests, then any
> little refactoring breaks a TON of tests to maintaining the tests
> becomes a nightmare because you have so many tests at the lowest
> production code level that changes that shouldn't be breaking so many
> tests do, because they're so microscopic..tests are that small and so many.

I don't know why your changes should break a lot of tests. You might
think about what's going on there. There's definitely something wrong.

I tend toward state-based testing (sometimes called the "Detroit
school"), and the tests are refactored with the code. When I rename
something, it gets renamed in both. When I move something, both the
tests and the code have to refer to it in the new location.

When I used Mock Objects, I tended to write brittle tests that were
based on the implementation rather than the behavior. I quit doing that.

Nat Pryce and Steve Freeman practice interaction-based testing
(sometimes called the "London school"), use lots of mock objects, and
don't seem to get rashes of broken tests. Read their book Growing
Object-Oriented Software, Guided by Tests to learn more about how they work.
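For anyone unfamiliar with the two schools, a rough sketch of the difference (Account and IJournal are invented names, and the interaction-based test uses Moq):

    using Moq;
    using NUnit.Framework;

    public interface IJournal { void Record(string kind, decimal amount); }

    public class Account
    {
        private readonly IJournal _journal;
        public decimal Balance { get; private set; }

        public Account() : this(null) { }
        public Account(IJournal journal) { _journal = journal; }

        public void Deposit(decimal amount)
        {
            Balance += amount;
            if (_journal != null) _journal.Record("deposit", amount);
        }
    }

    [TestFixture]
    public class AccountTests
    {
        // State-based ("Detroit") style: exercise the object, assert on its state.
        [Test]
        public void Deposit_increases_the_balance()
        {
            var account = new Account();
            account.Deposit(50m);
            Assert.AreEqual(50m, account.Balance);
        }

        // Interaction-based ("London") style: assert on the conversation with a
        // collaborator via a mock.
        [Test]
        public void Deposit_is_recorded_in_the_journal()
        {
            var journal = new Mock<IJournal>();
            var account = new Account(journal.Object);
            account.Deposit(50m);
            journal.Verify(j => j.Record("deposit", 50m), Times.Once());
        }
    }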

Microscopic tests are good. Rashes of broken tests when you make a
change indicate a problem with your tests. It's not the size, though.

>
> However, this makes me wonder though. If you are driving code via test
> driven development. Um, wouldn't you automatically have tests and a
> bunch of them at the lowest level no matter what?
>
> When I write a test it's for a very small scenario, and ends up in maybe
> 1-5 lines if that. Then I write another and another till the feature is
> done per the story. Well how can I NOT have a lot of tests. This is
> what is confusing here in this interesting conversation we're all
> having...so it's almost like both contradicts itself. Maybe I don't
> know TDD well enough yet or have seen as many tests yet but I don't see
> how you could skip tests (have less of them and more on behavior) when
> tests drive everything you do.
>
> Can someone who has done TDD for a long time clear this up?

Lots of tests is just fine.

- George

--
----------------------------------------------------------------------
* George Dinwiddie * http://blog.gdinwiddie.com
Software Development http://www.idiacomputing.com
Consultant and Coach http://www.agilemaryland.org
----------------------------------------------------------------------

Rusi Filipov

unread,
May 7, 2014, 3:22:04 PM5/7/14
to clean-code...@googlegroups.com
Ian Cooper makes some good points about the fragility of fine-grained unit tests, but he also neglects their benefits.

Actually, there is a trade-off between the precision and the fragility of the test suite, and it depends on what you value more. I strive to test the error handling in the units I test-drive, so I am willing to use mocks and simulate the exception paths. That is too important to me to neglect, and I am willing to pay the price of fine-grained tests, even if some people say it is a waste and we should test at the component level.

Another source of fragility when using mocks is the misconception that mocks should be as strict as possible by default and expect even irrelevant interactions to be defined. Take EasyMock and Mockito, for example: they have different defaults for mock strictness, and it makes a lot of difference. Mockito has the better defaults, IMO.
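Those are Java frameworks; as a rough .NET analogue, Moq exposes the same trade-off as Loose versus Strict mock behavior (the IMailer interface below is invented for the example):

    using Moq;
    using NUnit.Framework;

    public interface IMailer
    {
        void Send(string to, string body);
        void LogDelivery(string to);
    }

    [TestFixture]
    public class MockStrictnessTests
    {
        [Test]
        public void Loose_mocks_ignore_irrelevant_interactions()
        {
            // Loose is Moq's default (similar in spirit to Mockito): calls the
            // test never set up are simply ignored, so an extra LogDelivery
            // call added later does not break this test.
            var mailer = new Mock<IMailer>(MockBehavior.Loose);
            mailer.Object.LogDelivery("a@example.com");
        }

        [Test]
        public void Strict_mocks_fail_on_anything_not_set_up()
        {
            // Strict (closer to EasyMock's classic expectations): any call the
            // test did not explicitly set up throws, which is exactly the
            // fragility being described.
            var mailer = new Mock<IMailer>(MockBehavior.Strict);
            Assert.Throws<MockException>(() => mailer.Object.LogDelivery("a@example.com"));
        }
    }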

Dave Schinkel

unread,
Jul 19, 2015, 2:16:22 AM7/19/15
to clean-code...@googlegroups.com
How are you segregating your tests per unit/component? If you aren't creating a ton of implementation tests at the lower level, what do your BDD tests look like? Sure, they derive from and are driven by a story and its scenarios, but that seems to imply one layer of BDD tests, for example when building a REST API. What if you want to test other units at their surface area, for instance a repository or a business/domain layer? Are we saying we really don't put tests around those abstractions anymore, and that for a REST API we'd test-drive the SUT through the API, so that the surface area and the 'unit' is the API contract (the service code exposed to the world as your endpoints)?

How are your tests set up? Can you give an example of their scope, how you determined that scope, and what units they are testing?

Dave Schinkel

unread,
Jul 19, 2015, 2:17:40 AM7/19/15
to clean-code...@googlegroups.com
And when I say 'unit', I'm not referring to one class or one method. I view a unit as a piece of code that provides a function and is exposed for other apps or other consumers to reuse. So a 'unit' is an API contract layer, or some other kind of module that provides something for others to consume, with the tests grouped around it.



Dave Schinkel

unread,
Jul 19, 2015, 2:18:23 AM7/19/15
to clean-code...@googlegroups.com
"Breaking the system into individually tested subsystems is key to making this whole affair manageable"

Can you be specific? How have you done that?