
TDD considered harmful


Mr Flibble

Jan 30, 2016, 6:11:21 PM
If you eschew private methods and instead make everything public then it
is impossible to maintain a class invariant except WITHIN a SINGLE
public method which has the effect that some of your public methods MUST
be unnecessarily large/complex.

If you don't think about or don't understand class invariants (which I
suspect is the case for most TDD adherents) then your existing
non-trivial designs really will be god-awful.

TDD is the enemy of encapsulation. TDD really is a bad idea.

/Flibble

P.S. Sausages.

Paavo Helde

Jan 30, 2016, 7:14:29 PM
On 31.01.2016 1:11, Mr Flibble wrote:
> If you eschew private methods and instead make everything public then it
> is impossible to maintain a class invariant except WITHIN a SINGLE
> public method which has the effect that some of your public methods MUST
> be unnecessarily large/complex.
>
> If you don't think about or don't understand class invariants (which I
> suspect is the case for most TDD adherents) then your existing
> non-trivial designs really will be god-awful.

You have found another straw man to attack. Why should TDD prohibit
private methods or attempt to test them? The private methods are
declared private because they are not guaranteed to maintain class
invariant; declaring them private avoids other code calling them
accidentally (including any test code).
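Paavo's point can be illustrated with a minimal sketch (the class and its invariant are hypothetical, invented for this post): private helpers may leave the object inconsistent mid-operation, and only the public method guarantees the invariant on exit.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical example: the class invariant is "data_ is always sorted".
class SortedBag {
public:
    void insert(int v) {
        append_unsorted(v);   // invariant temporarily broken here...
        restore_order();      // ...and restored before the method returns
        assert(is_sorted());  // the invariant holds at every public exit
    }
    bool is_sorted() const {
        for (std::size_t i = 1; i < data_.size(); ++i)
            if (data_[i - 1] > data_[i]) return false;
        return true;
    }
private:
    // Private helpers may leave the object in an inconsistent state,
    // which is exactly why outside code (tests included) must not call them.
    void append_unsorted(int v) { data_.push_back(v); }
    void restore_order() {      // bubble the new last element into place
        for (std::size_t i = data_.size(); i > 1; --i)
            if (data_[i - 2] > data_[i - 1])
                std::swap(data_[i - 2], data_[i - 1]);
    }
    std::vector<int> data_;
};
```

Calling `append_unsorted` from a test would observe (and depend on) the broken intermediate state; testing `insert` through the public interface never can.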

> TDD is the enemy of encapsulation. TDD really is a bad idea.

Non sequitur. Maybe you should stick to sausages?



Alf P. Steinbach

Jan 30, 2016, 7:58:35 PM
On 1/31/2016 1:14 AM, Paavo Helde wrote:
>
> Why should TDD prohibit
> private methods or attempt to test them? The private methods are
> declared private because they are not guaranteed to maintain class
> invariant; declaring them private avoids other code calling them
> accidentally (including any test code).

Why should TDD not test private methods?

I understand from the “avoids … test code [calling them]” that you're
thinking of a case where somehow the private methods are inaccessible to
test code, but why should they be?

I'm just asking. I've not been into TDD very much. If it's really the
case that it's not applicable to private parts, or e.g. that there's no
idea of hierarchy and nesting and levels, then it seems that it's much
in need of modernization of tooling and/or methodology, like some
COBOL-like brute animal from the distant past?


Cheers,

- Alf

Öö Tiib

Jan 30, 2016, 9:06:44 PM
On Sunday, 31 January 2016 02:58:35 UTC+2, Alf P. Steinbach wrote:
> On 1/31/2016 1:14 AM, Paavo Helde wrote:
> >
> > Why should TDD prohibit
> > private methods or attempt to test them? The private methods are
> > declared private because they are not guaranteed to maintain class
> > invariant; declaring them private avoids other code calling them
> > accidentally (including any test code).
>
> Why should TDD not test private methods?
>
> I understand from the "avoids ... test code [calling them]" that you're
> thinking of a case where somehow the private methods are inaccessible to
> test code, but why should they be?

It is generally more difficult if external software (including tests)
accesses the internal details and makes assumptions about them. So testing
private methods may make improving functionality or performance difficult,
freeze the design, and add unneeded defensive bloat to the class implementation.

>
> I'm just asking. I've not been into TDD very much. If it's really the
> case that it's not applicable to private parts, or e.g. that there's no
> idea of hierarchy and nesting and levels, then it seems that it's much
> in need of modernization of tooling and/or methodology, like some
> COBOL-like brute animal from the distant past?

The code of unit tests is typically longer than the code of the class
itself, sometimes an order of magnitude longer. Now we maintain the class
and tests fail. Maybe those are false positives, or maybe they are not?
Maybe a test assumed something unneeded? Maybe the assumption was needed,
but the writer of the test was just too lazy to reach the same situation
from the public interface? If the tests do not make maintenance cheaper,
then TDD is applied in a harmful manner.

Mr Flibble

Jan 30, 2016, 10:42:12 PM
It isn't a non sequitur at all: do you understand encapsulation? TDD
wants lots of small (atomic) public methods to test, but every
public method you add actually REDUCES the encapsulation of a class
whilst adding a private method does not. Sausages.

/Flibble

Öö Tiib

Jan 31, 2016, 12:03:28 AM
That is not true at all. Typically TDD just tests that the public interface
of a class does what it claims in its interface specification.
So someone using TDD wants to reduce the public interface to the minimum
needed, not to make private members public.

Ian Collins

Jan 31, 2016, 3:20:28 AM
Mr Flibble wrote:
> If you eschew private methods and instead make everything public then it
> is impossible to maintain a class invariant except WITHIN a SINGLE
> public method which has the effect that some of your public methods MUST
> be unnecessarily large/complex.

TDD and using private methods are orthogonal.

Does your local butcher having a secret recipe for his sausages prevent
you from taste testing them?

--
Ian Collins

Paavo Helde

Jan 31, 2016, 4:54:52 AM
On 31.01.2016 2:58, Alf P. Steinbach wrote:
> On 1/31/2016 1:14 AM, Paavo Helde wrote:
>>
>> Why should TDD prohibit
>> private methods or attempt to test them? The private methods are
>> declared private because they are not guaranteed to maintain class
>> invariant; declaring them private avoids other code calling them
>> accidentally (including any test code).
>
> Why should TDD not test private methods?
>
> I understand from the “avoids … test code [calling them]” that you're
> thinking of a case where somehow the private methods are inaccessible to
> test code, but why should they be?

TDD means test-driven development. This means that when adding any
feature or fixing a bug one first writes a test which *uses* the needed
part of the software and first fails. Then code is added or modified so
that the test passes. The resulting code is refactored to become simpler
and more maintainable, while still passing the test. Finally the test is
added to the automatic test suite (if possible and feasible).
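The cycle just described can be sketched in C++, using a plain `assert` as a stand-in for a test framework; `slugify` and its behaviour are invented purely for illustration.

```cpp
#include <cassert>
#include <string>

// Step 1 ("red"): the test is written first, against a feature that does
// not exist yet -- here a hypothetical slugify() for URL fragments.
std::string slugify(const std::string& s);

void test_slugify_replaces_spaces() {
    assert(slugify("hello world") == "hello-world");
}

// Step 2 ("green"): the minimum code that makes the test pass.
std::string slugify(const std::string& s) {
    std::string out = s;
    for (char& c : out)
        if (c == ' ') c = '-';
    return out;
}

// Step 3 ("refactor"): clean the code up while the test keeps passing,
// then add test_slugify_replaces_spaces() to the automated suite.
```

Each new requirement (trimming, lowercasing, and so on) would repeat the cycle: a new failing assertion first, then the code change. Note that only the public `slugify` is ever exercised.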

Note that in this description no methods or classes are mentioned, this
is all about *usage*. And usage happens only via public methods. What
happens inside the class is of no concern in TDD, as long as the tests
succeed.

Note that the concept of TDD is a bit orthogonal to unit testing. If the
TDD-tested feature is small enough, the TDD test may be added to the
unit test suite. If it is too large, it might be added to the
integration test suite. Or if it is used for development only (the last
'D' in TDD), then it is not added anywhere.

Private methods are more a topic for code reviews, unit testing and
mocked interfaces. There are probably projects where private methods are
considered units and tested accordingly. I have never done such things,
so have no experience, but I believe this could easily become an
overkill and counter-productive. For example, the refactoring step
present in the TDD cycle would become much harder.

Cheers
Paavo



4ndre4

Jan 31, 2016, 1:45:48 PM
On 30/01/2016 23:11, Mr Flibble wrote:

> If you eschew private methods and instead make everything public

That's not what TDD is.

--
4ndre4
"The use of COBOL cripples the mind; its teaching should, therefore, be
regarded as a criminal offense." (E. Dijkstra)

4ndre4

Jan 31, 2016, 1:55:40 PM
On 31/01/2016 03:42, Mr Flibble wrote:

[...]
> TDD
> wants lots of testable small (atomic) public methods to test but every
> public method you add actually REDUCES the encapsulation of a class

No, it doesn't. Sorry, but you haven't understood what TDD is. The fact
that you can test each individual electronic component available on the
market in isolation does not mean that you are exposing any details
about the interaction between those components on any electronic board
you can build with them. Remember that there are languages (such as
Python, JavaScript, etc.) where there is no concept of a "private
method". Private methods are just a convention.

Mr Flibble

Jan 31, 2016, 4:20:19 PM
On 31/01/2016 18:45, 4ndre4 wrote:
> On 30/01/2016 23:11, Mr Flibble wrote:
>
>> If you eschew private methods and instead make everything public
>
> That's not what TDD is.

Sure it is.

/Flibble

Mr Flibble

Jan 31, 2016, 4:23:37 PM
On 31/01/2016 18:55, 4ndre4 wrote:
> On 31/01/2016 03:42, Mr Flibble wrote:
>
> [...]
>> TDD
>> wants lots of testable small (atomic) public methods to test but every
>> public method you add actually REDUCES the encapsulation of a class
>
> No, it doesn't. Sorry, but you haven't understood what TDD is. The fact
> that you can test each individual electronic component available on the
> market in isolation does not mean that you are exposing any details
> about the interaction between those components on any electronic board
> you can build with them. Remember that there are languages (such as
> Python, JavaScript, etc.) where there is no concept of a "private
> method". Private methods are just a convention.

You obviously don't understand class invariants. Any language that
doesn't offer private methods doesn't offer a way to easily enforce a
class invariant: no public method can break a class invariant by
definition. Such languages are at best toy and at worst harmful (just
like TDD).

Python and JavaScript are both toy.

/Flibble


Ian Collins

Jan 31, 2016, 4:51:14 PM
Mr Flibble wrote:
>
> Python and JavaScript are both toy.

Yeah right.

--
Ian Collins

Ian Collins

Jan 31, 2016, 5:03:28 PM
Er no, TDD doesn't want anything of the sort. Being a process, TDD
doesn't actually "want" anything at all except for tested, working code...

If you understood the process, you would know that there isn't any
conflict between TDD and encapsulation.

--
Ian Collins

Mr Flibble

Jan 31, 2016, 6:06:46 PM
I do understand the process: private methods are untestable so should be
avoided in TDD land, ergo encapsulation is reduced when adding more
testable public methods. TDD is the enemy of both encapsulation in
general and C++ in particular.

/Flibble

Paavo Helde

Jan 31, 2016, 6:17:16 PM
Private methods are untestable only if they are never called by the
public methods. TDD tests the features and does not care how the class
provides these features.

Mr Flibble

Jan 31, 2016, 6:17:58 PM
Meh.


Mr Flibble

Jan 31, 2016, 6:21:55 PM
TDD wouldn't be so bad if it wasn't ALSO an acronym for Test Driven DESIGN,
which is of course total and absolute bollocks. Intelligence, not failing
unit tests, should drive the design of non-trivial, non-toy systems.

Jerry Stuckle

Jan 31, 2016, 6:29:09 PM
On 1/31/2016 1:55 PM, 4ndre4 wrote:
> On 31/01/2016 03:42, Mr Flibble wrote:
>
> [...]
>> TDD
>> wants lots of testable small (atomic) public methods to test but every
>> public method you add actually REDUCES the encapsulation of a class
>
> No, it doesn't. Sorry, but you haven't understood what TDD is. The fact
> that you can test each individual electronic component available on the
> market in isolation does not mean that you are exposing any details
> about the interaction between those components on any electronic board
> you can build with them. Remember that there are languages (such as
> Python, JavaScript, etc.) where there is no concept of a "private
> method". Private methods are just a convention.
>

If the circuit works properly, you don't have to test each individual
electronic component.

--
==================
Remove the "x" from my email address
Jerry Stuckle
jstu...@attglobal.net
==================

Jerry Stuckle

Jan 31, 2016, 6:39:03 PM
Yes and no. Private methods are untestable directly, but they can be
tested through the public methods.

If the public methods don't call them, then the private methods are
unusable anyway. And even whether a private method exists or not is
immaterial; one could easily add or delete private methods and change
the public methods appropriately without changing the operation of the
class.

Mr Flibble

Jan 31, 2016, 6:45:44 PM
You obviously don't understand TDD then and its mantra of designing
upward from failing tests. Private methods are totally at odds with TDD.

/Flibble


Ian Collins

Jan 31, 2016, 6:51:45 PM
Mr Flibble wrote:
>
> You obviously don't understand TDD then and its mantra of designing
> upward from failing tests. Private methods are totally at odds with TDD.

Repeatedly burning your sausages doesn't make them good sausages.

--
Ian Collins

Mr Flibble

Jan 31, 2016, 7:56:45 PM
Except if you use TDD you don't end up with sausages; you end up with
meatballs. TDD is anathema to best (SOLID) practices for the
design of non-trivial software systems.

/Flibble

Ian Collins

Jan 31, 2016, 8:13:50 PM
Restating the same empty nonsense won't make it any more correct.

There aren't any conflicts between TDD and SOLID practices. SOLID
practices help make TDD better. If I recall correctly, SOLID was
originally coined by Robert Martin (Uncle Bob) (possibly here:
http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod). You may be
surprised to know that he is also a strong advocate of TDD:
https://blog.8thlight.com/uncle-bob/2014/05/02/ProfessionalismAndTDD.html

Go figure.

--
Ian Collins

Jerry Stuckle

Jan 31, 2016, 9:48:14 PM
Oh, I understand it. And private methods have absolutely nothing to do
with TDD.

TDD deals with the external interface (public methods) of the class.
The internals (private methods) of the class are immaterial. In fact,
the private methods can change, be added or deleted without changing the
public interface - which is what TDD cares about.

How something is done - the implementation (which includes private
methods) is immaterial. What is material is the results.

Robert Wessel

Feb 1, 2016, 2:01:35 AM
While TDD is not our prime methodology, we commonly* have** "friend
class TestAbc;" in our definitions of class Abc (a bit of additional
fiddle - namely a forward declaration of class TestAbc - needed if
there's a namespace involved). At least for those cases when we want
tests for private functions. Ugly? A bit, sure. And yes, it
requires the maintenance of enough discipline so that people don't
write TestAbc classes just to poke around where they oughtn't.



*FSVO "commonly"

**For historical reasons the prefix string is actually "tharness"
rather than "Test".
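A minimal sketch of the pattern Robert describes, using the hypothetical names `Abc`/`TestAbc` from the post (the real prefix differs, as he notes), with made-up members for illustration:

```cpp
// All names here are hypothetical; the forward declaration is the extra
// fiddle needed because the class lives inside a namespace.
namespace lib {

class TestAbc;  // forward declaration

class Abc {
    friend class TestAbc;  // grants the test harness access to privates
public:
    int value() const { return scale(raw_); }
private:
    int scale(int v) const { return v * 2; }  // private, yet reachable from the friend
    int raw_ = 21;
};

// The dedicated test class can exercise private members directly.
class TestAbc {
public:
    static bool test_scale() {
        Abc a;
        return a.scale(5) == 10;
    }
};

}  // namespace lib
```

The discipline Robert mentions amounts to: only the one dedicated `TestAbc` is ever declared a friend, and nobody writes another such class just to poke at the privates.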

leigh.v....@googlemail.com

Feb 1, 2016, 7:23:53 AM
Designing software through trial and error by fixing failing "tests" rather than by applying some intelligence and thinking about abstractions, interfaces, class hierarchies and object hierarchies is, quite frankly, both absurd and possibly even harmful and I am amazed that you cannot see this.

TDD is the totally wrong approach to software development. Design first, implement second and unit test third.

Öö Tiib

Feb 1, 2016, 9:06:26 AM
On Monday, 1 February 2016 14:23:53 UTC+2, leigh.v....@googlemail.com wrote:
> Designing software through trial and error by fixing failing "tests" rather than
> by applying some intelligence and thinking about abstractions, interfaces, class
> hierarchies and object hierarchies is, quite frankly, both absurd and possibly
> even harmful and I am amazed that you cannot see this.

You talk about software architecture. We write tests to test functionality,
not architecture. It is impossible to write compiling (but possibly still
failing) tests when the architecture elements (IOW interfaces, abstractions
and hierarchies) aren't already in place. When it is inconvenient to write
tests, that indeed indicates (early) that the architecture might be overly
complicated or not robust enough, and may be worth correcting.

>
> TDD is the totally wrong approach to software development. Design first,
> implement second and unit test third.

That is orthogonal. For example, suppose some software will contain about 12
modules with 8 classes on average per module (so 96 classes), and making one
of those classes takes about 3.5 days. Let's say a programmer took a class and
implemented it and its unit tests in 2 days. Who cares in what order
he did that? Are we some sort of micro-managers, or engineers?

Cholo Lennon

Feb 1, 2016, 9:22:47 AM
I agree with you. The first time I used TDD (don't confuse it with writing
a unit test, a common mistake) was in a TDD course that my company sent
me to. It was really weird. The teacher presented the problem to solve:
convert decimal numbers to Roman numerals. Well, my group designed and
implemented the solution in a few minutes. Wrong, the teacher said! After
that, he explained how we should think and solve the problem using TDD:
start with a simple test that fails. After that add the solution for 1
and test. Add the solution for 2 and test, etc. Try to find a pattern.
The iteration process was long because the pattern kept changing. It was
your first time, the teacher said; future problems will be solved more
quickly.
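For concreteness, here is a sketch of where that iteration typically ends up; this greedy-table implementation is one possible endpoint of the exercise, not the course's own code.

```cpp
#include <string>
#include <utility>
#include <vector>

// One possible endpoint of the iteration: tests for 1, 2, 4, 9, ... are
// written first, and this greedy table is the pattern that finally emerges.
std::string to_roman(int n) {
    static const std::vector<std::pair<int, const char*>> table = {
        {1000, "M"}, {900, "CM"}, {500, "D"}, {400, "CD"},
        {100, "C"},  {90, "XC"},  {50, "L"},  {40, "XL"},
        {10, "X"},   {9, "IX"},   {5, "V"},   {4, "IV"}, {1, "I"}};
    std::string out;
    for (const auto& entry : table)
        while (n >= entry.first) {   // take the largest numeral that fits
            out += entry.second;
            n -= entry.first;
        }
    return out;
}
```

Whether the table is "discovered" test by test or designed up front in a few minutes is exactly the point of contention in this thread.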

I don't know; after several years of trying to apply the technique in
C++/Java/Python I'm still unconvinced. The teacher explained that TDD
avoids "analysis paralysis" (getting stuck in a problem due to overthinking
it). You have to think about the problem on the fly: try, fail, fix... but
IMO most of the time thinking in advance solves the problem more
quickly and with a better design.

Regards


--
Cholo Lennon
Bs.As.
ARG

Öö Tiib

Feb 1, 2016, 10:14:55 AM
On Sunday, 31 January 2016 23:23:37 UTC+2, Mr Flibble wrote:
>
> Python and JavaScript are both toy.

Nah ... asm.js is rather impressive already ... even in its current
alpha stage.

Paavo Helde

Feb 1, 2016, 10:25:14 AM
On 1.02.2016 16:22, Cholo Lennon wrote:
> On 02/01/2016 09:23 AM, leigh.v....@googlemail.com wrote:
>> TDD is the totally wrong approach to software development. Design
> > first, implement second and unit test third.
>>
>
> I agree with you. The first time I used TDD (don't confuse with writing
> a unit test, a common mistake) was in a TDD course that my company sent
> me. It was really weird. The teacher presented the problem to solve:
> convert decimal numbers to roman numbers. Well, my group designed and
> implemented the solution in a few minutes, wrong the teacher said! after
> that, he explained how we should think and solve the problem using TDD:
> Start with a simple test that fail. After that add the solution for 1
> and test. Add the solution for 2 and test, etc. Try to find a pattern.
> The iteration process was long because the pattern was changing. It was
> your first time, the teacher said; future problems will be solved more
> quickly.

Seems like that was a wrong task for TDD. If the task is simple and you
can foresee all the nuances beforehand, then there is no need to write
any tests first. Obviously this depends both on the task and on the person.

TDD works best when you need to develop a complex feature or to add a
new feature to an existing complex system where you are not able to see
all the hairy details in one glance. Working in gradual steps (first get
the simplest test to pass, then the next, then the next) provides an
approach to tackle such tasks.

Cheers
Paavo

Paavo Helde

Feb 1, 2016, 10:30:27 AM
On 1.02.2016 14:23, leigh.v....@googlemail.com wrote:

> Design first, implement second and unit test third.

I fully agree. However, this has nothing to do with the presence or
absence of TDD. TDD is used mostly for the implementation step and as a
bonus it will often produce initial material for unit tests as well.

Jerry Stuckle

Feb 1, 2016, 11:06:31 AM
On 2/1/2016 7:23 AM, leigh.v....@googlemail.com wrote:
> Designing software through trial and error by fixing failing "tests" rather than by applying some intelligence and thinking about abstractions, interfaces, class hierarchies and object hierarchies is, quite frankly, both absurd and possibly even harmful and I am amazed that you cannot see this.
>
> TDD is the totally wrong approach to software development. Design first, implement second and unit test third.
>

+1

Jerry Stuckle

Feb 1, 2016, 11:11:01 AM
Yup, some temporary agencies love this approach. They can milk the
contract for more money.

Mr Flibble

Feb 1, 2016, 11:58:46 AM
Did you even bother reading his second paragraph (snipped)? You've been
drinking too much TDD koolaid mate.

/Flibble

Mr Flibble

Feb 1, 2016, 12:08:15 PM
Sounds horrid.

Private methods are NOT supposed to be directly testable as they can
break class invariants: any unit test should only affect a class in a
way that doesn't break the class invariant which means only testing
public methods but this goes back to my original point: TDD only
considers the creation of a public method to pass/fail a test.

TDD is at odds with proper thinking of how to design and implement with
a high degree of quality which should involve creating directly
untestable private methods and KEEPING PUBLIC METHODS TO A MINIMUM to
increase encapsulation.

I repeat my original point: TDD is NOT about keeping public methods to a
minimum; it is about creating lots of individually testable public
methods hence it is the enemy of encapsulation and good design.

/Flibble

Wouter van Ooijen

Feb 1, 2016, 12:11:48 PM
Op 01-Feb-16 om 1:23 PM schreef leigh.v....@googlemail.com:
> Designing software through trial and error by fixing failing "tests" rather than by applying some intelligence and thinking about abstractions, interfaces, class hierarchies and object hierarchies is, quite frankly, both absurd and possibly even harmful and I am amazed that you cannot see this.
>
> TDD is the totally wrong approach to software development. Design first, implement second and unit test third.

I see some sense in Test-Driven-Bug-Fixing, but I agree that
Test-Driven-Design is madness.

Maybe people actually use Test-Driven-Coding, which might work, but IME
the design is the real work; coding is trivial, so I don't care much for
any formalised way of coding.

Wouter van Ooijen

Ian Collins

Feb 1, 2016, 2:11:47 PM
I see you have chosen to lose the context so that you can conveniently
ignore the fact that the source of your beloved SOLID practices is also
a strong advocate of TDD!

There's an admission of defeat if ever I saw one...

TDD clearly isn't anathema to SOLID practices.

--
Ian Collins

Gareth Owen

Feb 1, 2016, 4:39:11 PM
Ian Collins <ian-...@hotmail.com> writes:

> Mr Flibble wrote:
>> If you eschew private methods and instead make everything public then it
>> is impossible to maintain a class invariant except WITHIN a SINGLE
>> public method which has the effect that some of your public methods MUST
>> be unnecessarily large/complex.
>
> TDD and using private methods are orthogonal.

QFT. You might as well suggest that "Unit testing means you can't use
static linkage". Private members are an implementation detail. You
don't individually unit-test implementation details.

Robert Wessel

Feb 2, 2016, 1:04:22 AM
Waterfall is back in fashion? Why not, I saw someone selling tie-dye
kits at the mall the other day.

Öö Tiib

Feb 2, 2016, 2:40:39 AM
"Agile" basically means "using two-week waterfalls".

Zaphod Beeblebrox

Feb 2, 2016, 5:58:13 AM
On Sunday, 31 January 2016 21:23:37 UTC, Mr Flibble wrote:

[...]
> You obviously don't understand class invariants.

...I do, and you are obviously a troll :)

>Any language that
> doesn't offer private methods doesn't offer a way to easily enforce a
> class invariant

Wrong. The keyword "private" can be changed to "public" at any time, by anyone, and a private method can be made publicly accessible with a snap of the fingers. The various access specifiers are not a way to "lock" a class; they define a contract. A private method tells the client programmer that they shouldn't be using that method, and the compiler enforces that check. But anyone who wants to change the contract can do that, so it's no different from a syntactic convention. When you write a method with an underscore in Python/JavaScript, you are telling the client programmer that they are not supposed to call that method directly. It's a way to define a contract. Same thing as access specifiers.

> Python and JavaScript are both toy.

Yeah, and you are a troll :)

Zaphod Beeblebrox

Feb 2, 2016, 5:59:04 AM
On Sunday, 31 January 2016 23:29:09 UTC, Jerry Stuckle wrote:

[...]
> If the circuit works properly, you don't have to test each individual
> electronic component.

If the circuit works properly, it's just because each electronic component has been individually tested.


Zaphod Beeblebrox

Feb 2, 2016, 5:59:52 AM
On Sunday, 31 January 2016 21:20:19 UTC, Mr Flibble wrote:

[...]
> >> If you eschew private methods and instead make everything public
> >
> > That's not what TDD is.
>
> Sure it is.

No, it's not.

Zaphod Beeblebrox

Feb 2, 2016, 6:04:06 AM
On Monday, 1 February 2016 00:56:45 UTC, Mr Flibble wrote:

[...]
> TDD is anathema to best (SOLID) practices related to the
> design of non-trivial software systems.

Unfortunately, Uncle Bob, one of the first experts naming the SOLID principles, disagrees with you: https://skillsmatter.com/courses/418-uncle-bobs-test-driven-development-and-refactoring


leigh.v....@googlemail.com

Feb 2, 2016, 7:12:23 AM
One cannot simply change a private method to a public one at the snap of one's fingers, because private methods are allowed to break class invariants.

Zaphod Beeblebrox

Feb 2, 2016, 7:35:24 AM
On Tuesday, 2 February 2016 12:12:23 UTC, leigh.v....@googlemail.com wrote:

[...]
> One cannot simply change a private method to a public one at snap of fingers because private methods are allowed to break class invariants.

You did not get what I said. I said that it is technically possible to do that, not that it is a sensible or even necessary thing to do. The point is that the keyword "private" does not grant any particular safety, compared to any other syntactical convention. Defining a "private" method in C++ is not different from defining a public method in JavaScript/Python with a tag on it reading "please, mr programmer, don't call it". The access specifiers in C++ and similar languages only define a contract for the external user, they are not meant to "lock" the class from external intrusion. Anyone is technically able to change a private method to public, having the source code. You just don't do it, because you'd change the contract (which keeps the class invariants safe).

Zaphod Beeblebrox

Feb 2, 2016, 7:42:28 AM
On Monday, 1 February 2016 17:08:15 UTC, Mr Flibble wrote:

[...]
> I repeat my original point: TDD is NOT about keeping public methods to a
> minimum; it is about creating lots of individually testable public
> methods hence it is the enemy of encapsulation and good design.

Wrong. TDD is about creating a number of testable INTERFACES, not "creating lots of individually testable public methods". That's your idiotic interpretation of TDD. TDD is based upon something that is not different from what we do in Electronics. If you have a component X that integrates components A, B and C, you test those three components' behaviours separately, and once you know they work, you assemble them and test X's behaviour. Any class encapsulates a number of behaviours. Each behaviour has to be componentized and tested separately. The class invariants are kept safe by assembling the components implementing those behaviours in a specific way and not making the integrated behaviour modifiable - the same way you encapsulate electronic components within other components.
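The electronics analogy can be sketched with interfaces (all names here are hypothetical, invented for illustration): each component is testable through its interface, while the assembly keeps its internal wiring private.

```cpp
// Component A is tested through its interface; the board X that
// integrates it is then tested as a whole, without exposing how
// it is wired internally.
struct Amplifier {                       // the testable "pins" of component A
    virtual ~Amplifier() = default;
    virtual int amplify(int signal) const = 0;
};

struct Doubler : Amplifier {             // one concrete component
    int amplify(int signal) const override { return signal * 2; }
};

class Board {                            // the assembly X
public:
    explicit Board(const Amplifier& amp) : amp_(amp) {}
    // The +1 stage is internal wiring; tests of Board see only output().
    int output(int input) const { return amp_.amplify(input) + 1; }
private:
    const Amplifier& amp_;
};
```

Nothing in this arrangement forces `Board` to expose extra public methods; its behaviour is tested at the same boundary its clients use.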

Jorgen Grahn

Feb 2, 2016, 8:25:26 AM
On Tue, 2016-02-02, Zaphod Beeblebrox wrote:
> On Sunday, 31 January 2016 21:23:37 UTC, Mr Flibble wrote:
>
> [...]
>> You obviously don't understand class invariants.
>
> ...I do, and you are obviously a troll :)
>
>>Any language that
>> doesn't offer private methods doesn't offer a way to easily enforce a
>> class invariant
>

> Wrong. The keyword "private" can be changed to "public" at any time,
> by anyone, and a private method can be made publicly accessible with
> a snap of the fingers. The various access specifiers are not a way
> to "lock" a class; they define a contract. A private method tell the
> client programmer that they shouldn't be using that method, and the
> compiler enforces that check. But anyone wants to change the
> contract, can do that, so it's not different than just a syntactic
> convention.

That's a funny way of looking at it ... anyone can edit any program to
say what they want; that doesn't mean private: is just a convention.
However:

> When you write a method with an underscore in
> Python/JavaScript, you are telling the client programmer that they
> are not supposed to call that method directly. It's a way to define
> a contract. Same thing as access specifiers.

I disagree with your wording, but I agree with your point. IME, the
conventions in Python are almost as good as private: in C++, if you
stick strictly to the conventions (which I think people generally do).

(The difference is in Python I have to trust the users of my class not
to be stupid, or perhaps run a linting tool. In C++ I can pretty much
look at a class and tell that noone is messing with the privates,
because it would be not only stupid but remarkably stupid for them to
do so.)

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Zaphod Beeblebrox

Feb 2, 2016, 8:41:48 AM
On Tuesday, 2 February 2016 13:25:26 UTC, Jorgen Grahn wrote:

[...]
> That's a funny way of looking at it ... anyone can edit any program to
> say what they want; that doesn't mean private: is just a convention.

That's a straw man, dude. I never said that "private" does not mean private. I said quite the opposite: I said that "private" DOES mean that the method is private, but it's no different from putting an underscore in front of a method and ASSUMING it is private. Private/public/protected are just conventions. They define a contract. The original objection of the troll was "any language that doesn't offer private methods". That's stupid. ANY language can offer private methods by means of convention.

> conventions in Python are almost as good as private: in C++, if you
> stick strictly to the conventions (which I think people generally do).

Conventions in Python are arguably better than in C++, because they are not enforced by the compiler. Any class can be designed in a wrong way: a method that should be public is instead private. Python does not need the owner of that code to change it before you can use it; you can call the method anyway, and later pick up the fixed version of that class.

> (The difference is in Python I have to trust the users of my class

You have to trust the user of your C++ code too. If you provide them the source code for your class, as I said, anyone is technically able to change your "private" to "public" and use the method anyway.

>In C++ I can pretty much
> look at a class and tell that noone is messing with the privates,
> because it would be not only stupid but remarkably stupid for them to
> do so.)

The point is that there's no difference with those language that do not enforce the access specifiers at compile time. It's just a matter of convention.

Jerry Stuckle

unread,
Feb 2, 2016, 10:02:57 AM2/2/16
to
Nope. I didn't need to test the individual components in my last
project (an audio generator). I put it together, hooked a scope up to
the output and turned it on. I got the expected output, within both
amplitude and frequency tolerances. I didn't check *any* components.

Do you check that every expression in your code works properly? Or do
you see if the function does what it's supposed to?

Cholo Lennon

unread,
Feb 2, 2016, 10:18:03 AM2/2/16
to
I think you are wrong too. TDD is an iterative process where you must
find the solution through that iteration. Defining testable interfaces
is just a consequence. People often confuse TDD with writing/designing
unit tests. TDD *is* a development process; it's an alternative way of
attacking a problem.

From wikipedia (https://en.wikipedia.org/wiki/Test-driven_development):

"Test-driven development (TDD) is a software development process that
relies on the repetition of a very short development cycle: first the
developer writes an (initially failing) automated test case that defines
a desired improvement or new function, then produces the minimum amount
of code to pass that test, and finally refactors the new code to
acceptable standards"

The process is explained here:

https://en.wikipedia.org/wiki/Test-driven_development#Test-driven_development_cycle

Of course, testable components are more difficult to develop because
encapsulation must be broken in order to make rigorous tests (friend
access in C++, package access in Java, dependency injection, etc.).

Zaphod Beeblebrox

unread,
Feb 2, 2016, 11:52:01 AM2/2/16
to
On Tuesday, 2 February 2016 15:02:57 UTC, Jerry Stuckle wrote:

[...]
> Nope. I didn't need to test the individual components in my last
> project (an audio generator). I put it together, hooked a scope up to
> the output and turned it on. I got the expected output, within both
> amplitude and frequency tolerances. I didn't check *any* components.

You are either trolling or not able to understand a very simple concept.
The reason why you did not have to test the individual components in your project is only because you were taking for granted that the company producing those components had tested them and they were perfectly working. The design for each of those components WAS individually validated. It's similar to using a library in your project: you don't test that Boost works, you take it for granted. I am talking about YOUR OWN code. If you had to design each component in your project yourself, you would HAVE to test each component individually, before assembling them.

> Do you check that every expression in your code works properly? Or do
> you see if the function does what it's supposed to?

I check that all the code that *I* *write* works. Not all the code that I *use*.


Jorgen Grahn

unread,
Feb 2, 2016, 11:58:35 AM2/2/16
to
On Mon, 2016-02-01, Cholo Lennon wrote:
> On 02/01/2016 09:23 AM, leigh.v....@googlemail.com wrote:
>> Designing software through trial and error by fixing failing "tests"
> > rather than by applying some intelligence and thinking about
> > abstractions, interfaces, class hierarchies and object hierarchies
> > is, quite frankly, both absurd and possibly even harmful and I am
> > amazed that you cannot see this.
>>
>> TDD is the totally wrong approach to software development. Design
> > first, implement second and unit test third.
>
> I agree with you.

And I kind of agree ...

> The first time I used TDD (don't confuse with writing
> a unit test, a common mistake) was in a TDD course that my company sent
> me. It was really weird. The teacher presented the problem to solve:
> convert decimal numbers to roman numbers. Well, my group designed and
> implemented the solution in a few minutes, wrong the teacher said! after
> that, he explained how we should think and solve the problem using TDD:
> Start with a simple test that fail. After that add the solution for 1
> and test. Add the solution for 2 and test, etc. Try to find a pattern.
> The iteration process was long because the pattern was changing. It was
> your first time, the teacher said; future problems will be solved more
> quickly.
>
> I don't know, after several years of trying to apply the technique in
> C++/Java/Python I'm still unconvinced. The teacher explained that TDD
> avoids "analysis paralysis" (get stuck in a problem due to overthinking
> it).

It's true that when you sketch on some unit tests, you may discover
that your interface or implementation doesn't have to be that
grandiose, after all ...

> You have to think the problem on the fly, try, fail, fix... but
> IMO most of the time thinking in advance, solves the problem more
> quickly and with a better design.

I have never tried by-the-book TDD, but I've noted that when I spend
much energy on the unit tests, I spend less on getting the
implementation itself right. Then after a while I discover that I
cannot tell if it's correct or not, and having the unit tests doesn't
help because I would need to find out how complete /they/ are ...

OTOH, I believe unit testing in general is a wonderful tool; I use it
in several different ways. But I avoid TDD (and make a point to not
use the phrase to describe what I do).

A discussion here last year between me and Ian Collins makes me think
it's a personality or education thing -- that TDD suits some people,
but not others. I'm more into reading my code to convince myself it
correctly implements its interface.

Zaphod Beeblebrox

unread,
Feb 2, 2016, 12:03:10 PM2/2/16
to
On Tuesday, 2 February 2016 15:18:03 UTC, Cholo Lennon wrote:

[...]
> I think you are wrong too.

Let's see. I've been applying TDD for quite a while now, so hardly so, but maybe you're right.

>TDD is an iterative process where you must find the solution through that iteration. Defining testable interfaces is just a consequence.

I don't understand your point. In order to write testable code, you HAVE to define testable interfaces. The iteration you are talking about is just the red-green-refactor process, which is at the foundation of TDD.

> TDD *is* a development process

TDD is also a DESIGN process. Many call it test-driven design, and they're right in doing so. TDD drives a much more modular design, which often corresponds to a much better design. I am not saying anything new. There are many articles about TDD around highlighting the better design deriving from it.

> Of course, testable components are more difficult to develop because
> encapsulation must be broken in order to make rigorous tests (friend
> access in C++, package access in Java, dependency injection, etc).

You don't have to break absolutely anything. The fact that encapsulation breaks, when writing individual testable components, is a misconception.
To build an electronic circuit, you can use resistors, capacitors, diodes, transistors, integrated circuits, etc.. Each of those has been tested individually. The way you assemble them forms the specific "behaviour" you want to implement. You're not breaking the encapsulation of your own circuit, assembling those components in a specific way.

Mr Flibble

unread,
Feb 2, 2016, 12:04:49 PM2/2/16
to
Nevertheless TDD is anathema to best (SOLID) practices related to the
design of non-trivial software systems.

/Flibble


Mr Flibble

unread,
Feb 2, 2016, 12:06:47 PM2/2/16
to
Wrong. TDD is about creating the smallest possible unit, initially
failing, that must then be fixed during that iteration and this unit may
or may not be an interface.

/Flibble


Zaphod Beeblebrox

unread,
Feb 2, 2016, 12:07:26 PM2/2/16
to
On Tuesday, 2 February 2016 17:04:49 UTC, Mr Flibble wrote:

[...]
> Nevertheless TDD is anathema to best (SOLID) practices related to the
> design of non-trivial software systems.

No, it's not.


Mr Flibble

unread,
Feb 2, 2016, 12:08:06 PM2/2/16
to
On 02/02/2016 13:41, Zaphod Beeblebrox wrote:
>
> Conventions in Python are better than in C++, because they are not enforced by the compiler. Any class can be designed in a wrong way: a method that should be public is instead private. Python does not require the owner of that code to change it before you can use it; you can call the method anyway, and pick up the fixed version of the class later.

Wrong. C++ is better (and safer) than Python simply because it makes it
harder to call a private method by mistake.

/Flibble

Zaphod Beeblebrox

unread,
Feb 2, 2016, 12:08:30 PM2/2/16
to
On Tuesday, 2 February 2016 07:40:39 UTC, Öö Tiib wrote:

[...]
> "Agile" basically means "using two-week waterfalls".

No, it doesn't. Why don't you all grab a book and start studying what you don't know, rather than rambling idiocies in a ng?

Mr Flibble

unread,
Feb 2, 2016, 12:09:57 PM2/2/16
to
C++ makes it harder to fuck up than Python, which is just one reason
why C++ is far superior to Python, the latter being not much better than
a toy language in my opinion.

/Flibble


Mr Flibble

unread,
Feb 2, 2016, 12:10:32 PM2/2/16
to
On 02/02/2016 16:51, Zaphod Beeblebrox wrote:
> On Tuesday, 2 February 2016 15:02:57 UTC, Jerry Stuckle wrote:
>
> [...]
>> Nope. I didn't need to test the individual components in my last
>> project (an audio generator). I put it together, hooked a scope up to
>> the output and turned it on. I got the expected output, within both
>> amplitude and frequency tolerances. I didn't check *any* components.
>
> You are either trolling or not able to understand a very simple concept.

Anyone who disagrees with you is a troll it seems.

/Flibble

Mr Flibble

unread,
Feb 2, 2016, 12:17:26 PM2/2/16
to
On 02/02/2016 17:02, Zaphod Beeblebrox wrote:
> On Tuesday, 2 February 2016 15:18:03 UTC, Cholo Lennon wrote:
>
> [...]
>> I think you are wrong too.
>
> Let's see. I've been applying TDD for quite a while now, so hardly so, but maybe you're right.
>
>> TDD is an iterative process where you must find the solution through that iteration. Defining testable interfaces is just a consequence.
>
> I don't understand your point. In order to write testable code, you HAVE to define testable interfaces. The iteration you are talking about is just the red-green-refactor process, which is at the foundation of TDD.
>
>> TDD *is* a development process
>
> TDD is also a DESIGN process. Many call it test-driven design, and they're right in doing so. TDD drives a much more modular design, which often corresponds to a much better design. I am not saying anything new. There are many articles about TDD around highlighting the better design deriving from it.

Test Driven Design is egregious. You cannot design non-trivial software
systems with good quality by trial and error, fixing failing test cases;
not only is it more inefficient as a process than formal up-front design
with periodic design reviews (with associated re-factoring), the
resultant design (and implementation) will not be thought through, will
be difficult to understand and maintain and will be, in a word, slapdash.

If you use Test Driven Design to design your software I cannot imagine
how bad your output actually is. I wouldn't touch your software with a
bargepole mate.

/Flibble

Jerry Stuckle

unread,
Feb 2, 2016, 12:23:50 PM2/2/16
to
On 2/2/2016 11:51 AM, Zaphod Beeblebrox wrote:
> On Tuesday, 2 February 2016 15:02:57 UTC, Jerry Stuckle wrote:
>
> [...]
>> Nope. I didn't need to test the individual components in my last
>> project (an audio generator). I put it together, hooked a scope up to
>> the output and turned it on. I got the expected output, within both
>> amplitude and frequency tolerances. I didn't check *any* components.
>
> You are either trolling or not able to understand a very simple concept.
> The reason why you did not have to test the individual components in your project is only because you were taking for granted that the company producing those components had tested them and they were perfectly working. The design for each of those components WAS individually validated. It's similar to using a library in your project: you don't test that Boost works, you take it for granted. I am talking about YOUR OWN code. If you had to design each components in your project yourself, you would HAVE to test each component individually, before assembling them.
>

No, component manufacturers don't test every individual component that
comes off the line. Capacitors which cost less than $0.01 to make and
have a failure rate of less than 0.0001% aren't worth testing. The same
is true with many ICs.

Some, like microprocessors, cost more and have a higher failure rate, so
each one is tested.

>> Do you check that every expression in your code works properly? Or do
>> you see if the function does what it's supposed to?
>
> I check that all the code that *I* *write* works. Not all the code that I *use*.
>
>

That isn't what I asked. You create expressions. Do you test every
expression that you write, to ensure it works properly? That would be
the coding equivalent of testing every electronic part.

Paavo Helde

unread,
Feb 2, 2016, 12:29:58 PM2/2/16
to
On 2.02.2016 19:06, Mr Flibble wrote:

> Wrong. TDD is about creating the smallest possible unit, initially
> failing, that must then be fixed during that iteration and this unit may
> or may not be an interface.

Using a piece without an interface is not possible, so it would not be
the "smallest possible unit".


Jerry Stuckle

unread,
Feb 2, 2016, 12:31:56 PM2/2/16
to
On 2/2/2016 11:58 AM, Jorgen Grahn wrote:
<snip>
>>
>> I don't know, after several years of trying to apply the technique in
>> C++/Java/Python I'm still unconvinced. The teacher explained that TDD
>> avoids "analysis paralysis" (get stuck in a problem due to overthinking
>> it).
>
> It's true that when you sketch on some unit tests, you may discover
> that your interface or implementation doesn't have to be that
> grandiose, after all ...
>

True, but analysis/design is a skill, also, one that takes a long time
and a lot of practice to learn to do properly. And even then, not
everyone is good at it.

I've been fortunate to have known some great analyzers/designers over
the years (and, unfortunately, some not so great ones). I've learned a
lot from them. But no way would I ever try to compare my skills to theirs.

TDD can avoid analysis paralysis - but like any method, it has its own
weaknesses. No method is perfect. Some people like it and can make it
work well. I haven't found it to be very useful.

>> You have to think the problem on the fly, try, fail, fix... but
>> IMO most of the time thinking in advance, solves the problem more
>> quickly and with a better design.
>
> I have never tried by-the-book TDD, but I've noted that when I spend
> much energy on the unit tests, I spend less on getting the
> implementation itself right. Then after a while I discover that I
> cannot tell if it's correct or not, and having the unit tests doesn't
> help because I would need to find out how complete /they/ are ...
>

Which is another reason for having a different group performing the
tests. Developers should be concentrating on writing good code, not
worrying about how to test the code. Testing is a major distraction.

Öö Tiib

unread,
Feb 2, 2016, 12:55:12 PM2/2/16
to
What book? The manifesto isn't book.

Agile favors working software over comprehensive documentation, and so
there is no point in documenting much more than what is actually planned
to be done, working, reviewed, tested and demonstrated during the iteration.

Agile favors responding to change over following a plan and so the grand
plan is reviewed and if needed adjusted after each short iteration.

Agile favors customer collaboration over contract negotiation, which
means the customer will have a good overview of, and control over, the
direction of the project and its changes. For that there are demos, a
backlog and iteration planning, so the project does not turn into the
madness of monkey business of daily changes.

Agile favors individuals and interactions over processes and tools;
however, for software development not to turn into constantly
interacting individuals instead of getting something done, there must be
clear times for working on work items and for that interaction to take place.

Therefore if the iteration is two weeks, then planning, documentation,
development, testing and demonstration are all done within those two
weeks. IOW, a full waterfall.

Jorgen Grahn

unread,
Feb 2, 2016, 1:51:35 PM2/2/16
to
On Tue, 2016-02-02, Zaphod Beeblebrox wrote:
> On Tuesday, 2 February 2016 13:25:26 UTC, Jorgen Grahn wrote:
>
> [...]
>> That's a funny way of looking at it ... anyone can edit any program to
>> say what they want; that doesn't mean private: is just a convention.
>
> That's a straw man, dude. I never said that "private" does not mean
> private. I said quite the opposite: [...]

Since you snipped the text we were discussing, I'll refrain from
commenting.

...
>> (The difference is in Python I have to trust the users of my class
>
> You have to trust the user of your C++ code too. If you provide them
> the source code for your class, as I said, anyone is technically
> able to change your "private" to "public" and use the method anyway.

We must be talking about different things. When I say "the users of my
class" I'm not talking about people who have downloaded it, or
something. I'm talking about other code in the same project, which is
under my control (at least the version I'm looking at at that moment).

If someone modifies the code, it becomes her problem, not mine.

Ian Collins

unread,
Feb 2, 2016, 1:58:17 PM2/2/16
to
The sky is green.

The sky is green.

The sky is green.

The sky is green.

The sky is green.

The sky is green.

The sky is green.

The sky is green.

You see, repeating bollocks doesn't make it true. If you disagree with
Uncle Bob, present your arguments. Put up or shut up time.

--
Ian Collins

Cholo Lennon

unread,
Feb 2, 2016, 2:04:28 PM2/2/16
to
On 02/02/2016 02:02 PM, Zaphod Beeblebrox wrote:
> On Tuesday, 2 February 2016 15:18:03 UTC, Cholo Lennon wrote:
>
> [...]
>> I think you are wrong too.
>
> Let's see. I've been applying TDD for quite a while now, so hardly
> so, but maybe you're right.
>
>> TDD is an iterative process where you must find the solution
>> through that iteration. Defining testable interfaces is just a
>> consequence.
>
> I don't understand your point. In order to write testable code, you
> HAVE to define testable interfaces. The iteration you are talking
> about is just the red-green-refactor process, which is at the
> foundation of TDD.
>

AFAIK (in my own experience) the interface is constantly mutating due to
the nature of the (TDD) process, so is not important its definition at
the beginning. The interface will eventually converge to its final form
after several iterations.


>> TDD *is* a development process
>
> TDD is also a DESIGN process. Many call it test-driven design, and
> they're right in doing so. TDD drives a much more modular design,
> which often corresponds to a much better design. I am not saying
> anything new. There are many articles about TDD around highlighting
> the better design deriving from it.
>
>> Of course, testable components are more difficult to develop because
>> encapsulation must be broken in order to make rigorous tests (friend
>> access in C++, package access in Java, dependency injection, etc).
>
> You don't have to break absolutely anything. The fact that
> encapsulation breaks, when writing individual testable components,
> is a misconception.
> To build an electronic circuit, you can use resistors, capacitors,
> diodes, transistors, integrated circuits, etc.. Each of those has
> been tested individually. The way you assemble them forms the specific
> "behaviour" you want to implement. You're not breaking the encapsulation
> of your own circuit, assembling those components in a specific way.
>

The analogy with electronic circuits is not always true: the problem
here is that sometimes (more often than you might expect) you don't have
the "internal" components of a class, e.g. a real database, a timer, etc.
You have to mock them in order to simulate the real use of your class.
And if you need to change the internal components, then you have to
break the encapsulation using, for example, dependency injection.

Just to clarify my position: I am not defending TDD, I am just trying to
explain what it really is (in another post I wrote my ideas about TDD).

Mr Flibble

unread,
Feb 2, 2016, 3:44:31 PM2/2/16
to
It's quite simple really: SOLID is all about good DESIGN practices but
when you are using TDD you are stumbling along fixing failing test cases
without designing anything. TDD results in an incoherent mess that
cannot be called "design" by any stretch of the imagination.

/Flibble


Ian Collins

unread,
Feb 2, 2016, 4:05:57 PM2/2/16
to
Have you read
https://blog.8thlight.com/uncle-bob/2014/05/02/ProfessionalismAndTDD.html and
the paper it links?

I've yet to see any of my colleagues "stumbling along fixing failing
test cases"; they all know how to use TDD well. Have you ever worked on
a team that uses it? Given what you have written thus far, that
question is probably rhetorical.

--
Ian Collins

Vir Campestris

unread,
Feb 2, 2016, 4:11:54 PM2/2/16
to
On 02/02/2016 16:58, Jorgen Grahn wrote:
> Then after a while I discover that I
> cannot tell if it's correct or not, and having the unit tests doesn't
> help because I would need to find out how complete /they/ are ...

They're not complete.

In non-trivial systems they can't be complete - there are race
conditions you can't pick up with simple tests. I know, we once had one
it took 18 months to find. Once we knew the trigger we could make it
fail most days. Not every day.

That doesn't mean you shouldn't have tests, of course.

Right now I'm working on Android systems, and I have the reverse of the
waterfall principle: I have all the source, but very little
documentation. I really don't want to reverse engineer ReiserFS to find
out how to open a file, and there are occasions when I've been doing
something equivalent.

Andy

Dombo

unread,
Feb 7, 2016, 2:19:57 PM2/7/16
to
Op 02-Feb-16 om 18:23 schreef Jerry Stuckle:
> On 2/2/2016 11:51 AM, Zaphod Beeblebrox wrote:
>> On Tuesday, 2 February 2016 15:02:57 UTC, Jerry Stuckle wrote:
>>
>> [...]
>>> Nope. I didn't need to test the individual components in my last
>>> project (an audio generator). I put it together, hooked a scope up to
>>> the output and turned it on. I got the expected output, within both
>>> amplitude and frequency tolerances. I didn't check *any* components.
>>
>> You are either trolling or not able to understand a very simple concept.
>> The reason why you did not have to test the individual components in your project is only because you were taking for granted that the company producing those components had tested them and they were perfectly working. The design for each of those components WAS individually validated. It's similar to using a library in your project: you don't test that Boost works, you take it for granted. I am talking about YOUR OWN code. If you had to design each components in your project yourself, you would HAVE to test each component individually, before assembling them.
>>
>
> No, component manufacturers don't test every individual component that
> comes off the line. Capacitors which cost less than $0.01 to make and
> have a failure rate of less than 0.0001% aren't worth testing. The same
is true with many ICs.
>
> Some, like microprocessors, cost more and have a higher failure rate, so
> each one is tested.

But not every transistor in that microprocessor. If one really needs to
test every component (or make private methods public) to determine if a
subsystem is working correctly, there is something fundamentally wrong
with the design. If exhaustive tests prove that a subsystem functions
correctly at the interface level, there is no need to dig any deeper.

From a TDD perspective it is undesirable for tests to have intimate
knowledge of the implementation of a subsystem. Such tests are
brittle (they may fail even when the subsystem functions correctly) and
are useless for refactoring.

Jorgen Grahn

unread,
Feb 7, 2016, 3:26:59 PM2/7/16
to
On Sun, 2016-02-07, Dombo wrote:
...
> If one really needs to
> test every component (or make private methods public) to determine if a
> subsystem is working correctly there is something fundamentally wrong
> with the design. If exhaustive tests proof that a subsystem functions
> correctly at interface level there is no need to dig any deeper.

I agree, but wouldn't use the word "exhaustive" -- it sounds too much
like "test the code with all possible inputs", which is of course
almost always impossible.

> From a TDD perspective it is undesirable for tests to have intimate
> knowledge of the implementation of a subsystem. Such tests are
> brittle (they may fail even when the subsystem functions correctly) and
> are useless for refactoring.

Jerry Stuckle

unread,
Feb 7, 2016, 4:35:16 PM2/7/16
to
That is true. Only the function of the microprocessor is tested. And I
agree - if the interface works as designed, there is nothing else to test.

> From a TDD perspective it is undesirable for tests to have intimate
> knowledge of the implementation of a subsystem. Such tests are
> brittle (they may fail even when the subsystem functions correctly) and
> are useless for refactoring.
>

Very true. The implementation can change at any time. And it's not
necessarily unusual for that to occur.

Zaphod Beeblebrox

unread,
Feb 7, 2016, 5:08:04 PM2/7/16
to
On Tuesday, 2 February 2016 17:23:50 UTC, Jerry Stuckle wrote:

[...]
> No, component manufacturers don't test every individual component that
> comes off the line.

Again, you don't understand. It's not every individual component that is tested; it's the DESIGN of a component that is tested. How dense must you be not to understand this simple concept? If you want to design a specific component, you have to test that it does what it's supposed to do. The production line is a totally different problem. Normally, we test a class, not each single instance of that class.

> That isn't what I asked. You create expressions. Do you test every
> expression that you write, to ensure it works properly?

LOL :) What does that mean? The problem is that you keep talking about something you don't know. Stop being stupid and go study what TDD is! Your question makes no sense. I don't test the expressions; I test the code that is built on those expressions, and yes, if there are any complex expressions, I test that they return the correct values.

Zaphod Beeblebrox

unread,
Feb 7, 2016, 5:10:27 PM2/7/16
to
On Sunday, 7 February 2016 19:19:57 UTC, Dombo wrote:

[...]
> But not every transistor in that microprocessor. If one really needs to
> test every component (or make private methods public) to determine if a
> subsystem is working correctly there is something fundamentally wrong
> with the design.

TDD *IS NOT* about making private methods public.

Mr Flibble

unread,
Feb 7, 2016, 6:19:22 PM2/7/16
to
On 07/02/2016 22:07, Zaphod Beeblebrox wrote:
> On Tuesday, 2 February 2016 17:23:50 UTC, Jerry Stuckle wrote:
>
> [...]
>> No, component manufacturers don't test every individual component that
>> comes off the line.
>
> Again, you don't understand. It's not every individual component that is tested, it's the DESIGN of a component that is tested. How dense must you be not to understand this simple concept? If you want to design a specific component, you have to test that it does what it's supposed to do. Then, the line production is a totally different problem. Normally, we test a class, not each single instance of that class.

Nonsense. The problem here, mate, is that you cannot recognize a bad analogy.

/Flibble

Vir Campestris

unread,
Feb 8, 2016, 4:23:02 PM2/8/16
to
On 07/02/2016 21:35, Jerry Stuckle wrote:
> That is true. Only the function of the microprocessor is tested. And I
> agree - if the interface works as designed, there is nothing else to test.

I've only been involved in the design of one processor, and not for very
long - and it was an SIMD thing, not a standard processor. But the HW
guys went on about "scan chains", which allow them to access _every_
gate and make sure they are all working as designed.

(I was quite pleased to be able to uncover a race condition when you had
a branch, prefetch, and a cache miss all at the same time... this was of
course not detected by the scan chain, as it was working as designed...
and yes, it was fixed for customer silicon)

Andy

Jerry Stuckle

unread,
Feb 8, 2016, 4:37:03 PM2/8/16
to
Sure, but did they test every gate? Or did they just have access to
every gate? Or did they just use the scan chains to debug problems?

You have access to every expression in the source code. But do you test
that every expression works? Or do you see if the unit does what it's
supposed to?

Geoff

unread,
Feb 9, 2016, 12:45:33 AM2/9/16
to
On Mon, 8 Feb 2016 16:36:50 -0500, Jerry Stuckle
<jstu...@attglobal.net> wrote:

>On 2/8/2016 4:22 PM, Vir Campestris wrote:
>> On 07/02/2016 21:35, Jerry Stuckle wrote:
>>> That is true. Only the function of the microprocessor is tested. And I
>>> agree - if the interface works as designed, there is nothing else to
>>> test.
>>
>> I've only been involved in the design of one processor, and not for very
>> long - and it was an SIMD thing, not a standard processor. But the HW
>> guys went on about "scan chains", which allow them to access _every_
>> gate and make sure they are all working as designed.
>>
>> (I was quite pleased to be able to uncover a race condition when you had
>> a branch, prefetch, and a cache miss all at the same time... this was of
>> course not detected by the scan chain, as it was working as designed...
>> and yes, it was fixed for customer silicon)
>>
>> Andy
>
>Sure, but did they test every gate? Or did they just have access to
>every gate? Or did they just use the scan chains to debug problems?
>

The technology is JTAG IEEE 1149.1 and the chains allow every function
on a chip to be tested against the specification and/or the
simulation. Yes, they can and often do test every gate.

The code (VHDL) that expresses the design of the chip is tested
extensively in simulation before it's dedicated to silicon and the
JTAG simulation can also be tested at that time.

Once you have hardware you repeat the tests against the device and
verify it functions as designed and as verified by the designers.

Once the device is installed in a board the test suites that were
designed per the functional specifications for the board and its
components are run to verify the production version of the board.

JTAG tests can also be run on the production line to verify
functionality of samples or 100% of circuit boards or chips.

Design of the tests and hardware design are done concurrently against
the same specification.

>You have access to every expression in the source code. But do you test
>that every expression works? Or do you see if the unit does what it's
>supposed to?

That would depend on the specifications of the unit tests, wouldn't it?
A correct unit test specification would test a function or method such
that every code path is exercised, return values are as expected for the
arguments given, and all corner cases are exercised.

Jerry Stuckle

unread,
Feb 9, 2016, 8:33:56 AM2/9/16
to
On 2/9/2016 12:45 AM, Geoff wrote:
> On Mon, 8 Feb 2016 16:36:50 -0500, Jerry Stuckle
> <jstu...@attglobal.net> wrote:
>
>> On 2/8/2016 4:22 PM, Vir Campestris wrote:
>>> On 07/02/2016 21:35, Jerry Stuckle wrote:
>>>> That is true. Only the function of the microprocessor is tested. And I
>>>> agree - if the interface works as designed, there is nothing else to
>>>> test.
>>>
>>> I've only been involved in the design of one processor, and not for very
>>> long - and it was an SIMD thing, not a standard processor. But the HW
>>> guys went on about "scan chains", which allow them to access _every_
>>> gate and make sure they are all working as designed.
>>>
>>> (I was quite pleased to be able to uncover a race condition when you had
>>> a branch, prefetch, and a cache miss all at the same time... this was of
>>> course not detected by the scan chain, as it was working as designed...
>>> and yes, it was fixed for customer silicon)
>>>
>>> Andy
>>
>> Sure, but did they test every gate? Or did they just have access to
>> every gate? Or did they just use the scan chains to debug problems?
>>
>
> The technology is JTAG IEEE 1149.1 and the chains allow every function
> on a chip to be tested against the specification and/or the
> simulation. Yes, they can and often do test every gate.
>

The design is tested. Individual gates are not - or so I've been told
by Intel engineers - or any other hardware engineers I've worked with.
They don't even test individual gates in SSI chips, which have very few
gates. For instance, they don't test the individual gates in a 7473
flip-flop - they just ensure the chip operates correctly.

> The code (VHDL) that expresses the design of the chip is tested
> extensively in simulation before it's dedicated to silicon and the
> JTAG simulation can also be tested at that time.
>

Again, the design is tested. Not every single gate.

> Once you have hardware you repeat the tests against the device and
> verify it functions as designed and as verified by the designers.
>

Yes, and the operation of the device is tested - not every single gate.

> Once the device is installed in a board the test suites that were
> designed per the functional specifications for the board and its
> components are run to verify the production version of the board.
>

Sure. The operation is tested - not every single gate.

> JTAG tests can also be run on the production line to verify
> functionality of samples or 100% of circuit boards or chips.
>

Not to repeat myself - but the operation is tested - not every single gate.

> Design of the tests and hardware design are done concurrently against
> the same specification.
>

Do I need to repeat myself again?

>> You have access to every expression in the source code. But do you test
>> that every expression works? Or do you see if the unit does what it's
>> supposed to?
>
> That would depend on the specifications of the unit tests wouldn't it?
> A correct unit test specification would test a function or method such
> that every code path is exercised, return values are as expected for
> arguments given and all corner cases are exercised.
>

I have never seen a test specification which tests individual
expressions. They test the operation of the unit.

Scott Lurndal

unread,
Feb 9, 2016, 9:49:55 AM2/9/16
to
Jerry Stuckle <jstu...@attglobal.net> writes:
>On 2/8/2016 4:22 PM, Vir Campestris wrote:
>> On 07/02/2016 21:35, Jerry Stuckle wrote:
>>> That is true. Only the function of the microprocessor is tested. And I
>>> agree - if the interface works as designed, there is nothing else to
>>> test.
>>
>> I've only been involved in the design of one processor, and not for very
>> long - and it was an SIMD thing, not a standard processor. But the HW
>> guys went on about "scan chains", which allow them to access _every_
>> gate and make sure they are all working as designed.
>>
>> (I was quite pleased to be able to uncover a race condition when you had
>> a branch, prefetch, and a cache miss all at the same time... this was of
>> course not detected by the scan chain, as it was working as designed...
>> and yes, it was fixed for customer silicon)
>>
>> Andy
>
>Sure, but did they test every gate? Or did they just have access to
>every gate? Or did they just use the scan chains to debug problems?

We test every gate in simulation. Every block has both BIST and BISR.

Manufacturing uses the scan chains when testing individual processors.

Geoff

unread,
Feb 9, 2016, 12:14:00 PM2/9/16
to
On Tue, 9 Feb 2016 08:33:41 -0500, Jerry Stuckle
<jstu...@attglobal.net> wrote:
[snip]

>The design is tested. Individual gates are not - or so I've been told
>by Intel engineers - or any other hardware engineers I've worked with.
>They don't even test individual gates in SSI chips, which have very few
>gates. For instance, they don't test the individual gates in a 7473
>flip-flop - they just ensure the chip operates correctly.

Your knowledge is outdated. Your knowledge is also second-hand, and you
most likely misunderstood what you were told. As a practicing
engineer in the field who has practical experience with ASIC and SOC
designs I can tell you unequivocally that gates are tested, functional
blocks are tested and the entire design is tested at every step in the
process, especially before the silicon is produced. Once in production
the scan chains are tested against the expected patterns and defects
are investigated and root causes exposed and corrected.

SSI chips are another matter entirely and I never mentioned them and
now you bring them up as some kind of example. Then you cite 7400
series chips where JTAG is not even feasible due to pin count and TTL
technology isn't even used in most large scale projects where JTAG is
an essential part of the validation process. This shows how out of
date your knowledge of this topic really is. Where
flip-flops (of any technology) are incorporated in the LSI and HSI of
an ASIC or SOC I can tell you with absolute certainty they are tested
at the gate level.

You don't need to repeat yourself. You need to shut up about a topic
you know nothing about. Your ignorance is exposed.

Ian Collins

unread,
Feb 9, 2016, 1:04:46 PM2/9/16
to
Geoff wrote:
> On Tue, 9 Feb 2016 08:33:41 -0500, Jerry Stuckle
> <jstu...@attglobal.net> wrote:
> [snip]
>
>> The design is tested. Individual gates are not - or so I've been told
>> by Intel engineers - or any other hardware engineers I've worked with.
>> They don't even test individual gates in SSI chips, which have very few
>> gates. For instance, they don't test the individual gates in a 7473
>> flip-flop - they just ensure the chip operates correctly.
>
> Your knowledge is outdated.

Not even that, just wrong... The ASIC designs I was involved with in the
mid 80s had to have full scan path test coverage. It was a requirement
for military avionics.

--
Ian Collins

Jerry Stuckle

unread,
Feb 9, 2016, 1:05:43 PM2/9/16
to
On 2/9/2016 12:13 PM, Geoff wrote:
> On Tue, 9 Feb 2016 08:33:41 -0500, Jerry Stuckle
> <jstu...@attglobal.net> wrote:
> [snip]
>
>> The design is tested. Individual gates are not - or so I've been told
>> by Intel engineers - or any other hardware engineers I've worked with.
>> They don't even test individual gates in SSI chips, which have very few
>> gates. For instance, they don't test the individual gates in a 7473
>> flip-flop - they just ensure the chip operates correctly.
>
> Your knowledge is outdated. Your knowledge is also second-hand, and
> you most likely misunderstood what you were told. As a practicing
> engineer in the field who has practical experience with ASIC and SOC
> designs I can tell you unequivocally that gates are tested, functional
> blocks are tested and the entire design is tested at every step in the
> process, especially before the silicon is produced. Once in production
> the scan chains are tested against the expected patterns and defects
> are investigated and root causes exposed and corrected.
>

Not really - not when I was consulting with the programmers at Intel.
And I can tell you that every one of the millions of gates on a chip is
NOT tested. In fact, in the finished chip, there aren't even ways to
test the vast majority of the gates. What IS tested is the operation of
the chip.

Yes, the design is tested, and within that design, functional blocks are
tested. But if a half adder works, I don't need to test any of the
individual gates.

> SSI chips are another matter entirely and I never mentioned them and
> now you bring them up as some kind of example. Then you cite 7400
> series chips where JTAG is not even feasible due to pin count and TTL
> technology isn't even used in most large scale projects where JTAG is
> an essential part of the validation process. This shows how out of
> date your knowledge of this topic really is. Where
> flip-flops (of any technology) are incorporated in the LSI and HSI of
> an ASIC or SOC I can tell you with absolute certainty they are tested
> at the gate level.
>

I'm also not talking about SOC's - YOU brought those up. I'm talking
about Pentiums, I3's, I5's, I7's - these aren't even designed by
engineers. The engineers feed in design specs and the chips are
designed by computers. The output is a logical "chip" which can be
tested in pieces and as a whole. Each gate *could* be tested - but it's
a huge waste of time. A NAND gate is a NAND gate. It will always work
as a NAND gate, and there's no reason to think it won't.

Of course, not everything can be tested - timing, for instance, can
cause problems. So once the design is verified at this level, a mask is
created and prototypes are made. The prototypes are then tested - but
not every gate is tested.

And even with JTAG, it is not feasible to test each of the millions of
gates on a current microprocessor. There aren't nearly enough pins to
do it. In fact, the ratio of gates to pins is lower on a 7400 series
chip than it is on any of the current microprocessors. If you can't do
it on a 7400 series chip, you sure as heck can't do it on an I7.

> You don't need to repeat yourself. You need to shut up about a topic
> you know nothing about. Your ignorance is exposed.
>

You should learn to take your own advice.

Jerry Stuckle

unread,
Feb 9, 2016, 1:07:22 PM2/9/16
to
You check every NAND gate to ensure it works as a NAND gate? And you do
it on the chip?

That's different than doing it in simulation. And scan chains still do
not check every gate. They check the operation of small blocks.

Jerry Stuckle

unread,
Feb 9, 2016, 3:03:43 PM2/9/16
to
So you checked every single NAND gate, NOR gate, and inverter on your
chip? Individually?

Ian Collins

unread,
Feb 9, 2016, 10:02:54 PM2/9/16
to
Jerry Stuckle wrote:
> On 2/9/2016 1:04 PM, Ian Collins wrote:
>> Geoff wrote:
>>> On Tue, 9 Feb 2016 08:33:41 -0500, Jerry Stuckle
>>> <jstu...@attglobal.net> wrote:
>>> [snip]
>>>
>>>> The design is tested. Individual gates are not - or so I've been told
>>>> by Intel engineers - or any other hardware engineers I've worked with.
>>>> They don't even test individual gates in SSI chips, which have very few
>>>> gates. For instance, they don't test the individual gates in a 7473
>>>> flip-flop - they just ensure the chip operates correctly.
>>>
>>> Your knowledge is outdated.
>>
>> Not even that, just wrong... The ASIC designs I was involved with in the
>> mid 80s had to have full scan path test coverage. It was a requirement
>> for military avionics.
>>
>
> So you checked every single NAND gate, NOR gate, and inverter on your
> chip? Individually?

Blimey, this is going back over 30 years (and it wasn't my chip! (and
much beer has passed through my brain since then))....

So what I can remember is everything with state (flip-flops) was tested
in the chain. Imagine something like an octal tri-state latch (say a
74574 for the oldies) where test mode connected the 8 latches in series.

So I guess the analogy with unit tests is something like:

line of code -> gate/inverter
function -> flip-flop
test suite -> scan test.

So the scan test tests the flip-flops which verifies the gates. So yes,
any duff gate would cause the test to fail.

--
Ian Collins

Jerry Stuckle

unread,
Feb 9, 2016, 10:15:14 PM2/9/16
to
On 2/9/2016 10:02 PM, Ian Collins wrote:
> Jerry Stuckle wrote:
>> On 2/9/2016 1:04 PM, Ian Collins wrote:
>>> Geoff wrote:
>>>> On Tue, 9 Feb 2016 08:33:41 -0500, Jerry Stuckle
>>>> <jstu...@attglobal.net> wrote:
>>>> [snip]
>>>>
>>>>> The design is tested. Individual gates are not - or so I've been told
>>>>> by Intel engineers - or any other hardware engineers I've worked with.
>>>>> They don't even test individual gates in SSI chips, which have very
>>>>> few
>>>>> gates. For instance, they don't test the individual gates in a 7473
>>>>> flip-flop - they just ensure the chip operates correctly.
>>>>
>>>> Your knowledge is outdated.
>>>
>>> Not even that, just wrong... The ASIC designs I was involved with in the
>>> mid 80s had to have full scan path test coverage. It was a requirement
>>> for military avionics.
>>>
>>
>> So you checked every single NAND gate, NOR gate, and inverter on your
>> chip? Individually?
>
> Blimey, this is going back over 30 years (and it wasn't my chip! (and
> much beer has passed through my brain since then))....
>
> So what I can remember is everything with state (flip-flops) was tested
> in the chain. Imagine something like an octal tri-state latch (say a
> 74574 for the oldies) where test mode connected the 8 latches in series.
>

But you claimed every GATE was tested. What's the real answer?

> So I guess the analogy with unit tests is something like:
>
> line of code -> gate/inverter
> function -> flip-flop
> test suite -> scan test.
>
> So the scan test tests the flip-flops which verifies the gates. So yes,
> any duff gate would cause the test to fail.
>

Ah, so you didn't test every gate - despite your earlier claims.

Vir Campestris

unread,
Feb 11, 2016, 4:36:07 PM2/11/16
to
On 09/02/2016 13:33, Jerry Stuckle wrote:
> The design is tested. Individual gates are not - or so I've been told
> by Intel engineers - or any other hardware engineers I've worked with.
> They don't even test individual gates in SSI chips, which have very few
> gates. For instance, they don't test the individual gates in a 7473
> flip-flop - they just ensure the chip operates correctly.

Jerry,

You've got a bunch of people here who have been either designing the
chips or working closely with those who are.

If you want to tell us that Intel, the biggest manufacturer of
microprocessors on the planet, have worse testing regimes than their
competitors, and you are certain of that because you spoke to one of
their engineers back when a 7473 was still in common use - well, I'm
not going
to argue with you about it.

But for the avoidance of doubt you are wrong.

Andy

Jerry Stuckle

unread,
Feb 11, 2016, 7:02:12 PM2/11/16
to
No, I'm not. I'm telling you they don't perform unnecessary tests.
They know a NAND gate in a design is a NAND gate. They don't need to
test to see that it works. But they do test the circuit the NAND gate
is in. If the circuit works, then they know the NAND gate is working.

And no, I didn't speak to their engineers when the 7473 was in common
use - I used that as an example. I spoke with them much more recently.
And since my original background was EE, I spoke with them on an equal
level - which is why companies call me back. I understand both the
hardware and the software.

So yes, I HAVE been working closely with "those who are". And I
understand how they work.

But you've just once again proven how dense you are.

Jerry Stuckle

unread,
Feb 11, 2016, 9:15:38 PM2/11/16
to
On 2/11/2016 4:35 PM, Vir Campestris wrote:
Oh, and BTW - when I challenged these so-called experts as to whether
they tested every NAND, NOR and inverter in their design to see if they
operated as such, there was no answer.

Confirmation enough. They don't do it, either.

Gert-Jan de Vos

unread,
Feb 15, 2016, 8:43:31 AM2/15/16
to
You might be interested in reading these articles on TDD by a well-respected author. He elaborates on several downsides of TDD here:

http://www.rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
http://www.rbcs-us.com/documents/Segue.pdf

Richard

unread,
Feb 15, 2016, 12:13:13 PM2/15/16
to
[Please do not mail me a copy of your followup]

Gert-Jan de Vos <gert-ja...@onsneteindhoven.nl> spake the secret code
<7b97604b-5808-4e93...@googlegroups.com> thusly:
James Coplien and Uncle Bob discussing TDD:
<https://www.youtube.com/watch?v=KtHQGs3zFAM>
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
The Terminals Wiki <http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

Paavo Helde

unread,
Feb 15, 2016, 1:11:42 PM2/15/16
to
Tried to read these links. To me it seems he is putting up several straw
men and then attacking them vehemently.

One of these is that he seems to be using some non-productive definition
of "unit test" and then goes to great lengths to show that creating and
using such tests is, err, non-productive.


Öö Tiib

unread,
Feb 15, 2016, 1:56:19 PM2/15/16
to
On Monday, 15 February 2016 19:13:13 UTC+2, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> Gert-Jan de Vos <gert-ja...@onsneteindhoven.nl> spake the secret code
> <7b97604b-5808-4e93...@googlegroups.com> thusly:
>
> >You might be interested to read these articles on TDD by a well
> >respected author. He elaborates on several downsides of TDD here:
> >
> >http://www.rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
> >http://www.rbcs-us.com/documents/Segue.pdf
>
> James Coplien and Uncle Bob discussing TDD:
> <https://www.youtube.com/watch?v=KtHQGs3zFAM>

Uncle Bob has 3 rules:

1. You are not allowed to write any production code unless it is to make
a failing unit test pass.
2. You are not allowed to write any more of a unit test than is sufficient
to fail; and compilation failures are failures.
3. You are not allowed to write any more production code than is sufficient
to pass the one failing unit test.

It feels like an oversimplification because isn't it circular? Code
does what tests test, and tests test what code does. The rules do not
tell how to break out of that circle and ensure that the code does what
the actual product (that we are presumably supposed to write) should
do. Is that just taken for granted to happen somehow?

Cholo Lennon

unread,
Feb 15, 2016, 2:20:40 PM2/15/16
to
On 02/15/2016 10:42 AM, Gert-Jan de Vos wrote:
> You might be interested in reading these articles on TDD by a well-respected author. He elaborates on several downsides of TDD here:

TDD is not the same thing as writing unit tests.

>
> http://www.rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
> http://www.rbcs-us.com/documents/Segue.pdf
>

I do not agree with the author. Unit tests are really useful tools,
specially for library developers


Regards


--
Cholo Lennon
Bs.As.
ARG

Wouter van Ooijen

unread,
Feb 15, 2016, 3:31:50 PM2/15/16
to
Op 15-Feb-16 om 7:55 PM schreef Öö Tiib:
> On Monday, 15 February 2016 19:13:13 UTC+2, Richard wrote:
>> [Please do not mail me a copy of your followup]
>>
>> Gert-Jan de Vos <gert-ja...@onsneteindhoven.nl> spake the secret code
>> <7b97604b-5808-4e93...@googlegroups.com> thusly:
>>
>>> You might be interested to read these articles on TDD by a well
>>> respected author. He elaborates on several downsides of TDD here:
>>>
>>> http://www.rbcs-us.com/documents/Why-Most-Unit-Testing-is-Waste.pdf
>>> http://www.rbcs-us.com/documents/Segue.pdf
>>
>> James Coplien and Uncle Bob discussing TDD:
>> <https://www.youtube.com/watch?v=KtHQGs3zFAM>
>
> Uncle Bob has 3 rules:
>
> 1. You are not allowed to write any production code unless it is to make
> a failing unit test pass.

So unit-internal integrity checking (defensive coding) is 'verboten'?

> 2. You are not allowed to write any more of a unit test than is sufficient
> to fail; and compilation failures are failures.

So sitting back, partitioning your problem, and writing tests on the
boundaries (in one go, just because your mind is focussed on it) is out??

> 3. You are not allowed to write any more production code than is sufficient
> to pass the one failing unit test.

Yeah, by now I've got it.

So I want to write f(n) -> n+1.
Case 1: f(0) == 1

int f( int n ){
    return 1;
}

Case 2: f(1) == 2

int f( int n ){
    if( n == 1 ) return 2;
    return 1;
}

Case 3: f(2) == 3

int f( int n ){
    if( n == 2 ) return 3;
    if( n == 1 ) return 2;
    return 1;
}

Wow, I feel productive! Am I getting paid by the line of code or by
the unit test? Doesn't matter, they go hand in hand, and I can write
one of each per minute. Better yet, I can write a Python script that
generates a pair each millisecond! (But can I write that script in
non-TDD fashion, please?)


*any* methodology that forbids people to sit back and think sucks.

Wouter van Ooijen