
High horse

571 views

Mr Flibble
Jun 1, 2018, 10:17:55 PM

I see Ian Collins is on his TDD high horse again. Fecking clueless.

/Flibble

--
"Suppose it’s all true, and you walk up to the pearly gates, and are
confronted by God," Byrne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I’d say, bone cancer in children? What’s that about?" Fry replied.
"How dare you? How dare you create a world in which there is such misery
that is not our fault. It’s not right, it’s utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain? That’s what I would say."

Ian Collins
Jun 1, 2018, 11:00:51 PM

On 02/06/18 14:17, Mr Flibble wrote:
> I see Ian Collins is on his TDD high horse again. Fecking clueless.

Feel free to write code that you don't know works.

--
Ian

Paavo Helde
Jun 2, 2018, 2:29:00 AM

On 2.06.2018 5:17, Mr Flibble wrote:
> I see Ian Collins is on his TDD high horse again. Fecking clueless.

TDD is fine as long as it stays a tool and does not become a goal in
itself. One still has to know what the program must do and how to write
the code to accomplish that. TDD does not mean randomly altering your
code to make the next test pass (for starters, that would require 100%
perfect tests, which never happens). And if any TDD method appears
counter-productive or ineffective in some situation, it can be skipped
or replaced (because it is just a tool).

The same can be said for any tool or methodology of course, but it looks
like TDD is especially prone to misunderstandings and misuse for some
reason (it cannot beat "agile" of course ;-)


Ian Collins
Jun 2, 2018, 2:46:21 AM

Very true!

--
Ian.

Mr Flibble
Jun 2, 2018, 3:53:18 PM

Designing software by incrementally fixing failing tests is NOT designing
software.

Ian Collins
Jun 2, 2018, 4:22:41 PM

On 03/06/18 07:53, Mr Flibble wrote:
> On 02/06/2018 04:00, Ian Collins wrote:
>> On 02/06/18 14:17, Mr Flibble wrote:
>>> I see Ian Collins is on his TDD high horse again.  Fecking clueless.
>>
>> Feel free to write code that you don't know works.
>
> Designing software by incrementally fixing failing tests is NOT designing
> software.

True.

https://www.thinksys.com/development/tdd-myths-misconceptions/

Point 3.

--
Ian.

Daniel
Jun 3, 2018, 9:26:48 AM

Are you suggesting that TDD is the one and only way to write code that you
know "works"? Works in what sense? Are you one of those people that talk
about "100 percent" test coverage? Do you write tests for all your accessors, the way Robert Martin and Kent Beck do in their books? Do you think that is the best use of your time for keeping your software free of defects?

A few points.

One, for a non-trivial specification, it would presumably take an arbitrarily large number of tests, in some cases an infinite number, to cover off all outcomes. Since nobody has that much time, we sample, sometimes randomly, sometimes deterministically, e.g. edge cases. It's nonsensical to talk about "100" percent coverage unless you're talking about software verified through mathematical proof, and if you're talking about that, you're not talking about C++.

Two, there is a significant amount of validation that cannot be approached through unit tests, I'm thinking particularly of simulation, where we need to run code for hours and calculate the statistical properties of outcomes.

Best regards,
Daniel

Daniel
Jun 3, 2018, 9:49:34 AM

In what way does point 3 refute Mr Flibble's point?

The books about TDD by authors such as Kent Beck and Robert Martin are
replete with chapters where the developer starts by writing some test
for something, e.g. getCustomer, attempts to compile, and discovers that
the compile fails! They then add some code or do some refactoring, until
the test passes.

Best regards,
Daniel

Jorgen Grahn
Jun 3, 2018, 11:43:49 AM

On Sun, 2018-06-03, Daniel wrote:
> On Friday, June 1, 2018 at 11:00:51 PM UTC-4, Ian Collins wrote:
>> On 02/06/18 14:17, Mr Flibble wrote:
>> > I see Ian Collins is on his TDD high horse again. Fecking clueless.
>>
>> Feel free to write code that you don't know works.
>>
> Are you suggesting that TDD is the one and only way to write code
> that you know "works"?

Are there /any/ ways?

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Jorgen Grahn
Jun 3, 2018, 11:50:29 AM

On Sun, 2018-06-03, Daniel wrote:
> On Saturday, June 2, 2018 at 4:22:41 PM UTC-4, Ian Collins wrote:
>> On 03/06/18 07:53, Mr Flibble wrote:
>> >
>> > Designing software by incrementally fixing failing tests is NOT designing
>> > software.
>>
>> True.
>>
>> https://www.thinksys.com/development/tdd-myths-misconceptions/
>>
>> Point 3.
>>
> In what way does point 3 refute Mr Flibble's point?

Read it again, and work your way through the layers of negation.
Flibble, Ian and the article agree that TDD isn't a software design
method.

> The books about TDD by authors such as Kent Beck and Robert Martin are
> replete with chapters where the developer starts by writing some test
> for something, e.g. getCustomer, attempts to compile, and discovers that
> compile fails! They then add some code or do some refactoring, until the
> test passes.

That's certainly the impression I got too, from reading Beck's book
once. His argument seemed weak to me.

woodb...@gmail.com
Jun 3, 2018, 3:15:49 PM

On Sunday, June 3, 2018 at 10:43:49 AM UTC-5, Jorgen Grahn wrote:
> On Sun, 2018-06-03, Daniel wrote:
> > Are you suggesting that TDD is the one and only way to write code
> > that you know "works"?
>
> Are there /any/ ways?
>

“We make men without chests and expect from them virtue and
enterprise. We laugh at honor and are shocked to find traitors
in our midst.” C.S. Lewis

My approach is seeking divine guidance and help (Providence).
This worked for the Pilgrims at least somewhat.


Brian
Ebenezer Enterprises
http://webEbenezer.net

Daniel
Jun 3, 2018, 4:26:57 PM

On Sunday, June 3, 2018 at 3:15:49 PM UTC-4, woodb...@gmail.com wrote:
> On Sunday, June 3, 2018 at 10:43:49 AM UTC-5, Jorgen Grahn wrote:
> > On Sun, 2018-06-03, Daniel wrote:
> > > Are you suggesting that TDD is the one and only way to write code
> > > that you know "works"?
> >
> > Are there /any/ ways?
> >
>
> “We make men without chests and expect from them virtue and
> enterprise. We laugh at honor and are shocked to find traitors
> in our midst.” C.S. Lewis
>
> My approach is seeking divine guidance and help (Providence).
>

Could be working. I notice that nobody has reported an issue to Ebenezer-
group/onwards

https://github.com/Ebenezer-group/onwards/issues

Unless there's another explanation ...

Daniel

Ian Collins
Jun 3, 2018, 4:47:09 PM

On 04/06/18 01:26, Daniel wrote:
> On Friday, June 1, 2018 at 11:00:51 PM UTC-4, Ian Collins wrote:
>> On 02/06/18 14:17, Mr Flibble wrote:
>>> I see Ian Collins is on his TDD high horse again. Fecking
>>> clueless.
>>
>> Feel free to write code that you don't know works.
>>
> Are you suggesting that TDD is the one and only way to write code
> that you know "works"?

No.

> Works in what sense?

Works as in does what the detailed design (the tests) says it is
supposed to do.

> Are you one of those
> people that talk about "100 percent" test coverage?

No, in software, there is always the unexpected. But I do strive to
minimise the unexpected through test coverage.

> Do you write
> tests for all your accessors, the way Robert Martin and Kent Beck do
> in their books?

Trivial accessors, no, and neither do they.

> Do you think that is the best use of your time for
> keeping your software free of defects?

Yes, because it helps me write code faster and better. Maybe not
initially, but the time saved chasing down bugs, retesting after
changes, or stepping through code in a debugger pays off pretty quickly.
Tests are an investment in the future of the code.

> A few points.
>
> One, for a non-trivial specification, it would presumably take an
> arbitrarily large number of tests, in some cases an infinite number,
> to cover off all outcomes. Since nobody has that much time, we
> sample, sometimes randomly, sometimes deterministically, e.g. edge
> cases. It's nonsensical to talk about "100" percent coverage unless
> you're talking about software verified through mathematical proof,
> and if you're talking about that, you're not talking about C++.

I don't think anyone claims otherwise. See the last point in the link I
posted.

> Two, there is a significant amount of validation that cannot be
> approached through unit tests, I'm thinking particularly of
> simulation, where we need to run code for hours and calculate the
> statistical properties of outcomes.

That's correct, there should always be multiple levels of testing, again
I don't think anyone claims otherwise.

--
Ian.

Daniel
Jun 3, 2018, 5:42:05 PM

On Sunday, June 3, 2018 at 4:47:09 PM UTC-4, Ian Collins wrote:
> On 04/06/18 01:26, Daniel wrote:
>
> > Works in what sense?
>
> Works as in does what the detailed design (the tests) says it is
> supposed to do.

Except the tests aren't the detailed design, and never were. This is simply a
conceit of TDD. Many of my open source contributions are about implementing
Internet Engineering Task Force (IETF) recommendations, and I don't think
I could claim much conformance if I limited myself to available test
suites. Tests are better understood as sampling the specification.

>
> > Do you write
> > tests for all your accessors, the way Robert Martin and Kent Beck do
> > in their books?
>
> Trivial accessors, no and neither to they.

Martin does, in his book Extreme Programming in Practice (2001).

> > One, for a non-trivial specification, it would presumably take an
> > arbitrarily large number of tests, in some cases an infinite number,
> > to cover off all outcomes. Since nobody has that much time, we
> > sample, sometimes randomly, sometimes deterministically, e.g. edge
> > cases. It's nonsensical to talk about "100" percent coverage unless
> > you're talking about software verified through mathematical proof,
> > and if you're talking about that, you're not talking about C++.
>
> I don't think anyone claims otherwise.

You can't have it both ways, that tests _are_ the specification, as TDD'ers
claim, and that tests _sample_ the specification, as everybody else would
have it.

Best regards,
Daniel

Melzzzzz
Jun 3, 2018, 6:23:28 PM

On 2018-06-03, woodb...@gmail.com <woodb...@gmail.com> wrote:
> On Sunday, June 3, 2018 at 10:43:49 AM UTC-5, Jorgen Grahn wrote:
>> On Sun, 2018-06-03, Daniel wrote:
>> > Are you suggesting that TDD is the one and only way to write code
>> > that you know "works"?
>>
>> Are there /any/ ways?
>>
>
> “We make men without chests and expect from them virtue and
> enterprise. We laugh at honor and are shocked to find traitors
> in our midst.” C.S. Lewis
>
> My approach is seeking divine guidance and help (Providence).

Intuition?

> This worked for the Pilgrims at least somewhat.

Intuition is not always right...

--
press any key to continue or any other to quit...

Ian Collins
Jun 3, 2018, 6:25:24 PM

On 04/06/18 09:41, Daniel wrote:
> On Sunday, June 3, 2018 at 4:47:09 PM UTC-4, Ian Collins wrote:
>> On 04/06/18 01:26, Daniel wrote:
>>
>>> Works in what sense?
>>
>> Works as in does what the detailed design (the tests) says it is
>> supposed to do.
>
> Except the tests aren't the detailed design, never were. This is simply a
> conceit of TDD. Much of my open source contributions are about implementing
> Internet Engineering Task Force (IETF) recommendations, and I don't think
> that I could claim much conformance if I limited myself to available test
> suites. Tests are better understood as sampling the specification.

How can tests sample something that doesn't exist? Most products I have
worked on (barring OTT DoD stuff in the 80s and safety-critical work) didn't
have what heavyweight processes would recognise as a detailed design
specification. Most stop at the layer above, or at user stories, which are
probably the most common "requirement" in non-safety-critical projects.

If the "specification" is a user story, the tests break that down into a
series of steps needed to complete the story. The tests bridge the gap
between what the user asks for and the code. The entity that bridges
the gap between what the user asks for and the code is usually called a
specification.

Would you consider writing a detailed design to bridge the gap between
what the user asks for and the code as sampling the specification? If
yes, where is the conceit? If not, how does a requirement written in
prose differ from one written in code?

I have written software to generate test cases from the clauses in a
formal detailed design on medical products (IEC 62304 class B&C), and
these ended up being a subset of the actual test cases for the code. So
the tests were more detailed than the formal detailed design; at least
until they were fed back into the design.

>>> Do you write
>>> tests for all your accessors, the way Robert Martin and Kent Beck do
>>> in their books?
>>
>> Trivial accessors, no and neither to they.
>
> Martin does, in his book Extreme Programming in Practice (2001)

That isn't what Kent Beck described.

>>> One, for a non-trivial specification, it would presumably take an
>>> arbitrarily large number of tests, in some cases an infinite number,
>>> to cover off all outcomes. Since nobody has that much time, we
>>> sample, sometimes randomly, sometimes deterministically, e.g. edge
>>> cases. It's nonsensical to talk about "100" percent coverage unless
>>> you're talking about software verified through mathematical proof,
>>> and if you're talking about that, you're not talking about C++.
>>
>> I don't think anyone claims otherwise.
>
> You can't have it both ways, that tests _are_ the specification, as TDD'ers
> claim, and that tests _sample_ the specification, as everybody else would
> have it.

I was responding to "It's nonsensical to talk about "100" percent coverage".

--
Ian.

woodb...@gmail.com
Jun 3, 2018, 8:04:18 PM

Joseph had dreams about the future and shared his dreams
with his brothers. They reacted badly and thought about
killing him. They wound up selling him as a slave. They
had a baseless hatred against their brother.

When I started working on my software in 1999, it wasn't
obvious that ++C would remain one of the best languages for
software development. It also wasn't obvious that an
on-line approach (cloud computing) would become so
important. So I say "thanks G-d" for helping me get
those and other things right.

https://www.biblestudytools.com/john/passage/?q=john+15:18-25

Daniel
Jun 3, 2018, 9:11:14 PM

On Sunday, June 3, 2018 at 6:25:24 PM UTC-4, Ian Collins wrote:
>
> Would you consider writing a detailed design to bridge the gap between
> what the user asks for and the code as sampling the specification?

Probably not anymore :-) Because I don't want to, and don't have to.

> If yes, where is the conceit? If not, how does a requirement written in prose
> differ from one written in code?

I work mostly on financial trading and risk projects. The specification,
such as it is, consists of a wide variety of business documents, technical
notes that can be expected to contain errors, dozens of Matlab scripts,
spreadsheets and VBA code, interview notes, business processes (a guy runs
over to a Bloomberg terminal and downloads some data daily) and much else. I
regard this collective body of artifacts as the "specification", to be
reflected upon and used as the basis for a computer application.

In general I would say that a requirement written in prose is easier to read
and think about than one written in code. It's nice to have specifications
written in a formal notation such as BNF, but even that requires some
supplementary prose.
>
> I have written software to generate test cases from the clauses in
> formal detailed design on medical products (IEC 62304 class B&C) and
> these ended up being a subset of the actual test cases for the code. So
> the tests were more detailed than the formal detailed design; at least
> until they were fed back into the design.
>
If you're generating test cases from the clauses in formal notation, I don't see how that's test first.

Best regards,
Daniel

seeplus
Jun 3, 2018, 9:58:26 PM

On Monday, June 4, 2018 at 10:04:18 AM UTC+10, woodb...@gmail.com wrote:
> So I say "thanks G-d" for helping me get
> those and other things right.
>
> https://www.biblestudytools.com/john/passage/?q=john+15:18-25
>
>
> Brian
> Ebenezer Enterprises
> http://webEbenezer.net

So you have a robot snipping away the cancer around your prostate gland...
Would you prefer:
- the programmer thought long and hard and decided not to
use "auto" in this case, OR:

- just prayed to a mythical faery that some divinely inspired snipping
"for loop" should work... somehow?

Ian Collins
Jun 3, 2018, 10:15:00 PM

On 04/06/18 13:11, Daniel wrote:
> On Sunday, June 3, 2018 at 6:25:24 PM UTC-4, Ian Collins wrote:
>>
>> Would you consider writing a detailed design to bridge the gap between
>> what the user asks for and the code as sampling the specification?
>
> Probably not anymore :-) Because I don't want to, and don't have to.

Me too, thanks to tests :)

>> If where is the conceit? If no, how does a requirement written in prose
>> differ from one written in code?
>
> I work mostly on financial trading and risk projects. The specification
> such as it is consists of a wide variety of business documents, technical
> notes that can be expected to contain errors, dozens of Matlab scripts,
> spreadsheets and VBA code, interview notes, business processes (a guy runs
> over to a Bloomberg terminal and downloads some data daily) and much else. I
> regard this collective body of artifacts as the "specification", to be
> reflected upon and used as the basis for a computer application.
>
> In general I would say that a requirement written in prose is easier to read
> and think about than one written in code. It's nice to have specifications
> written in a formal notation such as BNF, but even that requires some
> supplementary prose.

It is also something a typical customer or, in our case, the product
specialists and UX team, lack the skills to produce. Even when written
in a formal notation, you still can't guarantee a written specification
truly reflects the code.

I also believe most programmers fall asleep way sooner reading
requirements than reading code.

>> I have written software to generate test cases from the clauses in
>> formal detailed design on medical products (IEC 62304 class B&C) and
>> these ended up being a subset of the actual test cases for the code. So
>> the tests were more detailed than the formal detailed design; at least
>> until they were fed back into the design.
>>
> If you're generating test cases from the clauses in formal notation, I don't see how that's test first.

Well, the test cases were empty prototypes that did come before the code...

This was an investigation into generating documentation with validated
links between clauses in the various layers. The test case was just the
final step to verify that each clause had a test.

--
Ian.

Juha Nieminen
Jun 4, 2018, 3:12:29 AM

I think that TDD is poorly suited for the kind of programming I do,
which is graphically-heavy video games.

How exactly do you write a unit test for "this sprite should appear
here, and its fragment shader should make it look like this"?

There are certain things that could ostensibly be automatically tested,
but the machinery required to do that feels excessively laborious and
complicated to implement. Such as something like "when the projectile
shot by the player hits this enemy sprite, it should launch this particle
effect and run this fragment shader". I suppose theoretically that could
be automatically tested, but the code required for it would be quite
complicated.

Another thing I have noticed about TDD is that it's difficult to test
the behavior of private member functions (without exposing them to
outside code, breaking object-oriented modular design). Essentially
you would need to add testing code to the class itself, which I think
isn't something TDD is supposed to do.

David Brown
Jun 4, 2018, 3:12:55 AM

On 04/06/18 02:04, woodb...@gmail.com wrote:
> On Sunday, June 3, 2018 at 3:26:57 PM UTC-5, Daniel wrote:
>> On Sunday, June 3, 2018 at 3:15:49 PM UTC-4, woodb...@gmail.com wrote:
>>> On Sunday, June 3, 2018 at 10:43:49 AM UTC-5, Jorgen Grahn wrote:
>>>> On Sun, 2018-06-03, Daniel wrote:
>>>>> Are you suggesting that TDD is the one and only way to write code
>>>>> that you know "works"?
>>>>
>>>> Are there /any/ ways?
>>>>
>>>
>>> “We make men without chests and expect from them virtue and
>>> enterprise. We laugh at honor and are shocked to find traitors
>>> in our midst.” C.S. Lewis
>>>
>>> My approach is seeking divine guidance and help (Providence).
>>>
>>
>> Could be working. I notice that nobody has reported an issue to Ebenezer-
>> group/onwards
>>
>> https://github.com/Ebenezer-group/onwards/issues
>>
>> Unless there's another explanation ...
>>
>
> Joseph had dreams about the future and shared his dreams
> with his brothers. They reacted badly and thought about
> killing him. They wound up selling him as a slave. They
> had a baseless hatred against their brother.

Are you trying to say you think people hate you? They don't - not as
far as I can see. I dislike some of your opinions (like your
homophobia), and I advise against mixing personal philosophies and
religion with professional work, but I certainly don't /hate/ you. I
would not be trying to give you advice if I did!

Or are you saying people are afraid of your dreams? If so, that is
utter nonsense. You can see what people think of your online code
generator plans - the reaction is /not/ one of fear.

Or are you simply showing your megalomania again? You have previously
claimed to be a second Noah and Jesus's appointed vice-president on
earth with special responsibilities for C++. So maybe being merely a
reincarnated Joseph is progress.

"But the fact that some geniuses were laughed at does not imply that all
who are laughed at are geniuses. They laughed at Columbus, they laughed
at Fulton, they laughed at the Wright Brothers. But they also laughed at
Bozo the Clown." Carl Sagan


>
> When I started working on my software in 1999, it wasn't
> obvious that ++C would remain one of best languages for
> software development.

It was a solid bet at the time that C++ would remain popular for a good
while to come. Not a sure bet, but a reasonable one. And it was a
self-fulfilling prophecy, because every second programmer the world over
made exactly the same bet.

> It also wasn't obvious that an
> on-line approach (cloud computing) would become so
> important.

Betting on online code generation in 1999 would certainly have been a
bold move. Betting on online "something" or "the internet" was not
unreasonable.

But while cloud computing is important, online code generation is almost
entirely irrelevant.

> So I say "thanks G-d" for helping me get
> those and other things right.

Thank whoever you want for your successes (but don't forget to blame
them equally for your failures or disappointments). Just don't expect
others to hold the same views. And if you want to get any customers,
you might want to tell them that it was in fact /you/ who had the ideas
and wrote the code. No potential customer is going to want code written
by your god - they will only come to you for code /you/ wrote.

Ian Collins
Jun 4, 2018, 3:49:54 AM

On 04/06/18 19:12, Juha Nieminen wrote:
> Ian Collins <ian-...@hotmail.com> wrote:
>> On 02/06/18 14:17, Mr Flibble wrote:
>>> I see Ian Collins is on his TDD high horse again. Fecking clueless.
>>
>> Feel free to write code that you don't know works.
>
> I think that TDD is poorly suited for the kind of programming I do,
> which is graphically-heavy video games.

I have never worked in that field, so I can't really comment.

> How exactly do you write a unit test for "this sprite should appear
> here, and its fragment shader should make it look like this"?

Our UI teams do unit test similar component functionality, but again
that is not my area.

> There are certain things that could ostensibly be automatically tested,
> but the machinery required to do that feels excessively laborious and
> complicated to implement. Such as something like "when the projectile
> shot by the player hits this enemy sprite, it should launch this particle
> effect and run this fragment shader". I suppose theoretically that could
> be automatically tested, but the code required for it would be quite
> complicated.

Some of our test support infrastructure is very complex, but it only had
to be written once and it is very widely used, so the investment was
worthwhile. If the code doesn't have a long lifetime, it might not be
worth the cost. We have a considerable amount of legacy (early 2000s)
code which was never designed for test; if it had been, testing would
be much easier.

> Another thing I have noticed about TDD is that it's difficult to test
> the behavior of private member functions (without exposing them to
> outside code, breaking object-oriented modularity design.) Essentially
> you would need to add testing code to the class itself, which I think
> isn't something TDD is supposed to do.

If the functionality can't be tested through the public interface, the
usual approach (which often occurs when working with existing code) is
to extract the private member function functionality into a class and
test that. This does, surprisingly often, produce a useful reusable
component.

--
Ian.

leigh.v....@googlemail.com
Jun 4, 2018, 7:22:23 AM

Reducing encapsulation in the process. I have said before that TDD is the
enemy of encapsulation, and therefore by extension an enemy of good design,
as encapsulation is an important design principle.

/Leigh

James Kuyper

unread,
Jun 4, 2018, 11:54:25 AM6/4/18
to
On 06/03/2018 09:26 AM, Daniel wrote:
> On Friday, June 1, 2018 at 11:00:51 PM UTC-4, Ian Collins wrote:
>> On 02/06/18 14:17, Mr Flibble wrote:
>>> I see Ian Collins is on his TDD high horse again. Fecking clueless.
>>
>> Feel free to write code that you don't know works.
>>
> Are you suggesting that TDD is the one and only way to write code that you
> know "works"? Works in what sense? Are you one of those people that talk
> about "100 percent" test coverage? ...
> ... Do you think that is the best use of your time for keeping your software free of defects?
>
> A few points.
>
> One, for a non-trivial specification, it would presumably take an
> arbitrarily large number of tests, in some cases an infinite number, to
> cover off all outcomes. Since nobody has that much time, we sample,
> sometimes randomly, sometimes deterministically, e.g. edge cases. It's
> nonsensical to talk about "100" percent coverage unless you're talking
> about software verified through mathematical proof, and if you're
> talking about that, you're not talking about C++.

I worked for a while under a requirement that our unit tests provide 100%
coverage of our code - but I strongly suspect that what we were talking
about involved different numbers in the numerator and denominator of
the fraction being converted to 100% than what you're apparently
thinking of. In our case, the denominator was the number of
full-expressions in the code. To this number were added any
sub-expressions that were not guaranteed to be executed when the
containing expression was executed (such as b and c in a?b:c, or b in
a&&b). The numerator was the number of such expressions which should be
executed by the unit test if the code was correct (this is obviously not
black-box testing, which we also did).

This was combined with other requirements, such as the requirement that
if a branch in the code depends upon the value of an integer expression,
there must be at least two cases where that branch is tested: one where
the expression should have the minimum possible value that should cause
the high branch to be chosen, and one where it should have the maximum
possible value that should cause the low branch to be chosen. This
principle was extended in the obvious fashion for switch() statements.
For purposes of this requirement, empty branches (such as if(a) b();)
are treated as if there were an expression-statement on the missing
branch (if(a) b(); else dummy();). Furthermore, there was a requirement
that if two test cases caused different expressions to be executed, the
test cases must be set up so that whether or not those expressions were
executed causes a detectable difference in the test results.

Was creating such thorough tests "the best use of [my] time to keep
[my] software free of defects"? Well, our defect rate rose significantly
when management stopped requiring that we do such detailed testing, and
started pressuring us to make deliveries too quickly to allow such
testing to be done, so I suspect that the answer is "yes".

> Two, there is a significant amount of validation that cannot be
> approached through unit tests, I'm thinking particularly of
> simulation, where we need to run code for hours and calculate the
> statistical properties of outcomes.

I would be very surprised to learn that the "T" in TDD referred only to
unit tests. I would assume that if a requirement involved the
interaction of a program with the external environment (for instance,
there is a requirement that, under certain circumstances, a missile must
be fired), the corresponding test would include evaluating the impact on
the external environment (i.e. confirming that the missile (or at least,
a dummy missile) was actually fired).

James Kuyper
Jun 4, 2018, 12:03:16 PM

On 06/04/2018 03:12 AM, Juha Nieminen wrote:
> Ian Collins <ian-...@hotmail.com> wrote:
>> On 02/06/18 14:17, Mr Flibble wrote:
>>> I see Ian Collins is on his TDD high horse again. Fecking clueless.
>>
>> Feel free to write code that you don't know works.
...
> Another thing I have noticed about TDD is that it's difficult to test
> the behavior of private member functions (without exposing them to
> outside code, breaking object-oriented modularity design.

You test the private member function by calling whichever public
functions call it, directly or indirectly. If execution of the private
member function is needed to achieve some required result, it's always
possible to perform such a test, by checking to see whether that
required result occurred. If execution of a private member function has
no testable consequences, then that private member function isn't
actually needed.

Mr Flibble
Jun 4, 2018, 12:16:46 PM

On 04/06/2018 17:03, James Kuyper wrote:
> ... If execution of a private member function has
> no testable consequences, then that private member function isn't
> actually needed.

False. A private member function is allowed to call other private member
functions without the need to maintain a class invariant; these other
private member functions may indeed have no testable consequences unless
taken together with the other private member functions that call them.

TDD is the enemy of encapsulation ergo TDD is the enemy of good (object
oriented) software design.

Richard
Jun 4, 2018, 1:01:58 PM

[Please do not mail me a copy of your followup]

Ian Collins <ian-...@hotmail.com> spake the secret code
<fnefio...@mid.individual.net> thusly:

>On 02/06/18 14:17, Mr Flibble wrote:
>> I see Ian Collins is on his TDD high horse again. Fecking clueless.
>
>Feel free to write code that you don't know works.

From his high horse it's much easier to hit the ball with his polo mallet :-).
--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>

Richard

Jun 4, 2018, 1:09:58 PM
[Please do not mail me a copy of your followup]

Jorgen Grahn <grahn...@snipabacken.se> spake the secret code
<slrnph83hp.31...@frailea.sa.invalid> thusly:
In a language like C++ where we have to contend with build systems and
compilers and linkers and their funny ways, getting code to a
syntactically correct state can be non-trivial at times. Going in
small steps, e.g. from compilation error to failing test to passing
test is one way to avoid making stupid mistakes. In my experience
with doing TDD in C++, the "satisfy the compiler" or "satisfy the
build system" steps generally only happen at the start of a new
library or executable. In those situations, making sure my build
system is setup correctly is a useful small step. When working on
adding new features to existing code that was written with TDD, it's
not usually necessary to proceed in those smaller steps. Experts take
larger steps than novices.

I've developed code without TDD for decades before I adopted TDD. I
certainly know how to do it both ways and for an existing code base,
it may simply be less work to do things the "old way" than adopt TDD
because there might be too much decoupling needed to do strict TDD.
However, having done it both ways for a number of years, I have found
that I am more productive with TDD than without it. I find my bugs
faster and spend less time doing long tedious integration debugging
sessions. (I've been in a tedious integration debugging session for
about two weeks right now at work.)

James Kuyper

Jun 4, 2018, 1:34:53 PM
On 06/04/2018 12:16 PM, Mr Flibble wrote:
...
> False. A private member function is allowed to call other private member
> functions without the need to maintain a class invariant; these other
> private member functions may indeed have no testable consequences unless
> taken together with the other private member functions that call them.

So perform a test that takes them together with the other private member
functions (which must also, of course, be invoked indirectly). What's
the problem? The important thing isn't how the code is subdivided into
individual functions, whether public or private. The important thing is
what the code actually does. If you can stub out a private member
function without producing any testable failure to meet requirements,
you don't need that function to meet requirements - it's inherently and
by definition that simple.
For example, consider a private function which is intended to meet the
requirement that it cause a visit to the user by the Invisible Pink Unicorn
<https://en.wikipedia.org/wiki/Invisible_Pink_Unicorn>. By Her very
nature, it's inherently impossible to prove whether or not She has
visited, so there would be no testable consequences of stubbing out that
function. But for precisely the same reason, there's no problem with
delivering that code with the function still stubbed out.

Mr Flibble

Jun 4, 2018, 1:54:50 PM
On 04/06/2018 18:34, James Kuyper wrote:
> On 06/04/2018 12:16 PM, Mr Flibble wrote:
> ...
>> False. A private member function is allowed to call other private member
>> functions without the need to maintain a class invariant; these other
>> private member functions may indeed have no testable consequences unless
>> taken together with the other private member functions that call them.
>
> So perform a test that takes them together with the other private member
> functions (which must also, of course, be invoked indirectly). What's
> the problem? The important thing isn't how the code is subdivided into
> individual functions, whether public or private. The important thing is
> what the code actually does. If you can stub out a private member
> function without producing any testable failure to meet requirements,
> you don't need that function to meet requirements - it's inherently and
> by definition that simple.

In a word: bullshit. Private member functions don't have to be testable in
isolation BY DESIGN but that doesn't mean they can be removed. If you don't
get this then you are beyond help: your designs must be a mess.

> For example, consider a private function which is intended to meet the
> requirement that it cause a visit to the user by the Invisible Pink Unicorn
> <https://en.wikipedia.org/wiki/Invisible_Pink_Unicorn>. By Her very
> nature, it's inherently impossible to prove whether or not She has
> visited, so there would be no testable consequences of stubbing out that
> function. But for precisely the same reason, there's no problem with
> delivering that code with the function still stubbed out.

I've seen a lot of bad analogies in my day but that just takes the
biscuit. You really are clueless mate. I suggest you do a bit of reading
starting with a read about class invariants and encapsulation.

Dan Cross

Jun 4, 2018, 2:02:07 PM
In article <fnj2ar...@mid.individual.net>,
Ian Collins <ian-...@hotmail.com> wrote:
>On 04/06/18 01:26, Daniel wrote:
>> On Friday, June 1, 2018 at 11:00:51 PM UTC-4, Ian Collins wrote:
>>> On 02/06/18 14:17, Mr Flibble wrote:
>>>> I see Ian Collins is on his TDD high horse again. Fecking
>>>> clueless.
>>>
>>> Feel free to write code that you don't know works.
>>>
>> Are you suggesting that TDD is the one and only way to write code
>> that you know "works"?
>
>No.
>
>> Works in what sense?
>
>Works as in does what the detailed design (the tests) says it is
>supposed to do.

"Test as design" is an attempt to respond to the reality that a
detailed design document (in the waterfall sense) is destined to
become hopelessly out of date in very short order. But it does
not follow that this is the only way to document a "design", nor
that it is even a good way to do so (let alone sufficient for
long-term maintenance). Tests can certainly perform some
documentary function, but they fall far short of the sort of
design that may be required for non-trivial software.

The example was brought up of implementing IETF standards; in
that case, an RFC is sort of like a design document for a
protocol (yes, there are RFCs for other sorts of things, but
let's keep it simple). I hope we could all agree that if a
protocol were described by a series of tests, that would be a
poor way to communicate a description of the protocol sufficient
for an implementer; I would argue it would be so opaque as to be
insufficient.

>> Are you one of those
>> people that talk about "100 percent" test coverage?
>
>No, in software, there is always the unexpected. But I do strive to
>minimise the unexpected through test coverage.

Test coverage is neither necessary nor sufficient for minimizing
the unexpected. Unit tests are useful as an easy to use and
easy to understand communication medium for verifying that
software behaves in an expected way in the context of a highly
controlled testing environment, but of course there are other
ways to do that. However, often those ways aren't easy to use
without special training; testing provides a nice balance as a
platform for shaking a lot of bugs out of the tree. This is
presumably why it is so popular.

However, it is manifestly insufficient: suppose I have a bit of
code that is susceptible to perturbations of various kinds in
the execution environment: for example, it may be subject to
tight timing guarantees. How does one probe that in the context
of a unit test? Or perhaps I need some guarantee of time
complexity for an algorithm as applied to some user-provided
data. Unit testing is awkward at best for this purpose.

>> Do you write
>> tests for all your accessors, the way Robert Martin and Kent Beck do
>> in their books?
>
>Trivial accessors, no and neither do they.
>
>> Do you think that is the best use of your time for
>> keeping your software free of defects?
>
>Yes because it helps me write code faster and better. Maybe not
>initially, but the time saved in chasing down bugs, retesting after
>changes or stepping through code in a debugger pays off pretty quickly.
>Tests are an investment in the future of the code.

This conflates the utility of a robust body of unit tests with
the practice of TDD. The former is undeniably useful; but the
latter is only useful as a means to the former and it is
trivially not the only such means.

>> A few points.
>>
>> One, for a non-trivial specification, it would presumably take an
>> arbitrarily large number of tests, in some cases an infinite number,
>> to cover off all outcomes. Since nobody has that much time, we
>> sample, sometimes randomly, sometimes deterministically, e.g. edge
>> cases. It's nonsensical to talk about "100" percent coverage unless
>> you're talking about software verified through mathematical proof,
>> and if you're talking about that, you're not talking about C++.
>
>I don't think anyone claims otherwise. See the last point in the link I
>posted.

Proponents of TDD in the Martin, Beck, Jeffries sense really do
try and push it too far. I wrote about this some time ago:
http://pub.gajendra.net/2012/09/stone_age

See, for example, Martin's comments here:
https://sites.google.com/site/unclebobconsultingllc/home/articles/echoes-from-the-stone-age

(The first comment on his page is pretty spot-on.)

See also this post, and look for Martin's comment:
http://ravimohan.blogspot.com/2007/04/learning-from-sudoku-solvers.html

There, Martin tries to make a vague analogy connecting Ron
Jeffries's failure to produce a working sudoku solver to kind of
martial arts mysticism, wherein Jeffries is a venerable master
engaging in a mysterious kind of exercise that's so abstruse as
to be essentially unknowable to all but the most senior of the
initiated. I call shenanigans: Ron Jeffries simply didn't know
what he was doing and while yes, it took some courage to fail
publicly, the inescapable conclusion is that TDD is insufficient
as a design paradigm.

Peter Seibel's blog post is required reading on this debacle:
https://gigamonkeys.wordpress.com/2009/10/05/coders-unit-testing/

>> Two, there is a significant amount of validation that cannot be
>> approached through unit tests, I'm thinking particularly of
>> simulation, where we need to run code for hours and calculate the
>> statistical properties of outcomes.
>
>That's correct, there should always be multiple levels of testing, again
> I don't think anyone claims otherwise.

They most certainly do; repeatedly and in public. It might be
that they do not believe it themselves, and are only saying so
because they are (most commonly) consultants and authors who are
trying to sell engagements and books.

But there's a certain kind of cultish devotion that arises from
this that's unsettling: it's the sort of fervent devotion of the
converted, who claim that, "if you just genuflected THIS way..."
then all would be well.

The bottom line: unit testing seems to sit at a local maxima on
the cost/benefit curve, but there are many ways to get similar
quality results. However, unit testing is insufficient to
ensure that software is correct and responsive in all respects
and TDD, in particular, seems like something of an oversold fad.

- Dan C.

James Kuyper

Jun 4, 2018, 2:11:46 PM
On 06/04/2018 01:54 PM, Mr Flibble wrote:
> On 04/06/2018 18:34, James Kuyper wrote:
...
>> So perform a test that takes them together with the other private member
>> functions (which must also, of course, be invoked indirectly). What's
>> the problem? The important thing isn't how the code is subdivided into
>> individual functions, whether public or private. The important thing is
>> what the code actually does. If you can stub out a private member
>> function without producing any testable failure to meet requirements,
>> you don't need that function to meet requirements - it's inherently and
>> by definition that simple.
>
> In a word: bullshit. Private member functions don't have to be testable in
> isolation BY DESIGN but that doesn't mean they can removed. If you don't
> get this then you are beyond help: your designs must be a mess.

So, don't test them in isolation. Just because they can only be tested
in conjunction with other functions doesn't mean they can't be tested.

>> For example, consider a private function which is intended to meet the
>> requirement that it cause a visit to the user by the Invisible Pink Unicorn
>> <https://en.wikipedia.org/wiki/Invisible_Pink_Unicorn>. By Her very
>> nature, it's inherently impossible to prove whether or not She has
>> visited, so there would be no testable consequences of stubbing out that
>> function. But for precisely the same reason, there's no problem with
>> delivering that code with the function still stubbed out.
>
> I've seen a lot of bad analogies in my day but that just takes the
> biscuit. You really are clueless mate. I suggest you do a bit of reading
> starting with a read about class invariants and encapsulation.

I'll have a better idea of what you mean by that claim with a specific
example. Please give me the simplest example you can create of a
requirement, and a complete program meeting that requirement, which
makes use of a private member function whose behavior is necessary to
meet requirement, where the entire program's ability to meet that
requirement is untestable.

Daniel

Jun 4, 2018, 3:13:41 PM
On Monday, June 4, 2018 at 2:02:07 PM UTC-4, Dan Cross wrote:
> In article <fnj2ar...@mid.individual.net>,
> Ian Collins <ian-...@hotmail.com> wrote:
>
> Proponents of TDD in the Martin, Beck, Jeffries sense really do
> try and push it too far.

Agreed
>
> >That's correct, there should always be multiple levels of testing, again
> > I don't think anyone claims otherwise.
>
> They most certainly do; repeatedly and in public. It might be
> that they do not believe it themselves, and are only saying so
> because they are (most commonly) consultants and authors who are
> trying to sell engagements and books.
>
I can believe that about Boston Consulting Group, Accenture, IBM etc
promoting Agile, but not Robert Martin :-) Having had the pleasure
of communicating with him on various usenet groups since his C++ Report
days, I don't believe that he would express an opinion about any matter that
he didn't actually hold.
>
> The bottom line: unit testing seems to sit at a local maxima on
> the cost/benefit curve, but there are many ways to get similar
> quality results. However, unit testing is insufficient to
> ensure that software is correct and responsive in all respects
> and TDD, in particular, seems like something of an oversold fad.
>
Agreed, on all counts.

Daniel

Richard

Jun 4, 2018, 3:46:01 PM
[Please do not mail me a copy of your followup]

cr...@spitfire.i.gajendra.net (Dan Cross) spake the secret code
<pf3uql$e4b$1...@reader1.panix.com> thusly:

>The bottom line: unit testing seems to sit at a local maxima on
>the cost/benefit curve [...]

Uh.... citation needed.

Seriously.

There are actual academic studies among programmers doing comparable
work with and without TDD. The ones using TDD got to completeness
first faster on average than those that didn't use TDD.

Mr Flibble

Jun 4, 2018, 3:47:32 PM
On 04/06/2018 20:45, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> cr...@spitfire.i.gajendra.net (Dan Cross) spake the secret code
> <pf3uql$e4b$1...@reader1.panix.com> thusly:
>
>> The bottom line: unit testing seems to sit at a local maxima on
>> the cost/benefit curve [...]
>
> Uh.... citation needed.
>
> Seriously.
>
> There are actual academic studies among programmers doing comparable
> work with and without TDD. The ones using TDD got to completeness
> first faster on average than those that didn't use TDD.

You think being code complete faster means higher quality? Seriously?

I think you need a cluebat mate.

Scott Lurndal

Jun 4, 2018, 3:51:06 PM
legaliz...@mail.xmission.com (Richard) writes:
>[Please do not mail me a copy of your followup]
>
>cr...@spitfire.i.gajendra.net (Dan Cross) spake the secret code
><pf3uql$e4b$1...@reader1.panix.com> thusly:
>
>>The bottom line: unit testing seems to sit at a local maxima on
>>the cost/benefit curve [...]
>
>Uh.... citation needed.
>
>Seriously.
>
>There are actual academic studies among programmers doing comparable
>work with and without TDD. The ones using TDD got to completeness
>first faster on average than those that didn't use TDD.

Uh..... citation needed.

Seriously.

Ian Collins

Jun 4, 2018, 4:50:26 PM
On 04/06/18 23:22, leigh.v....@googlemail.com wrote:
> On Monday, June 4, 2018 at 8:49:54 AM UTC+1, Ian Collins wrote:
>> On 04/06/18 19:12, Juha Nieminen wrote:
>>
>>
>>> Another thing I have noticed about TDD is that it's difficult to
>>> test the behavior of private member functions (without exposing
>>> them to outside code, breaking object-oriented modularity
>>> design.) Essentially you would need to add testing code to the
>>> class itself, which I think isn't something TDD is supposed to
>>> do.
>>
>> If the functionality can't be tested through the public interface,
>> the usual approach (which often occurs when working with existing
>> code) is to extract the private member function functionality into
>> a class and test that. This does, surprisingly often, produce a
>> useful reusable component.
>
> Reducing encapsulation in the process.

No, extracting functionality performed by private methods does not
reduce encapsulation.

For example if a private method performs a transformation on private
data, say rotation of a point about an axis, how does extracting that
transformation break encapsulation? The transform on its own has no
impact on the parent class state, you need the transformation + data to
do that. All you end up with is a tested transformation which can be
used elsewhere if needed.
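The rotation example above can be sketched as a free function (a minimal sketch; the names and signatures are invented, not from any particular codebase). The transformation operates on plain data and is testable on its own, without touching any class's private state:

```cpp
#include <cassert>
#include <cmath>

// A point in the plane; plain data, no invariants to protect.
struct Point {
    double x;
    double y;
};

// Rotate a point about the origin by 'radians', counter-clockwise.
// Formerly a private member function; now a free function that can
// be unit tested (and reused) directly.
Point rotate(Point p, double radians) {
    const double c = std::cos(radians);
    const double s = std::sin(radians);
    return {p.x * c - p.y * s, p.x * s + p.y * c};
}
```

The class that previously owned the private method simply calls `rotate` on its own private data; nothing about that data is exposed by the extraction.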

> I have said before that TDD is
> the enemy of encapsulation and therefore by extension an enemy of
> good design as encapsulation is an important design principle.

So you have, like a lot of things you say you have yet to prove it.

Here's a counter claim: obsessive encapsulation is the enemy of
modularisation and therefore by extension an enemy of good design as
modularisation is an important design principle.

--
Ian.

Ian Collins

Jun 4, 2018, 4:56:58 PM
This is an important point that people who target individual agile
practices fail to acknowledge: the practices need to be used together;
you can't just adopt one in isolation and expect the best results. You
can't just rely on unit tests in isolation, whether they are added after
the fact or through TDD. You need another layer (often two - automated
system tests and user testing) to be sure your product is ready for market.

--
Ian.

leigh.v....@googlemail.com

Jun 4, 2018, 5:03:15 PM
On Monday, June 4, 2018 at 9:50:26 PM UTC+1, Ian Collins wrote:
> No, extracting functionality performed by private methods does not
> reduce encapsulation.

False. An increase in the number of public member variables across one or more classes represents a corresponding decrease in encapsulation. This is basic stuff you should know already.

/Leigh

Mr Flibble

unread,
Jun 4, 2018, 5:13:21 PM6/4/18
to
Minor mistake there (wrote it on my phone): I meant "functions" not
"variables"; so:

An increase in the number of public member functions across one or more
classes represents a corresponding decrease in encapsulation. This is
basic stuff you should know already.

Ian Collins

Jun 4, 2018, 5:20:18 PM
On 05/06/18 09:13, Mr Flibble wrote:
> On 04/06/2018 22:03, leigh.v....@googlemail.com wrote:
>> On Monday, June 4, 2018 at 9:50:26 PM UTC+1, Ian Collins wrote:
>>> No, extracting functionality performed by private methods does not
>>> reduce encapsulation.
>>
>> False. An increase in the number of public member variables across one or more classes represents a corresponding decrease in encapsulation. This is basic stuff you should know already.
>
> Minor mistake there (wrote it on my phone): I meant "functions" not
> "variables"; so:
>
> An increase in the number of public member functions across one or more
> classes represents a corresponding decrease in encapsulation. This is
> basic stuff you should know already.

I don't think you (deliberately or otherwise) missed and snipped my
point. I didn't say add more public member functions, I said extract
the functionality into its one object (which could be a free function).

--
Ian.

Ian Collins

Jun 4, 2018, 5:28:26 PM
Need coffee.

I think you (deliberately or otherwise) missed and snipped my point

.... to its own object ...

--
Ian.

Dan Cross

Jun 4, 2018, 6:07:33 PM
In article <pf44te$itg$1...@news.xmission.com>, Richard <> wrote:
>[Please do not mail me a copy of your followup]
>
>cr...@spitfire.i.gajendra.net (Dan Cross) spake the secret code
><pf3uql$e4b$1...@reader1.panix.com> thusly:
>
>>The bottom line: unit testing seems to sit at a local maxima on
>>the cost/benefit curve [...]
>
>Uh.... citation needed.
>
>Seriously.
>
>There are actual academic studies among programmers doing comparable
>work with and without TDD. The ones using TDD got to completeness
>first faster on average than those that didn't use TDD.

...and there are conflicting reports showing that TDD leads to
slower task completion, albeit with fewer defects (Nagappan et
al.).

But note that I said *testing*, not *TDD*: the two are not
synonymous.

- Dan C.

Ross A. Finlayson

Jun 4, 2018, 6:11:25 PM
On Friday, June 1, 2018 at 11:29:00 PM UTC-7, Paavo Helde wrote:
> On 2.06.2018 5:17, Mr Flibble wrote:
> > I see Ian Collins is on his TDD high horse again. Fecking clueless.
>
> TDD is fine as long as it stays a tool and does not become a goal in
> itself. One still has to know what the program must do and how to write
> the code to accomplish that. TDD does not mean randomly alternating your
> code to make the next test pass (for starters, that would require 100%
> perfect tests which never happens). And if any TDD method appears
> counter-productive or not effective in some situation, it can be skipped
> or replaced (because it is just a tool).
>
> The same can be said for any tool or methodology of course, but it looks
> like TDD is especially prone to misunderstandings and misuse for some
> reason (it cannot beat "agile" of course ;-)

TDD? Only as strong as the formal model
(and the space of its inputs), usually
of which there is none.

TDD and coverage is a nice safety net
for coders who can't be bothered
to read most of the code.

It's difficult to achieve 100% coverage
because most of the interactions (of the
"units" together) isn't "designed for test".

Thus it's nice sometimes when library vendors
surface mocks.

Tests are _part_ of coverage then often
and usually for runtime coverage.

Of course having tests saves many
production systems from immature changes.

Juha Nieminen

Jun 5, 2018, 1:46:26 AM
leigh.v....@googlemail.com wrote:
> Reducing encapsulation in the process. I have said before that TDD
> is the enemy of encapsulation and therefore by extension an enemy of
> good design as encapsulation is an important design principle.

Not really. Sometimes it can increase encapsulation and abstraction, in a
potentially positive way.

For example, you might have a class that takes a std::ostream or FILE*
directly. However, for TDD you usually need to replace that with a more
abstract custom wrapper class. Which in the end increases abstraction
and modularity, and might be beneficial if you ever need the class to
write its result to something other than a stream.
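A minimal sketch of that wrapper idea (all names are invented for illustration): instead of taking `std::ostream` directly, the class writes through a small interface, so a test can substitute an in-memory sink.

```cpp
#include <cassert>
#include <ostream>
#include <string>

// Abstraction over "somewhere bytes go". Illustrative names only.
class Sink {
public:
    virtual ~Sink() = default;
    virtual void write(const std::string& data) = 0;
};

// Production implementation: forwards to a real stream.
class OstreamSink : public Sink {
public:
    explicit OstreamSink(std::ostream& os) : os_(os) {}
    void write(const std::string& data) override { os_ << data; }
private:
    std::ostream& os_;
};

// Test double: records everything written, no real stream needed.
class MemorySink : public Sink {
public:
    void write(const std::string& data) override { captured += data; }
    std::string captured;
};

// The class under test depends only on the abstraction.
class Report {
public:
    explicit Report(Sink& sink) : sink_(sink) {}
    void emit(const std::string& line) { sink_.write(line + "\n"); }
private:
    Sink& sink_;
};
```

The same `Report` can later target a socket or a file by adding another `Sink` implementation, which is the modularity gain being claimed.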

Juha Nieminen

Jun 5, 2018, 1:53:04 AM
James Kuyper <james...@alumni.caltech.edu> wrote:
>> Another thing I have noticed about TDD is that it's difficult to test
>> the behavior of private member functions (without exposing them to
> outside code, breaking object-oriented modularity design.)
>
> You test the private member function by calling whichever public
> functions call it, directly or indirectly. If execution of the private
> member function is needed to achieve some required result, it's always
> possible to perform such a test, by checking to see whether that
> required result occurred. If execution of a private member function has
> no testable consequences, then that private member function isn't
> actually needed.

The private implementation of a class may be quite extensive and complex,
and the whole idea of TDD is to write unit tests for every single small
component and function. If you have a test for something that involves
dozens of functions and data structures, it makes it more difficult to
find where exactly the problem might be, in the case of a failed test.

The whole idea of TDD is to make it more obvious where the bug might be,
which is why unit tests are written for every single function and
component.

As a hypothetical example, suppose you have a class that can write and
read JPEG files. It only has a minimum public interface to do that, and
pretty much nothing else, but the implementation inside is really huge.
So you write a unit test that writes a trivial image, and then tries to
read it again and compare the result, and it turns out it's different.
Great, you caught a bug. But where is it? The class might consist of
thousands of lines of code, and hundreds of functions and data
structures. None of which ought to be exposed to the outside.

Ian Collins

Jun 5, 2018, 2:44:47 AM
On 05/06/18 06:01, Dan Cross wrote:
> In article <fnj2ar...@mid.individual.net>,
> Ian Collins <ian-...@hotmail.com> wrote:
>> On 04/06/18 01:26, Daniel wrote:
>>> On Friday, June 1, 2018 at 11:00:51 PM UTC-4, Ian Collins wrote:
>>>> On 02/06/18 14:17, Mr Flibble wrote:
>>>>> I see Ian Collins is on his TDD high horse again. Fecking
>>>>> clueless.
>>>>
>>>> Feel free to write code that you don't know works.
>>>>
>>> Are you suggesting that TDD is the one and only way to write code
>>> that you know "works"?
>>
>> No.
>>
>>> Works in what sense?
>>
>> Works as in does what the detailed design (the tests) says it is
>> supposed to do.
>
> "Test as design" is an attempt to respond to the reality that a
> detailed design document (in the waterfall sense) is destined to
> become hopelessly out of date in very short order. But it does
> not follow that this is the only way to document a "design", nor
> that it is even a good way to do so (let alone sufficient for
> long-term maintenance). Tests can certainly perform some
> documentary function, but they fall far short of the sort of
> design that may be required for non-trivial software.

Agreed, non-trivial software tends to have layers of design, just like
it has layers of testing.

> The example was brought up of implementing IETF standards; in
> that case, an RFC is sort of like a design document for a
> protocol (yes, there are RFCs for other sorts of things, but
> let's keep it simple). I hope we could all agree that if a
> protocol were described by a series of tests, that would be a
> poor way to communicate a description of the protocol sufficient
> for an implementer; I would argue it would be so opaque as to be
> insufficient.

Agreed.

>>> Are you one of those
>>> people that talk about "100 percent" test coverage?
>>
>> No, in software, there is always the unexpected. But I do strive to
>> minimise the unexpected through test coverage.
>
> Test coverage is neither necessary nor sufficient for minimizing
> the unexpected. Unit tests are useful as an easy to use and
> easy to understand communication medium for verifying that
> software behaves in an expected way in the context of a highly
> controlled testing environment, but of course there are other
> ways to do that. However, often those ways aren't easy to use
> without special training; testing provides a nice balance as a
> platform for shaking a lot of bugs out of the tree. This is
> presumably why it is so popular.

Not just special training, but also more time. The sooner something is
tested after it is written, the sooner a defect can be rectified.

> However, it is manifestly insufficient: suppose I have a bit of
> code that is succeptible to perturbations of various kinds in
> the execution environment: for example, it may be subject to
> tight timing guarantees. How does one probe that in the context
> of a unit test? Or perhaps I need some guarantee of time
> complexity for an algorithm as applied to some user-provided
> data. Unit testing is awkward at best for this purpose.

Agreed, unit tests test the functionality of an algorithm, not its
performance. They do aid the task of performance driven internal
refactoring. You have the confidence to tune having minimised the risk
of breaking the functionality.

>>> Do you write
>>> tests for all your accessors, the way Robert Martin and Kent Beck do
>>> in their books?
>>
>> Trivial accessors, no and neither to they.
>>
>>> Do you think that is the best use of your time for
>>> keeping your software free of defects?
>>
>> Yes because it helps me write code faster and better. Maybe not
>> initially, but the time saved in chasing down bugs, retesting after
>> changes or stepping through code in a debugger pays off pretty quickly.
>> Tests are an investment in the future of the code.
>
> This conflates the utility of a robust body of unit tests with
> the practice of TDD. The former is undeniably useful; but the
> latter is only useful as a means to the former and it is
> trivially not the only such means.

It is a means that makes unit tests more enjoyable to write. Most
programmers I know find writing tests after the fact tedious in the
extreme, especially so if the code has to be modified to facilitate
testing.

>>> A few points.
>>>
>>> One, for a non-trivial specification, it would presumably take an
>>> arbitrarily large number of tests, in some cases an infinite number,
>>> to cover off all outcomes. Since nobody has that much time, we
>>> sample, sometimes randomly, sometimes deterministically, e.g. edge
>>> cases. It's nonsensical to talk about "100" percent coverage unless
>>> you're talking about software verified through mathematical proof,
>>> and if you're talking about that, you're not talking about C++.
>>
>> I don't think anyone claims otherwise. See the last point in the link I
>> posted.
>
> Proponents of TDD in the Martin, Beck, Jeffries sense really do
> try and push it too far. I wrote about this some time ago:
> http://pub.gajendra.net/2012/09/stone_age

One thing I have learned since picking up TDD from Beck in the early
2000s is to be pragmatic! Pushing too hard in a new environment is a
shortcut to failure.

> Peter Seibel's blog post is required reading on this debacle:
> https://gigamonkeys.wordpress.com/2009/10/05/coders-unit-testing/
>
>>> Two, there is a significant amount of validation that cannot be
>>> approached through unit tests, I'm thinking particularly of
>>> simulation, where we need to run code for hours and calculate the
>>> statistical properties of outcomes.
>>
>> That's correct, there should always be multiple levels of testing, again
>> I don't think anyone claims otherwise.
>
> They most certainly do; repeatedly and in public. It might be
> that they do not believe it themselves, and are only saying so
> because they are (most commonly) consultants and authors who are
> trying to sell engagements and books.

I'll have to take your word on that. The agile process I most closely
follow, XP, certainly does include acceptance testing. I've never
released a product without it.

> But there's a certain kind of cultish devotion that arises from
> this that's unsettling: it's the sort of fervent devotion of the
> converted, who claim that, "if you just genuflected THIS way..."
> then all would be well.

I'm averse to cults...

> The bottom line: unit testing seems to sit at a local maxima on
> the cost/benefit curve, but there are many ways to get similar
> quality results. However, unit testing is insufficient to
> ensure that software is correct and responsive in all respects
> and TDD, in particular, seems like something of an oversold fad.

That part I strongly dispute.

Unit testing offers significant cost benefits, mainly from time saved in
identifying and fixing defects. Fads don't last 15+ years...

--
Ian.


Dan Cross

Jun 5, 2018, 10:19:24 AM
In article <fnmpqh...@mid.individual.net>,
Ian Collins <ian-...@hotmail.com> wrote:
>On 05/06/18 06:01, Dan Cross wrote:
>[snip]
>> The bottom line: unit testing seems to sit at a local maxima on
>> the cost/benefit curve, but there are many ways to get similar
>> quality results. However, unit testing is insufficient to
>> ensure that software is correct and responsive in all respects
>> and TDD, in particular, seems like something of an oversold fad.
>
>That part I strongly dispute.
>
>Unit testing offer significant cost benefits, mainly from times saved in
>identifying and fixing defects. Fads don't last 15+ years...

Hmm, this is the second time that someone's commented on this
part of my post in a way that makes it sound as if I'm saying
that *unit testing* is not worthwhile.

On re-reading my post, it seems what I wrote about *unit
testing* is backwards of what I meant: in particular, when I
wrote that "unit testing seems to sit at a local maxima on the
cost/benefit curve", what I probably should have written was
"benefit/cost curve." That is, unit testing seems to give you
the largest benefit for the least cost in some local part of
the curve that sits very near where most programmers work.

My apologies if that wasn't clear.

That said, I have yet to see a strong argument that TDD is the
most efficient way to get a robust body of tests.

On a side note, I'm curious what your thoughts are on this:
https://pdfs.semanticscholar.org/7745/5588b153bc721015ddfe9ffe82f110988450.pdf

- Dan C.

Dan Cross

Jun 5, 2018, 10:23:33 AM
In article <b1b6d2f3-f4cd-4a66...@googlegroups.com>,
Daniel <daniel...@gmail.com> wrote:
>On Monday, June 4, 2018 at 2:02:07 PM UTC-4, Dan Cross wrote:
>> They most certainly do; repeatedly and in public. It might be
>> that they do not believe it themselves, and are only saying so
>> because they are (most commonly) consultants and authors who are
>> trying to sell engagements and books.
>>
>I can believe that about Boston Consulting Group, Accenture, IBM etc
>promoting Agile, but not Robert Martin :-) Having had the pleasure
>of communicating with him on various usenet groups since his C++ Report
>days, I don't believe that he would express an opinion about any matter that
>he didn't actually hold.

I don't know him, save for having read several of his books. I
trust your judgement of the man, though I disagree with him on
many technical points. The few times I've sent comments his way
(all on technical matters) he's not responded.

- Dan C.

Richard

Jun 5, 2018, 12:25:51 PM
[Please do not mail me a copy of your followup]

Ian Collins <ian-...@hotmail.com> spake the secret code
<fnmpqh...@mid.individual.net> thusly:
Indeed.

I don't know if Uncle Bob's strident position on TDD brings over
more people than it alienates. In my personal observation, it does
both and which outcome you get seems more correlated with the person
reacting to Uncle Bob's approach more than anything else. Speaking
for myself, I simply state the benefits that I personally have
obtained from TDD and recommend it for those reasons. For many years
(20+) I programmed without TDD and was good at it. In the years since
adopting TDD (10+), I've found myself more productive than I was
without TDD.

In the end, it's simply a recommendation. Any reader is free to
ignore it if they wish. If they want to learn how to do TDD
effectively, I'm happy to show them in workshops, etc.

What has consistently surprised me over the years since adopting TDD
is the number of people who feel compelled to erect straw man
arguments against TDD in order to "prove" that it is a waste of time.

In reading through some of the links on this thread, I did find it
incredibly amusing to read how Ron Jeffries blundered through a Sudoku
solver and managed to get sidetracked on a tangent without ever
completing a finished solver. I've bumped into him once or twice on
some mailing lists and found that I didn't really agree with him on
things and so I wasn't surprised he floundered. Maybe that is just my
own confirmation bias, or a little schadenfreude on my part.

I don't take this example as a refutation of TDD, simply as more data
that Ron Jeffries can be an idiot sometimes.

Richard

Jun 5, 2018, 12:27:41 PM
[Please do not mail me a copy of your followup]

cr...@spitfire.i.gajendra.net (Dan Cross) spake the secret code
<pf6652$ch2$1...@reader1.panix.com> thusly:

>In article <fnmpqh...@mid.individual.net>,
>Ian Collins <ian-...@hotmail.com> wrote:
>>On 05/06/18 06:01, Dan Cross wrote:
>>[snip]
>>> The bottom line: unit testing seems to sit at a local maxima on
>>> the cost/benefit curve, but there are many ways to get similar
>>> quality results. However, unit testing is insufficient to
>>> ensure that software is correct and responsive in all respects
>>> and TDD, in particular, seems like something of an oversold fad.
>>
>>That part I strongly dispute.
>>
>>Unit testing offer significant cost benefits, mainly from times saved in
>>identifying and fixing defects. Fads don't last 15+ years...
>
>Hmm, this is the second time that someone's commented on on this
>part of my post in a way that makes it sound as if I'm saying
>that *unit testing* is not worthwhile.
>
>On re-reading my post, it seems what I wrote about *unit
>testing* is backwards of what I meant: in particular, when I
>wrote that "unit testing seems to sit at a local maxima on the
>cost/benefit curve", what I probably should have written,
>"benefit/cost curve."

You do realize that if you invert the ratio, it completely reverses
the original point?

Rosario19

Jun 5, 2018, 2:14:14 PM
On Sun, 3 Jun 2018 08:22:31 +1200, Ian Collins wrote:

>On 03/06/18 07:53, Mr Flibble wrote:
>> On 02/06/2018 04:00, Ian Collins wrote:
>>> On 02/06/18 14:17, Mr Flibble wrote:
>>>> I see Ian Collins is on his TDD high horse again.  Fecking clueless.
>>>
>>> Feel free to write code that you don't know works.
>>
>> Designing software by incrementally fixing failing tests is NOT designing
>> software.
>
>True.
>
>https://www.thinksys.com/development/tdd-myths-misconceptions/
>
>Point 3.

I'm no one, I wrote nothing good etc., but I think defensive programming
is the way:
write every function full of checks, even for things that would never
happen.

Dan Cross

Jun 5, 2018, 2:14:22 PM
In article <pf6dlj$qjv$2...@news.xmission.com>, Richard <> wrote:
>[Please do not mail me a copy of your followup]
>[snip]
>cr...@spitfire.i.gajendra.net (Dan Cross) spake the secret code
><pf6652$ch2$1...@reader1.panix.com> thusly:
>>Hmm, this is the second time that someone's commented on on this
>>part of my post in a way that makes it sound as if I'm saying
>>that *unit testing* is not worthwhile.
>>
>>On re-reading my post, it seems what I wrote about *unit
>>testing* is backwards of what I meant: in particular, when I
>>wrote that "unit testing seems to sit at a local maxima on the
>>cost/benefit curve", what I probably should have written,
>>"benefit/cost curve."
>
>You do realize that if you invert the ratio, it completely reverses
>the original point?

Yes, hence my clarifying response. People make such mistakes in
this kind of informal communication all the time and, given
the context of the rest of my post (and even that paragraph),
I'm a bit surprised that you and Ian didn't pick up on the
intent; others seemed to grok it and how one interprets it is,
after all, a sign convention.

But that's neither here nor there.

To put it more succinctly: unit testing is good, as it's a low
cost way to get relatively high quality results. But it has its
limitations and shouldn't be regarded as more than one tool out
of a box of many. The value of TDD is far more subjective and
largely unrelated to the value of unit testing, though the two
are often incorrectly conflated. Many of the most strident
proponents of TDD overstate its usefulness and attempt to jam it
into situations it is not well suited for: it is no substitute
for design or algorithm analysis, both of which it is (in my
opinion) poorly suited for.

- Dan C.

Rosario19

Jun 5, 2018, 2:18:29 PM
What I do now is two-phase programming:

1) write everything as I think it should be, with no testing other than
compiling that code (with the code checking itself to see that
arguments are all OK, plus some checks inside)

2) debug the code with a debugger and adjust everything, until it seems
to run OK

James Kuyper

Jun 5, 2018, 2:36:36 PM
On 06/05/2018 01:52 AM, Juha Nieminen wrote:
> James Kuyper <james...@alumni.caltech.edu> wrote:
>>> Another thing I have noticed about TDD is that it's difficult to test
>>> the behavior of private member functions (without exposing them to
>>> outside code, breaking object-oriented modularity design.
>>
>> You test the private member function by calling whichever public
>> functions call it, directly or indirectly. If execution of the private
>> member function is needed to achieve some required result, it's always
>> possible to perform such a test, by checking to see whether that
>> required result occurred. If execution of a private member function has
>> no testable consequences, then that private member function isn't
>> actually needed.
>
> The private implementation of a class may be quite extensive and complex,
> and the whole idea of TDD is to write unit tests for every single small
> component and function. If you have a test for something that involves
> dozens of functions and data structures, it makes it more difficult to
> find where exactly the problem might be, in the case of a failed test.

Tests are for determining whether requirements are being met. Use a
debugger when the goal is to determine why a requirement is not being
met, and private member functions are no barrier to using a debugger for
that purpose.

> The whole idea of TDD is to make it more obvious where the bug might be,
> which is why unit tests are written for every single function and
> component.

I don't know enough about TDD to address that claim, I was only
addressing the claim that you have to violate the privacy of private
member functions to test whether code meets its requirements.

Richard

Jun 5, 2018, 3:47:23 PM
[Please do not mail me a copy of your followup]

cr...@spitfire.i.gajendra.net (Dan Cross) spake the secret code
<pf6jtk$n6t$1...@reader1.panix.com> thusly:

>To put it more succinctly: unit testing is good, as it's a low
>cost way to get relatively high quality results. But it has its
>limitations and shouldn't be regarded as more than one tool out
>of a box of many.

I can sign up to that; we're in agreement here.

>The value of TDD is far more subjective and
>largely unrelated to the value of unit testing, though the two
>are often incorrectly conflated.

I'd agree with that too. I would add that unit testing falls out
naturally from TDD, while my other attempts to write unit tests
resulted in really fragile tests and other things that left me feeling
like the unit test was just a "tax" on development without providing
me any benefit.

>Many of the most strident
>proponents of TDD overstate its usefulness and attempt to jam it
>into situations it is not well suited for: it is no substitute
>for design or algorithm analysis, both of which it is (in my
>opinion) poorly suited for.

Can you show me a TDD proponent that *is* saying it is a substitute
for design or algorithm analysis?

I've been around many TDD proponents and none of them have ever said
this.

Richard

unread,
Jun 5, 2018, 3:49:56 PM6/5/18
to
[Please do not mail me a copy of your followup]

James Kuyper <james...@alumni.caltech.edu> spake the secret code
<pf6l78$ji7$1...@dont-email.me> thusly:
The general idea around TDD making it more obvious where a bug is
located is that in each cycle you start from a known working
implementation -- to the extent that it is responding to existing
tests. When you write a new test and make that pass, the amount of
code that has changed in your implementation is generally very small,
so pinpointing the location of any newly introduced mistake is fairly
trivial.

This is the main benefit I get from TDD; the time between making the
bug and finding/fixing the bug is reduced to seconds instead of days,
weeks, months, years, or possibly never.

Vir Campestris

Jun 5, 2018, 6:03:02 PM
On 05/06/2018 20:49, Richard wrote:
> This is the main benefit I get from TDD; the time between making the
> bug and finding/fixing the bug is reduced to seconds instead of days,
> weeks, months, years, or possibly never.

I appreciate the value of unit tests. Where I disagree with Ron
Jeffries et al. is the turn-it-up-to-11 Extreme Programming thing -
where you have to have absolute faith that your unit tests will catch
every bug.

Which I do not believe is possible, especially for timing problems (HW
interfaces and multi-threading).

I'm looking at a bug at the moment that fails one time in three.

I once had a bug that hung around for 18 months. Once we understood it
we could set up the correct environment, and it would fail most days.
Not every day though. No reasonable unit test can be run that long.

Andy
--
If you're interested: Code was a disc interface. It went:

Main code:
Start transfer;
Start timer;
Wait for completion.

Disc interrupt routine:
stop timer;
wake main code.

Timer expiry routine:
Set error flag.

The error path was to interrupt the main code on a scheduler interrupt
after starting the transfer, so that it did not get control back until
the IO had completed. The timer was then started after it should have
been stopped, and left running. If there were no other IOs to reset the
timer before it expired the error flag was set - and the _next_ transfer
failed. And the fix was to start the timer before the transfer. Darn,
that was 30 years ago and I still recall it clearly!

Richard

Jun 5, 2018, 6:24:17 PM
[Please do not mail me a copy of your followup]

Vir Campestris <vir.cam...@invalid.invalid> spake the secret code
<pf71ae$6nf$1...@dont-email.me> thusly:

>On 05/06/2018 20:49, Richard wrote:
>> This is the main benefit I get from TDD; the time between making the
>> bug and finding/fixing the bug is reduced to seconds instead of days,
>> weeks, months, years, or possibly never.
>
>I appreciate the value of unit tests. Where I disagree with Ron
>Jeffreys et. al. is the turn-it-up-to-11 Extreme Programming thing -
>where you have to have absolute faith that your units tests will catch
>every bug.

In every incarnation of XP that I've worked in/discussed with colleagues,
unit testing was not the only level of automated testing. There were
also acceptance tests at a minimum and usually some sort of performance
or resource consumption oriented test. Human-driven exploratory testing
would also be used for things like usability. Leave the repetitive
stuff to the computer and allow your human testers to test the stuff
that computers aren't good at.

I've never seen a presentation by an agile advocate that asserted that
unit tests alone were enough. "Growing Object-Oriented Software,
Guided by Tests" by Steve Freeman and Nat Pryce <https://amzn.to/2JnRruf>
makes it quite clear through all the worked examples that you need
acceptance tests to push "from above" and unit tests to push "from below",
to squish bugs out in the middle. This is one problem with the small
examples of TDD that are used to demonstrate the practice -- they tend to
be so small that only unit tests are shown and no discussion of acceptance
tests is brought into the picture, leaving one with the false impression
that TDD is only about unit tests. If you haven't read the above book,
I would recommend it.

>Which I do not believe is possible, especially for timing problems (HW
>interfaces and multi-threading).

Timing and synchronization stuff is notoriously hard to test; it
exhibits the "heisenbug" quality in automated test suites and is too
tedious and boring to probe manually.

>I once had a bug that hung around for 18 months. Once we understood it
>we could set up the correct environment, and it would fail most days.
>Not every day though. No reasonable unit test can be run that long.

Agreed. This would fall under the category of "system test" at places
where I've worked.

Ian Collins

Jun 5, 2018, 10:52:52 PM
From my personal experience, the best argument has been the dislike
programmers have for writing tests for the sake of writing tests. I have
had a number of colleagues who would, if they could get away with it,
flat out refuse to write tests; that was the tester's job. They were
however quite happy to adopt TDD because the tests were part of writing
the code... Go figure!

> On a side note, I'm curious what your thoughts are on this:
> https://pdfs.semanticscholar.org/7745/5588b153bc721015ddfe9ffe82f110988450.pdf

I'll parse and comment back later.

--
Ian.

Paavo Helde

Jun 6, 2018, 7:51:31 AM
On 6.06.2018 1:02, Vir Campestris wrote:
> On 05/06/2018 20:49, Richard wrote:
>> This is the main benefit I get from TDD; the time between making the
>> bug and finding/fixing the bug is reduced to seconds instead of days,
>> weeks, months, years, or possibly never.
>
> I appreciate the value of unit tests. Where I disagree with Ron
> Jeffreys et. al. is the turn-it-up-to-11 Extreme Programming thing -
> where you have to have absolute faith that your units tests will catch
> every bug.

Unit tests cannot catch every bug because for that the unit tests should
be effectively bug-free themselves, which is hard to achieve given that
unit tests are typically not tested themselves.

For example, some weeks ago I discovered a buggy unit test which had
been in our test suite for over 10 years. It was testing that some
particular operations fail as expected, and this seemed to work
fine for years. Alas, it turned out that it was not the operations that
were failing: part of the test itself was buggy and failed every time,
regardless of which operation it tested. Moreover, after fixing the test
it appeared that half of the operations it tested should actually
succeed instead of failing. So that was one buggy unit test.

Also, given that the unit test must run in a limited time, it can only
test a finite set of parameters. So, a particularly mischievous "unit"
could just play back recorded responses for this finite set during
testing and fail miserably whenever called with other parameters. See
also: VW exhaust testing; see also: self-learning AI.

Öö Tiib

Jun 6, 2018, 12:42:53 PM
On Wednesday, 6 June 2018 14:51:31 UTC+3, Paavo Helde wrote:
>
> For example, some weeks ago I discovered a buggy unit test which had
> been in our test suite for over 10 years. It was testing that some
> particular operations are failing as expected, and this seemed to work
> fine for years. Alas, it appeared that not the operations are failing,
> but a test part itself was buggy and failed every time, regardless of
> which operation it tested. Moreover, after fixing the test it appeared
> that half of the operations it tested should actually succeed instead of
> failing. So that was one buggy unit test.

We have to make it a habit at least to code-review the unit tests. It
happens quite often that a unit is fed mock data that is logically
inconsistent (typically because of incomplete copy-paste edits) and the
test then checks that the unit gives nice results. When someone later
corrects the unit to sanity-check its input, the unit test breaks.

The thing that works is quality-driven development. Unit tests are a
good tool, but people who push them as silver bullets against numerous
(or all) problems should be treated like all the other snake oil salesmen.

Richard

Jun 6, 2018, 12:43:48 PM
[Please do not mail me a copy of your followup]

Ian Collins <ian-...@hotmail.com> spake the secret code
<fnp0jo...@mid.individual.net> thusly:

> From my personal experience, the best argument has been the dislike
>programmers have for writing tests for the sake of writing tests. I have
>had a number of colleagues who would, if they could get away with it,
>flat out refuse to write tests, that was the testers job.

This is what I meant when I referred to writing tests after the
implementation as feeling like a "developer tax". I'd already written
my code and debugged it and the test was at that point of no benefit
to me as a developer. It's overhead.

>They were
>however quite happy to adopt TDD because the tests were part of writing
>the code... Go figure!

...because when I write the test first, I get something out of it as a
developer and it's helping me write correct code (and therefore
spending less time in debugging sessions).

Richard

Jun 6, 2018, 12:47:52 PM
[Please do not mail me a copy of your followup]

Paavo Helde <myfir...@osa.pri.ee> spake the secret code
<pf8hrn$5ik$1...@dont-email.me> thusly:

>On 6.06.2018 1:02, Vir Campestris wrote:
>> On 05/06/2018 20:49, Richard wrote:
>>> This is the main benefit I get from TDD; the time between making the
>>> bug and finding/fixing the bug is reduced to seconds instead of days,
>>> weeks, months, years, or possibly never.
>>
>> I appreciate the value of unit tests. Where I disagree with Ron
>> Jeffreys et. al. is the turn-it-up-to-11 Extreme Programming thing -
>> where you have to have absolute faith that your units tests will catch
>> every bug.

I can't speak for Jeffries, but this subtle "catch *every* bug"
phrasing sounds like you're overstating the case. I've never heard
any TDD (or unit testing) advocate suggest that *every* bug will be
caught.

It's easy to dismiss UT/TDD advocates when you overstate their position.

>Also, given that the unit test must run in a limited time, it can only
>test a finite set of parameters. So, a particularly mischievous "unit"
>could just play back recorded responses for this finite set during
>testing and fail miserably whenever called with other parameters. See
>also: VW exhaust testing; see also: self-learning AI.

Unit tests are great for testing control flow logic. They're not so
good at things like numerical computation where the input domain is
huge and it's infeasible to test every combination of inputs and
verify the calculation for the outputs. You can use numerical insight
into the computation ("white box" testing) to sprinkle example inputs
at interesting points, but ultimately example-based testing isn't
going to give high confidence. I think this is where approaches like
fuzz testing or property-based testing have an advantage over example
based testing.

Öö Tiib

Jun 6, 2018, 1:06:38 PM
On Wednesday, 6 June 2018 19:47:52 UTC+3, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> Paavo Helde <myfir...@osa.pri.ee> spake the secret code
> <pf8hrn$5ik$1...@dont-email.me> thusly:
>
> >On 6.06.2018 1:02, Vir Campestris wrote:
> >> On 05/06/2018 20:49, Richard wrote:
> >>> This is the main benefit I get from TDD; the time between making the
> >>> bug and finding/fixing the bug is reduced to seconds instead of days,
> >>> weeks, months, years, or possibly never.
> >>
> >> I appreciate the value of unit tests. Where I disagree with Ron
> >> Jeffreys et. al. is the turn-it-up-to-11 Extreme Programming thing -
> >> where you have to have absolute faith that your units tests will catch
> >> every bug.
>
> I can't speak for Jeffries, but this subtle "catch *every* bug"
> phrasing sounds like you're overstating the case. I've never heard
> any TDD (or unit testing) advocate suggest that *every* bug will be
> caught.
>
> It's easy to dismiss UT/TDD advocates when you overstate their position.

We see the damage that such positions have done in industry.

It is typical that if you send a good quality assurance specialist for a
week to check the product of a team that has worked with the dream of
silver-bullet unit tests (that assure quality), then the release will
slip by a month or two because architectural-level issues have
crept in unnoticed.

It is sometimes difficult to manage the expectations of the stakeholders
involved because of that damage. It is irrelevant what the Jeffries
and Uncle Bobs meant; what matters is how the pointy-haired bosses heard
it.

bol...@cylonhq.com

Jun 7, 2018, 4:37:52 AM
I'm not sure they are tbh. I've worked in a number of companies that used
unit tests and I saw bugs in all of them. Plus when a coder wrote new code
the unit test was thrown in as an afterthought with little interest in making
it foolproof as opposed to making sure it simply returned OK. That and edge
cases usually being forgotten about anyway. IMO they breed a false sense of
security and complacency so it's far better to test the software manually
rather than programmatically.

Ian Collins

Jun 7, 2018, 6:35:21 AM
Test all of the software manually for each code change? Good luck with
that...

What you have seen is a pretty common situation when tests are bolted on
as an afterthought.

--
Ian

bol...@cylonhq.com

Jun 7, 2018, 7:38:49 AM
Yes. I'm talking about professional companies that have proper test teams, not
amateur hour startups who think doing a code update every hour and chucking it
over the wall is a sensible way to work because someone decided to call the
reactive headless chicken mode of development "Agile".

Öö Tiib

Jun 7, 2018, 2:01:08 PM
To fix defects that manual testing (or a later stage) found, I fully
support test driven development. IOW, first make a test that
demonstrates the problem by failing, fix the code, show that the test
passes after the fix, and let the bug's finder recheck it. It was a
clever bug that slipped through our defensive programming, unit tests,
debugging and code reviews, so it is worth the effort to make sure that
it has difficulty sneaking back unnoticed. Also, people who disagree
with "test driven design" can't find anything objectionable in that
scenario. We aren't designing now; we are fixing a bug correctly and
properly, right? Hot-fixes cause something even more serious in about
50% of cases.

woodb...@gmail.com

Jun 7, 2018, 10:56:38 PM
What you call "professional companies" I would probably call
Goliaths. Most, if not all of them, are increasingly lame.
Upstarts like https://duckduckgo.com and my company are here
to pick up the pieces.


Brian
Ebenezer Enterprises
https://github.com/Ebenezer-group/onwards

bol...@cylonhq.com

Jun 8, 2018, 4:25:15 AM
Tell that to Apple's $285 billion cash pile. And oddly they don't do a new
release of OS X or iOS every 5 minutes; they (try to) get it right first.

>Upstarts like https://duckduckgo.com and my company are here
>to pick up the pieces.

I'm sure Google is quaking in its boots.

Besides, 99% of startups that are any good get bought out by a corp anyway
and the rest just provide some hipsters with their organic soya latte money for
a few years then go bust.

Manfred

Jun 8, 2018, 12:11:38 PM
This sounds like adding a test case to the unit tests (besides fixing the
code), doesn't it?

Specifically for TDD, I think that for numerical processing code, or
complex systems where you can't go high in test coverage, it is
inherently limited, meaning that for proper quality you need to model
the problem domain first, and design accordingly.

That said, a good point that I see in TDD is that sometimes it can help
in understanding better the problem domain itself before starting to
develop the actual code. Something like writing a pilot program in a
reversed way.

Richard

Jun 8, 2018, 3:34:12 PM
[Please do not mail me a copy of your followup]

On Wed, 6 Jun 2018 09:42:37 -0700 (PDT)
=?UTF-8?B?w5bDtiBUaWli?= <oot...@hot.ee> wrote:

>The thing that works is quality driven development. Unit tests are good
>tool but people who push that these are silver bullets against numerous
>(all) problems should be treated like all the other snake oil salesmen.

Sorry, again, this just feels like a straw man position. I have never
seen anyone try to sell unit testing (before or after writing the
implementation) as a silver bullet.

Richard

Jun 8, 2018, 3:37:43 PM
[Please do not mail me a copy of your followup]

Manfred <non...@invalid.add> spake the secret code
<pfe9r5$1lu1$1...@gioia.aioe.org> thusly:

>Specifically for TDD, I think that for numerical processing code, or
>complex systems where you can't go high in test coverage, it is
>inherently limited, meaning that for proper quality you need to model
>the problem domain first, and design accordingly.

Property-based testing and fuzz testing shows some promise here.
However, I've never seen a TDD advocate that proposed it as an
alternative to understanding the problem domain or appropriate design,
so that feels like another straw man point.

Feel free to cite a case where a well known TDD advocate proposes it
as a substitute for understanding the problem domain.

I say "well known" because like anything, there are lots of people who
don't know what they are talking about and call it "TDD" or "Agile" or
whatever. I have literally seen people who don't know what they are
talking about say, in a lightning talk at an agile conference, that
the problem with agile is that it's not waterfall.

Öö Tiib

Jun 8, 2018, 6:29:04 PM
On Friday, 8 June 2018 22:34:12 UTC+3, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> On Wed, 6 Jun 2018 09:42:37 -0700 (PDT)
> =?UTF-8?B?w5bDtiBUaWli?= <oot...@hot.ee> wrote:
>
> >The thing that works is quality driven development. Unit tests are good
> >tool but people who push that these are silver bullets against numerous
> >(all) problems should be treated like all the other snake oil salesmen.
>
> Sorry, again, this just feels like a straw man position. I have never
> seen anyone try to sell unit testing (before or after writing the
> implementation) as a silver bullet.

It's not a straw man but reality. It is a real position actually at work.
I'm indifferent as to whether the Uncle Bobs and Jeffries actually meant it;
I hear them referred to as the source of that industry-damaging nonsense.
I have used unit tests for decades, but now they have been turned into
yet another piece of silver-bullet junk, perhaps thanks to those TDD
morons not making their positions clear.

Manfred

Jun 9, 2018, 9:45:38 PM
On 06/08/2018 09:37 PM, Richard wrote:
> [Please do not mail me a copy of your followup]
>
> Manfred <non...@invalid.add> spake the secret code
> <pfe9r5$1lu1$1...@gioia.aioe.org> thusly:
>
>> Specifically for TDD, I think that for numerical processing code, or
>> complex systems where you can't go high in test coverage, it is
>> inherently limited, meaning that for proper quality you need to model
>> the problem domain first, and design accordingly.
>
> Property-based testing and fuzz testing shows some promise here.
> However, I've never seen a TDD advocate that proposed it as an
> alternative to understanding the problem domain or appropriate design,
> so that feels like another straw man point.
>
> Feel free to cite a case where a well known TDD advocate proposes it
> as a substitute for understanding the problem domain.

I was thinking about the process itself, not about understanding the
problem domain. Obviously for any development technique such an
understanding is a precondition.
I doubt that for such systems having the process driven by a series of
test cases is a very effective approach.

Juha Nieminen

Jun 11, 2018, 3:57:13 AM
Vir Campestris <vir.cam...@invalid.invalid> wrote:
> I once had a bug that hung around for 18 months. Once we understood it
> we could set up the correct environment, and it would fail most days.
> Not every day though. No reasonable unit test can be run that long.

The idea of TDD is not to catch every single bug in existence. Its idea
is to catch simple bugs *immediately* when they are written. The sooner
in the development process a bug is caught, the better.

(TDD also has the beneficial side effect of building checks and guards
into the testing procedure: even if a particular piece of code doesn't
fail the tests written explicitly for it, but it affects something else
elsewhere, the tests written for that something else will potentially
catch the problem immediately.)

Daniel

Jun 11, 2018, 7:57:08 AM
On Monday, June 11, 2018 at 3:57:13 AM UTC-4, Juha Nieminen wrote:
>
> The idea of TDD is not to catch every single bug in existence. Its idea
> is to catch simple bugs *immediately* when they are written. The sooner
> in the development process a bug is caught, the better.
>
That's not how TDD evangelists such as Robert Martin characterize TDD. They
characterize it as a methodology where the tests completely capture the
requirements, and simultaneously validate those requirements. Martin doesn't
budge on the tests being the specification for the application, check out
some of the archived discussions on comp.software.extreme-programming.

Also, do skilled developers make "simple bugs"? I don't observe that, at
least for my understanding of "simple bugs". Check out some popular
repositories on github: do you see simple bugs in their issue logs? You'll
see issues for cross-platform compiler differences, and defects in attempts
at solving complicated problems, but "simple bugs"? For example, in github
projects that involve text processing, and due to C++'s defining
characteristics of near-infinite overhead for streams and "you pay for
what you don't need", there seems to be a recurring interest either in
providing a complete replacement for C++ floating point conversion
following netlib code and ACM papers (e.g. RapidJson), or alternatively
in attempting naive floating point conversion (e.g. sajson, pjson, gason)
with code like

    if (*s == '.') {
        ++s;
        double fraction = 1;
        while (isdigit(*s)) {
            fraction *= 0.1;
            result += (*s++ - '0') * fraction;
        }
    }

The latter, of course, can never be made correct, no matter how many unit
tests are made to pass, and the former finds it hard to clear its issue
log completely.

Daniel

bol...@cylonhq.com

Jun 11, 2018, 9:57:22 AM
I'm obviously missing something. Apart from being a pointless rewrite of
atof(), with no limit on how long the fraction read can be, what exactly is
wrong with the above code? As far as I can see it will produce the correct
result until fraction becomes zero. You could also do
"atoi(s++) / pow(10,strlen(s))" but I doubt it would be any more accurate
or quicker.

Paavo Helde

Jun 11, 2018, 10:07:10 AM
On 11.06.2018 14:56, Daniel wrote:
> On Monday, June 11, 2018 at 3:57:13 AM UTC-4, Juha Nieminen wrote:
>>
>> The idea of TDD is not to catch every single bug in existence. Its idea
>> is to catch simple bugs *immediately* when they are written. The sooner
>> in the development process a bug is caught, the better.
>
> Also, do skilled developers make "simple bugs"? I don't observe that, at
> least for my understanding of "simple bugs". Check out some popular
> repositories on github, do you see simple bugs in their issue logs?

This is like proposing that pedestrian bridges should be 20 cm wide with
no railings because skilled walkers can walk straight anyway. Yes they
can, most of the time.

I suspect that if there are no simple bugs in the issue logs this is
because these have been caught by unit tests before commit, that's
exactly what the unit tests are meant for.

Daniel

Jun 11, 2018, 10:43:46 AM
On Monday, June 11, 2018 at 9:57:22 AM UTC-4, bol...@cylonhq.com wrote:
> On Mon, 11 Jun 2018 04:56:51 -0700 (PDT)
> Daniel <daniel...@gmail.com> wrote:
> >On Monday, June 11, 2018 at 3:57:13 AM UTC-4, Juha Nieminen wrote:
> >complete replacement for C++ floating point conversion following netlib code
> >and ACM papers (e.g. RapidJson), or alternatively to attempt naive floating
> >conversion (e.g. sajson, pjson, gayson) with code like
> >
> > if (*s == '.') {
> > ++s;
> > double fraction = 1;
> > while (isdigit(*s)) {
> > fraction *= 0.1;
> > result += (*s++ - '0') * fraction;
> > }
> > }
> >
> >The latter, of course, can never be made correct, no matter how many unit
> >tests are made to pass, and the former is hard to clear the issue log
> >completely.
>
> I'm obviously missing something. ... what exactly is wrong with
> the above code? As far as I can see it will produce the correct result
> until fraction becomes zero. You could also do "atoi(s++) /
> pow(10,strlen(s))" but I doubt it would be any more accurate or quicker.

As you continue to perform the multiplication operations in the loop, you
are going to compute values that cannot be accurately represented in
floating-point variables. This error will accumulate over time and result in
the output of incorrect digits.

These issues are well known, and code written this way can never pass
round-trip tests of any breadth. Nevertheless, some projects do use naive
conversion methods, motivated by the sheer slowness of the standard library
conversion functions, such as sprintf, even compared to the dtoa
implementation on netlib by David Gay (http://www.netlib.org/fp/dtoa.c),
which is generally regarded as a safe implementation.

For a discussion of the issues with floating point conversion, see

http://kurtstephens.com/files/p372-steele.pdf

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.72.4656&rep=rep1&type=pdf

https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf

Daniel

Daniel

Jun 11, 2018, 11:11:47 AM
On Monday, June 11, 2018 at 10:07:10 AM UTC-4, Paavo Helde wrote:
>
> I suspect that if there are no simple bugs in the issue logs this is
> because these have been caught by unit tests before commit, that's
> exactly what the unit tests are meant for.

While popular github projects are generally well covered by unit tests, they
don't appear to my eye to be following TDD practices, or aspire to "100%
coverage" (whatever that means.) My own view is that the main reason there
are no simple bugs on master is the disinclination of developers to
embarrass themselves in front of the world, and a consequent increase in
alertness and attentiveness.

Daniel

Paavo Helde

Jun 11, 2018, 2:05:43 PM
On 11.06.2018 18:11, Daniel wrote:
> On Monday, June 11, 2018 at 10:07:10 AM UTC-4, Paavo Helde wrote:
>>
>> I suspect that if there are no simple bugs in the issue logs this is
>> because these have been caught by unit tests before commit, that's
>> exactly what the unit tests are meant for.
>
> While popular github projects are generally well covered by unit tests, they
> don't appear to my eye to be following TDD practices, or aspire to "100%
> coverage" (whatever that means.)

If they are well covered, this means that the unit tests are there for a
purpose as nobody is forcing the developers to use them. That purpose
might not be TDD (which is only tangentially related) or "100% test
coverage" (which is not a meaningful purpose and is impossible anyway).

For cross-platform projects an obvious purpose of unit tests would be to
make sure that at least the basic functionality works after porting the
code to a new hardware or software environment. However, one can just as
well use the test suite to ensure that one's changes have not broken
anything important. This has nothing to do with TDD.

Daniel

Jun 11, 2018, 2:30:34 PM
On Monday, June 11, 2018 at 2:05:43 PM UTC-4, Paavo Helde wrote:
>
> If they are well covered, this means that the unit tests are there for a
> purpose as nobody is forcing the developers to use them. That purpose
> might not be TDD (which is only tangentially related) or "100% test
> coverage" (which is not a meaningful purpose and is impossible anyway).
>
> For cross-platform projects an obvious purpose of unit tests would be to
> make sure that at least the basic functionality works after porting the
> code to a new hardware or software environment. However, one can as well
> to use the test suite for ensuring that one's changes have not broken
> anything important. This has nothing to do with TDD.

Agreed, on all points :-) Particularly about the "100%", which I find
especially irritating. There are projects that are reporting a "100%"
coverage from something called "coveralls", but I'm pretty sure that's not
actually a measure of, say, the kind of coverage that James Kuyper talked
about in another post.

Daniel

Öö Tiib

Jun 11, 2018, 4:33:28 PM
The 100% coverage typically means that every line of code was executed by
tests at least once. That concept is perhaps useful for interpreted-language
developers who don't have compilers and so could otherwise commit code with
syntax errors into the repo. For a good program that "coverall" is
certainly too low a bar, and so the whole metric is a red herring.


>
> Daniel

Juha Nieminen

Jun 12, 2018, 1:48:48 AM
Daniel <daniel...@gmail.com> wrote:
> if (*s == '.') {
> ++s;
> double fraction = 1;
> while (isdigit(*s)) {
> fraction *= 0.1;
> result += (*s++ - '0') * fraction;
> }
> }
>
> The latter, of course, can never be made correct, no matter how many unit
> tests are made to pass

TDD unit tests should test what the function is *specified* to do.

"With this input, the function should return this value." You can write a
test for that. If the function is buggy, it's under-specified. Which is a
problem in most programming. The programmer simply has a vague idea of what
a function ought to do, but doesn't think of edge cases, incorrect input,
and so on. There's often a distinct lack of "but what if I give it this
pathological input?" thinking.

In the above code a unit test would specify what happens if the input is
too long to fit into a double. If there's a unit test for that, then the
implementation cannot simply ignore that edge case.

Daniel

Jun 12, 2018, 1:34:03 PM
On Tuesday, June 12, 2018 at 1:48:48 AM UTC-4, Juha Nieminen wrote:
> Daniel <daniel...@gmail.com> wrote:
> > if (*s == '.') {
> > ++s;
> > double fraction = 1;
> > while (isdigit(*s)) {
> > fraction *= 0.1;
> > result += (*s++ - '0') * fraction;
> > }
> > }
> >
> > The latter, of course, can never be made correct, no matter how many unit
> > tests are made to pass
>
> In the above code a unit test would specify what happens if the input is
> too long to fit into a double.

It's not that the input is "too long", but rather that certain sequences of
digits, such as "0.1" or "0.2", don't have exact representations as
doubles. It's the accumulation of these errors within the summation that
can produce incorrect digits in the result.

TDD people would need some insight into floating point numbers to be
confident that they had sufficient test coverage, as they could easily pick
tests that would fortuitously pass. But nobody with insight into floating
point numbers would ever use the code shown above in the first place. I find
it interesting that the three github projects (sajson, pjson, gason)
actually did.

Perhaps a more common case is ad hoc parsers. Lots of people write ad hoc
parsers that use some regular expression checking, some substring checking,
and write some tests to see that they "work", at least for correct input.
But it doesn't take that much more effort to first write out the grammar in
a formal notation, and then implement a simple parser to process the terms
of that grammar. And you'd get a clear error message for incorrect input
practically for free.

Generally speaking, TDD overemphasizes the role of tests as a specification
of a problem, and their simultaneous validation of that specification.
Whatever heuristic value TDD offers, its biggest claims can't be justified.

Daniel

Dan Cross

Jun 12, 2018, 8:21:10 PM
In article <pf6pc0$1ml$1...@news.xmission.com>, Richard <> wrote:
>cr...@spitfire.i.gajendra.net (Dan Cross) spake the secret code
>[snip matters of general agreement]
>>Many of the most strident
>>proponents of TDD overstate its usefulness and attempt to jam it
>>into situations it is not well suited for: it is no substitute
>>for design or algorithm analysis, both of which it is (in my
>>opinion) poorly suited for.
>
>Can you show me a TDD proponent that *is* saying it is a substitute
>for design or algorithm analysis?
>
>I've been around many TDD proponents and none of them have ever said
>this.

See some of the links I posted earlier; I feel that they make
that argument.

- Dan C.

Daniel

Jun 12, 2018, 10:11:59 PM
On Tuesday, June 5, 2018 at 3:47:23 PM UTC-4, Richard wrote:
>
> >Many of the most strident
> >proponents of TDD overstate its usefulness and attempt to jam it
> >into situations it is not well suited for: it is no substitute
> >for design or algorithm analysis, both of which it is (in my
> >opinion) poorly suited for.
>
> Can you show me a TDD proponent that *is* saying it is a substitute
> for design or algorithm analysis?
>
There are no doubt many things that TDD proponents haven't said, but what
they have said, in particular, what Robert Martin has said, is that
specifications can be fully expressed by tests, and that the tests can fully
validate the specifications thus expressed. This would fall under the
heading of being a bold claim, and, in my opinion, one that cannot be
substantiated.

Daniel

Juha Nieminen

Jun 13, 2018, 12:27:57 AM
Daniel <daniel...@gmail.com> wrote:
> TDD people would need some insight into floating point numbers to be
> confident that they had sufficient test coverage, as they could easily pick
> tests that would fortuitously pass.

Or, maybe, if they had sufficient understanding and experience about TDD,
having to write proper tests would have guided them to think about the
problems and edge cases that converting a string to a floating point has.

As said, the basic idea of TDD is that the behavior of every function should
be accurately specified before it's implemented. Thus it would lead the
programmer to think "hmm, what *should* this function return with this
kind of input?" rather than just go and make some kind of untested
wishy-washy maybe-it-will-work-maybe-it-won't-I-don't-know haphazard
ad-hoc implementation.

As a side note, given that both the C and C++ standard libraries provide a
perfectly good conversion from an ascii string representation to a floating
point value (which is most probably better and more efficient than
anything they could write), why do those people feel the need to
implement their own?

Ian Collins

Jun 13, 2018, 5:17:54 AM
On 13/06/18 05:33, Daniel wrote:
>
> TDD people would need some insight into floating point numbers to be
> confident that they had sufficient test coverage, as they could easily pick
> tests that would fortuitously pass. But nobody with insight into floating
> point numbers would ever use the code shown above in the first place. I find
> it interesting that the three github projects (sajson, pjson, gason)
> actually did.

Are you saying that those who do not practice TDD don't need some
insight into floating point numbers?

> Perhaps a more common case is ad hoc parsers. Lots of people write ad hoc
> parsers that use some regular expression checking, some substring checking,
> and write some tests to see that they "work", at least for correct input.
> But it doesn't take that much more effort to first write out the grammar in
> a formal notation, and then implement a simple parser to process the terms
> of that grammar. And you'd get a clear error message for incorrect input
> practically for free.

In this case, I would map the grammar to tests.

> Generally speaking, TDD over emphasizes the role of tests as a specification
> of a problem, and their simultaneous validation of that specification.
> Whatever heuristic value TDD offers, it's biggest claims can't be justified.

I think you are over-simplifying the claims. No one claims tests are the
only specification; they are simply one layer of it. I have implemented
both back-of-an-envelope and detailed specifications with TDD. An
example of the latter was an implementation of the strtol family of
functions, which ended up with 48 tests, starting out with the error
conditions and building up to base conversions.

--
Ian.

Daniel

Jun 13, 2018, 9:24:03 AM
On Wednesday, June 13, 2018 at 5:17:54 AM UTC-4, Ian Collins wrote:
> On 13/06/18 05:33, Daniel wrote:
> >
> > TDD people would need some insight into floating point numbers to be
> > confident that they had sufficient test coverage, as they could easily pick
> > tests that would fortuitously pass. But nobody with insight into floating
> > point numbers would ever use the code shown above in the first place. I find
> > it interesting that the three github projects (sajson, pjson, gason)
> > actually did.
>
> Are you saying that those who do not practice TDD don't need some
> insight into floating point numbers?

I'm pretty sure that I've never said that, ever :-)

>
> I think you are over simplify the claims. No one claims tests are the
> only specification

Robert Martin has, or as near as, check out the old discussions in
comp.software.extreme-programming.

Best regards,
Daniel

Daniel

Jun 13, 2018, 10:24:44 AM
On Wednesday, June 13, 2018 at 12:27:57 AM UTC-4, Juha Nieminen wrote:

> As a side note, given that there's a perfectly good implementation of
> an ascii string representation to a floating point value both in the C
> and C++ standards

Actually, there currently aren't any good conversion functions between
double and char suitable for XML or JSON processing, although the C++ 17
from_chars and to_chars may fit the bill in the future. I think it's safe to
say that no open source project (or at least any that has users) uses the
C++ streams classes, which faithfully follow the "infinite cost abstraction"
and "you get what you don't need" principles. Most projects use strtod for
string to double and snprintf for double to string, and jump through hoops
to reverse the effect of localization, for example, by substituting the
locale's decimal point for a '.' when parsing a json fractional value.
And when using the "g" format to output, fixing up the end of string output
so that they don't illegally end with a '.', or unnecessary trailing zeros
(one zero after a '.' is necessary.)

> why do those people feel the need to implement their own?

Because of benchmarks. Open source tools compete for users, and one factor
is how they compare on benchmarks. Also, open source projects are partly for
the benefit of mankind, and partly as a hobby for their creators, and some
hobbyist/creators like to say they have the fastest parser. It would be nice
if the "zero cost abstraction" wasn't just a funny joke, but there it is.

Curiously, one of the most comprehensive and widely referred to JSON
benchmarks is sponsored by RapidJson, which implements double to string
conversion with Grisu2 from Florian Loitsch ACM Sigplan paper, and string to
double conversion following an ACM paper. And the benchmark file is a file
of floating point numbers. sajson, pjson, and gason will beat RapidJson on
speed, but they don't follow any ACM papers :-)

Daniel



Paavo Helde

Jun 13, 2018, 2:28:09 PM
On 12.06.2018 20:33, Daniel wrote:
> On Tuesday, June 12, 2018 at 1:48:48 AM UTC-4, Juha Nieminen wrote:
>> Daniel <daniel...@gmail.com> wrote:
>>> if (*s == '.') {
>>> ++s;
>>> double fraction = 1;
>>> while (isdigit(*s)) {
>>> fraction *= 0.1;
>>> result += (*s++ - '0') * fraction;
>>> }
>>> }
>>>
>>> The latter, of course, can never be made correct, no matter how many unit
>>> tests are made to pass
>>
>> In the above code a unit test would specify what happens if the input is
>> too long to fit into a double.
>
> It's not that the input is "too long", but rather that certain sequences of
> digits, such as "0.1" or "0.2", don't have exact representations as
> doubles. It's the accumulation of these errors within the summation that
> can produce incorrect digits in the result.

Let me play devil's advocate for a change!

I can understand the beauty in having a "perfect" function for
floating-point binary-decimal conversion. However, in reality one rarely
has exactly determined floating-point numbers, so the usefulness of such
a function remains pretty limited. Typically there can be deviations in
the last digits for multiple reasons, so nobody cares whether those
last digits get converted to decimal absolutely perfectly or not, as they
contain only noise anyway.

It's true that repeated binary-decimal conversion can cause a large
drift. But that's easy to fix: if it hurts, don't do that! There is no
reason why a number should be converted from binary to decimal and back
thousands of times.

Richard

Jun 13, 2018, 3:01:35 PM
[Please do not mail me a copy of your followup]

Paavo Helde <myfir...@osa.pri.ee> spake the secret code
<pfrnnf$baf$1...@dont-email.me> thusly:

>It's true that repeated binary-decimal conversion can cause a large
>drift. But that's easy to fix: if it hurts, don't do that! There is no
>reason why a number should be converted from binary to decimal and back
>thousands of times.

The only case I can think of where this happens, although not for
"thousands of times", is when transporting data between systems in a
text format and the differing systems result in drift in the
floating-point representation because they have different
implementations of "text to floating-point" and "floating-point to
text".

Ian Collins

Jun 13, 2018, 4:19:41 PM
Links? I'm pretty sure he doesn't make such a silly claim... You have
to start with some sort of requirement, no matter how vague!

--
Ian.

James Kuyper

Jun 13, 2018, 7:30:14 PM
On 06/13/2018 09:23 AM, Daniel wrote:
> On Wednesday, June 13, 2018 at 5:17:54 AM UTC-4, Ian Collins wrote:
...
>> I think you are over simplify the claims. No one claims tests are the
>> only specification
>
> Robert Martin has, or as near as, check out the old discussions in
> comp.software.extreme-programming.

You know what you're referring to - anyone else would have to guess. If
you could identify a specific message in which he made such a claim,
that would make it a lot easier for other people to check it out and
either agree with you or explain to you how you've misinterpreted his words.

Tim Rentsch

Jun 13, 2018, 9:26:10 PM
The links you posted made interesting reading. Thank you
for posting them.

Tim Rentsch

Jun 13, 2018, 9:29:36 PM
cr...@spitfire.i.gajendra.net (Dan Cross) writes:

> On a side note, I'm curious what your thoughts are on this:
> https://pdfs.semanticscholar.org/7745/5588b153bc721015ddfe9ffe82f110988450.pdf

Another very interesting read. Keep those good links a comin'.