
Putting project code in many different files


JiiPee

Jan 28, 2016, 6:21:38 AM
How do you organize the code in your project with regard to the number of
files you have?
Let's say you have 20-30 different options in a menu to do something, which
each take more than 50 lines of code. Would you separate them
into different files or put them in the same file? Is it a good idea to have
a lot of smaller files in a project, or to combine them?

I guess it's faster to compile if there are a lot of small files as only
that file needs to be compiled if changed?

Scott Lurndal

Jan 28, 2016, 9:19:42 AM
Generally, I have one header file and one source file per class.

Sometimes there is need for a private header file in addition to the
public header file, but that's rare and usually due to circular include
issues in larger projects.
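
For example, a minimal sketch (class name invented):

    // Widget.h -- one public header per class
    #ifndef WIDGET_H
    #define WIDGET_H

    class Widget {
    public:
        void draw() const;
    private:
        int id_ = 0;
    };

    #endif // WIDGET_H

    // Widget.cpp -- the matching source file
    #include "Widget.h"
    #include <iostream>

    void Widget::draw() const {
        std::cout << "widget " << id_ << '\n';
    }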

JiiPee

Jan 28, 2016, 11:12:02 AM
On 28/01/2016 14:19, Scott Lurndal wrote:
> JiiPee <n...@notvalid.com> writes:
>> How do you organize the code in your project with regard to the number of
>> files you have?
>> Let's say you have 20-30 different options in a menu to do something, which
>> each take more than 50 lines of code. Would you separate them
>> into different files or put them in the same file? Is it a good idea to have
>> a lot of smaller files in a project, or to combine them?
>>
>> I guess it's faster to compile if there are a lot of small files as only
>> that file needs to be compiled if changed?
> Generally, I have one header file and one source file per class.
>
Even if the classes are small and there are a lot of them? Say you
have 30 small classes, would you still put them in separate files?

Victor Bazarov

Jan 28, 2016, 11:18:21 AM
You call 30 "a lot"?

V
--
I do not respond to top-posted replies, please don't ask

Wouter van Ooijen

Jan 28, 2016, 11:26:14 AM
On 28-Jan-16 at 12:21 PM, JiiPee wrote:
When compilation time is a real problem, that is a good argument for
organizing your code. But that argument might actually favour mid-sized
source files: on one side, too many small files means too many runs of
the compiler when a common header is changed; on the other side, too
large a file causes the whole file to be re-compiled when only a
small part is changed. It is a balance....

But when compilation time is NOT an issue, I would favour readability
over anything else. But that again might depend on context. I like
related classes to be in one file (I like to print and read while taking
a bath), others might like to find the source of a class in a file with
the same name. Yet others couldn't care either way because their IDE
will find a class definition for them....

Find out what works for you!

Wouter van Ooijen

Paavo Helde

Jan 28, 2016, 11:32:47 AM
If I have small helper classes which only make sense when used together
with a certain larger class, then I generally put them in the same .h
and .cpp files with the larger class.
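
Roughly like this (a sketch, names invented):

    // Document.h -- the helper only makes sense together with Document,
    // so it shares the header instead of getting its own file
    #ifndef DOCUMENT_H
    #define DOCUMENT_H

    #include <cstddef>
    #include <string>

    class Document;

    class DocumentCursor {                 // small helper class
    public:
        explicit DocumentCursor(const Document& d) : doc_(&d) {}
        std::size_t position() const { return pos_; }
        void advance() { ++pos_; }
    private:
        const Document* doc_;
        std::size_t pos_ = 0;
    };

    class Document {                       // the larger class
    public:
        DocumentCursor cursor() const { return DocumentCursor(*this); }
        const std::string& text() const { return text_; }
    private:
        std::string text_;
    };

    #endif // DOCUMENT_H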

It is true that recompiling a single small file is faster than
recompiling a single large file. However, recompiling a lot of small
files can be significantly slower than recompiling a couple of large
files (especially if precompiled header support is not working well for
any reason).


Scott Lurndal

Jan 28, 2016, 11:38:40 AM
$ ls */*.h |wc -l
161

About 10 of these aren't class definitions, the rest are.

Scott Lurndal

Jan 28, 2016, 11:44:09 AM
On any modern system, it's not really significantly slower (at least on linux and
unix systems) because the common header files will end up cached in memory after
the first couple of compiles.

It may really suck on windows, but I've been able to successfully avoid
using windows[*] since it was introduced in 1985 (10 years after I started programming).

[*] Except for writing a couple of device drivers for NT 3.51 (Optical-WORM and associated
Jukebox drivers) in the 90's.

Öö Tiib

Jan 28, 2016, 11:44:17 AM
30 classes is perhaps a tiny toy tool for trying something out.
Often there are 100-200 classes even in relatively dumb embedded
software.

Splitting things up into several files is an especially good idea in
cases like the one you described, where these are some sort of GUI
widgets. GUI changes are the most likely and most frequent during
development. It is a major pain and a lot of wasted work time when 11
people edit one large *.rc file for the Windows app that they develop
and then try to merge the changes.

Paavo Helde

Jan 28, 2016, 12:31:28 PM
On 28.01.2016 18:43, Scott Lurndal wrote:
> Paavo Helde <myfir...@osa.pri.ee> writes:
>> However, recompiling a lot of small
>> files can be significantly slower than recompiling a couple of large
>> files (especially if precompiled header support is not working well for
>> any reason).
>
> On any modern system, it's not really significantly slower (at least on linux and
> unix systems) because the common header files will end up cached in memory after
> the first couple of compiles.

I believe the speed depends very much on how many templates these common
header files contain.

Cheers
Paavo

Ian Collins

Jan 28, 2016, 2:04:10 PM
I agree with Scott. Being able to perform parallel or distributed
builds is a big win for many smaller files.

--
Ian Collins

4ndre4

Jan 28, 2016, 2:15:21 PM
On 28/01/2016 11:21, JiiPee wrote:
[...]
> I guess it's faster to compile if there are a lot of small files as only
> that file needs to be compiled if changed?

One class per file. That's the principle I adopt, borrowed from other
languages that I use (Java first and foremost). I don't really care
about the number of files in a project. I can always organize them in
different folders, grouped by functionality. Concurrency also leads to
fewer conflicts/dependencies between developers, if the code is more
fragmented.

--
4ndre4
"The use of COBOL cripples the mind; its teaching should, therefore, be
regarded as a criminal offense." (E. Dijkstra)

4ndre4

Jan 28, 2016, 2:16:13 PM
On 28/01/2016 11:21, JiiPee wrote:

[...
> I guess it's faster to compile if there are a lot of small files as only
> that file needs to be compiled if changed?

...also, don't forget that for unit testing reasons, having one class
per file is a convenient solution.

Jorgen Grahn

Jan 28, 2016, 3:01:27 PM
It depends on what they do. If they are closely related, perhaps
siblings, perhaps they need the same helper functions which no one else
needs, perhaps they will change together ... then I'll probably put
them in one .h file and one .cc file.

But small size is not enough reason.

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .

Jorgen Grahn

Jan 28, 2016, 3:41:11 PM
On Thu, 2016-01-28, 4ndre4 wrote:
> On 28/01/2016 11:21, JiiPee wrote:
>
> [...
>> I guess it's faster to compile if there are a lot of small files as only
>> that file needs to be compiled if changed?
>
> ...also, don't forget that for unit testing reasons, having one class
> per file is a convenient solution.

Why? I don't see any connection or limitation there. Seems pretty
orthogonal to me.

The only thing would be, if a source file containing test cases starts
with a '#include <foo.h>', you'd expect the tests to cover all of the
functionality exposed via foo.h. But if the classes (and functions)
are so closely related that I want them all in foo.h, I probably want
to keep the tests together, too ...

Ian Collins

Jan 28, 2016, 3:51:09 PM
Jorgen Grahn wrote:
> On Thu, 2016-01-28, 4ndre4 wrote:
>> On 28/01/2016 11:21, JiiPee wrote:
>>
>> [...
>>> I guess it's faster to compile if there are a lot of small files as only
>>> that file needs to be compiled if changed?
>>
>> ...also, don't forget that for unit testing reasons, having one class
>> per file is a convenient solution.
>
> Why? I don't see any connection or limitation there. Seems pretty
> orthogonal to me.

Mocking.

If you are testing class A which depends on class B you don't want the
definition of A and B in the same file.
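
A rough sketch of why (invented names, assuming A takes its dependency
through an interface):

    // BInterface.h -- what A actually depends on
    struct BInterface {
        virtual ~BInterface() = default;
        virtual int fetch() = 0;
    };

    // A.h -- A never names the real B, only the interface
    class A {
    public:
        explicit A(BInterface& b) : b_(b) {}
        int doubled() { return 2 * b_.fetch(); }
    private:
        BInterface& b_;
    };

    // ATest.cpp -- the test substitutes a mock for the real B
    struct MockB : BInterface {
        int value = 21;
        int fetch() override { return value; }
    };
    // MockB mock; A a(mock);  -> a.doubled() == 42, under the test's control

If the real B's definition sat in the same file as A, the test could not
substitute the mock without dragging the real B in as well.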

--
Ian Collins

Scott Lurndal

Jan 28, 2016, 4:04:03 PM
A very valid point, if your tools support parallel builds.

Jorgen Grahn

Jan 28, 2016, 5:22:50 PM
That could be a reason in some cases I guess, although for me classes
in the same file tend to be siblings with similarities rather than
dependencies.

I still would like to see "4ndre4"'s argument, too.

Jorgen Grahn

Jan 28, 2016, 5:35:12 PM
On Thu, 2016-01-28, Ian Collins wrote:
Do you always want to break that dependency during unit testing?

E.g. if it's aggregation -- class A contains a B, or inherits from B.
The way I see that situation, when I'm testing A as a black box, that
aggregation is just part of A's implementation.

If that sounds like a naive question, it's because I'm very unsure
what people in general mean by "unit testing". I still don't see
a consensus.

Jorgen Grahn

Jan 28, 2016, 6:17:26 PM
On Thu, 2016-01-28, Scott Lurndal wrote:
> Ian Collins <ian-...@hotmail.com> writes:
...
>>I agree with Scott. Being able to perform parallel or distributed
>>builds is a big win for many smaller files.

You don't need /that/ many files for that effect to kick in, though.
Just enough to keep a handful of CPUs busy. It's hard IME to write
C++ code so that that becomes a problem.

(Although I don't know how that works when people use link-time
optimization, more or less delaying compilation until the linking
phase.)

> A very valid point, if your tools support parallel builds.

Are there any left that don't? GNU Make has done it for ages, and
it's an obviously useful feature these days, when everybody has a
bunch of mostly unused CPUs ...

Jerry Stuckle

Jan 28, 2016, 6:34:41 PM
Unit testing is testing small pieces of the code. For OO, as an
example, a unit test may be a single class. Once that class passes, it
can be used in testing other classes.

So in your example, first you would test B by itself. Then you would
test A.

In fact, you may not even have A when you're testing B. If you place
them in the same file, once you finish with A, you need to go back and
test B again, because its file has changed.


--
==================
Remove the "x" from my email address
Jerry Stuckle
jstu...@attglobal.net
==================

JiiPee

Jan 28, 2016, 7:53:06 PM
On 28/01/2016 19:15, 4ndre4 wrote:
> On 28/01/2016 11:21, JiiPee wrote:
> [...]
>> I guess it's faster to compile if there are a lot of small files as only
>> that file needs to be compiled if changed?
>
> One class per file. That's the principle I adopt, borrowed from other
> languages that I use (Java first and foremost). I don't really care
> about the number of files in a project. I can always organize them in
> different folders, grouped by functionality. Concurrency also leads to
> fewer conflicts/dependencies between developers, if the code is more
> fragmented.
>

OK. I also thought that having a lot of files is not much of a problem,
although I don't have much experience with it. I guess it's just a matter
of naming the files well.

JiiPee

Jan 28, 2016, 7:53:06 PM
Good point.

Öö Tiib

Jan 29, 2016, 2:33:23 AM
We typically name the folders by namespace and the files by the class
that the file implements (or declares, in the case of a .h file). If we
rename the class then we rename the file as well.
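
So a made-up class gui::MainMenu would live in gui/MainMenu.h:

    // file: gui/MainMenu.h -- the folder matches the namespace,
    // the file name matches the class
    namespace gui {

    class MainMenu {
    public:
        void show();
    };

    } // namespace gui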


Ian Collins

Jan 29, 2016, 4:48:27 AM
Jorgen Grahn wrote:
> On Thu, 2016-01-28, Ian Collins wrote:
>> Jorgen Grahn wrote:
>>> On Thu, 2016-01-28, 4ndre4 wrote:
>>>> On 28/01/2016 11:21, JiiPee wrote:
>>>>
>>>> [...
>>>>> I guess it's faster to compile if there are a lot of small files as only
>>>>> that file needs to be compiled if changed?
>>>>
>>>> ...also, don't forget that for unit testing reasons, having one class
>>>> per file is a convenient solution.
>>>
>>> Why? I don't see any connection or limitation there. Seems pretty
>>> orthogonal to me.
>>
>> Mocking.
>>
>> If you are testing class A which depends on class B you don't want the
>> definition of A an B in the same file.
>
> Do you always want to break that dependency during unit testing?
>
> E.g. if it's aggregation -- class A contains a B, or inherits from B.
> The way I see that situation, when I'm testing A as a black box, that
> aggregation is just part of A's implementation.
>
> If that sounds like a naive question, it's because I'm very unsure
> what people in general mean by "unit testing". I still don't see
> a consensus.

The term is rather vague, although in C++ a class is a good candidate
for a unit.

I think the case is clearer for encapsulation than it is for
inheritance. I will always mock an encapsulate class, with inheritance
I will mock the base class if it is in another project or library
otherwise I tend to be slack.

Most of my code has five files for each class: the header and
definition, the test header and definition and an interface definition
used to build a mock of the class. Whether a base/encapsulated class
has been tested is irrelevant, you still need a mock object to
simulate error conditions.
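
For a hypothetical class Pump the layout would look roughly like this
(names and extensions invented for illustration):

    Pump.h             class declaration
    Pump.cpp           class definition
    PumpTest.h         test fixture declaration
    PumpTest.cpp       the unit tests
    PumpInterface.xml  interface description used to generate the mock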

--
Ian Collins

JiiPee

Jan 29, 2016, 5:02:49 AM
Yes, the class name is obvious.
Do you normally have only one namespace for the project, which you use
for all the classes you create? Or several namespaces? I have not used
namespaces so much, but obviously for one library I used one
namespace.... but how about for the project?

Zaphod Beeblebrox

Jan 29, 2016, 7:22:38 AM
On Thursday, 28 January 2016 20:41:11 UTC, Jorgen Grahn wrote:

[...]
Because, if my file is Parser.cpp, I want to have a ParserTest.cpp that covers ONLY the unit tests for Parser.cpp and nothing else. I think it's convenient to follow a simple convention: one class per file, either source or test.

4ndre4

Jan 29, 2016, 1:06:32 PM
On 28/01/2016 22:22, Jorgen Grahn wrote:

[...]
> I still would like to see "4ndre4"'s argument, too.

It's a matter of conventions. I said it's convenient, because the
convention I follow is to have one class per file and therefore one test
class per file. Normally, if I have my Validator class in my
Validator.h/.cpp, I want my ValidatorTest.h/.cpp and that covers ONLY
the functionality in my Validator class. As for the word "unit", it is
true that the term is vague. Most of the time, a "unit" is a single method.
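
A minimal sketch (assuming, for illustration, a Validator with an
isValid() method and no particular test framework):

    // ValidatorTest.cpp -- covers ONLY the Validator class
    #include "Validator.h"
    #include <cassert>

    // here the "unit" is the single method Validator::isValid
    static void testIsValidRejectsEmptyInput() {
        Validator v;
        assert(!v.isValid(""));
    }

    int main() {
        testIsValidRejectsEmptyInput();
        return 0;
    }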

4ndre4

Jan 30, 2016, 6:21:25 AM
On 28/01/2016 23:34, Jerry Stuckle wrote:
[...]
> Once that class passes, it
> can be used in testing other classes.

What do you mean?

> In fact, you may not even have A when you're testing B. If you place
> them in the same file, once you finish with A, you need to go back and
> test B again, because its file has changed.

What?!

Jorgen Grahn

Jan 30, 2016, 9:13:19 AM
On Fri, 2016-01-29, 4ndre4 wrote:
> On 28/01/2016 22:22, Jorgen Grahn wrote:
>
> [...]
>> I still would like to see "4ndre4"'s argument, too.

For reference, the claim was simply

...also, don't forget that for unit testing reasons, having one
class per file is a convenient solution.

> It's a matter of conventions. I said it's convenient, because the
> convention I follow is to have one class per file and therefore one test
> class per file. Normally, if I have my Validator class in my
> Validator.h/.cpp, I want my ValidatorTest.h/.cpp and that covers ONLY
> the functionality in my Validator class.

Ok, so it's convenient because it's 1+1+1: three things that belong
together instead of just a pair.

I agree -- for most cases. But sometimes (let's say 5%) some other
consideration makes following the convention suboptimal.

[Philosophy alert]

I seem to be more willing to make exceptions than many programmers
(or perhaps more explicit about it). To me, a convention needs to be
broken as soon as it's not a good fit. That, to me, actually makes
the convention /more/ useful, because when you see it, you know it's
fitting.

Or you could say that I want the edge cases to stand out, not
masquerade as normal cases.

All that is also related to the quote "a foolish consistency is the
hobgoblin of little minds" -- except I wish Ralph Waldo Emerson had
phrased it in a less insulting way.

Öö Tiib

Jan 30, 2016, 11:23:38 AM
I like every class to be in some named namespace. I like different
namespace for each module. Sometimes I will have nested namespaces
when there are sub-modules.

4ndre4

Jan 30, 2016, 12:50:24 PM
On 30/01/2016 14:12, Jorgen Grahn wrote:
[...]
> I agree -- for most cases. But sometimes (let's say 5%) some other
> consideration makes following the convention suboptimal.

Such as?

Jerry Stuckle

Jan 30, 2016, 1:09:17 PM
On 1/30/2016 6:21 AM, 4ndre4 wrote:
> On 28/01/2016 23:34, Jerry Stuckle wrote:
> [...]
>> Once that class passes, it
>> can be used in testing other classes.
>
> What do you mean?
>

Once you have tested B and are sure it works, you can use it in other
tests. For instance, if A is derived from B or has B as a member,
before you can thoroughly test A you have to ensure B works.

>> In fact, you may not even have A when you're testing B. If you place
>> them in the same file, once you finish with A, you need to go back and
>> test B again, because its file has changed.
>
> What?!
>

Yes. Good quality control dictates that any time a file is changed,
tests on everything in that file must be repeated. Someone may say they
didn't change B, and they may not have - intentionally, at least. But
you don't know for sure without testing, and not testing is how bugs
sneak into "working code".

4ndre4

Jan 30, 2016, 1:16:23 PM
On 30/01/2016 18:09, Jerry Stuckle wrote:
[...]
> Once you have tested B and are sure it works, you can use it in other
> tests. For instance, if A is derived from B or has B as a member,
> before you can thoroughly test A you have to ensure B works.

No, it doesn't matter. The order in which you run the tests does not
really matter. And you don't strictly "use" other classes in unit
tests. Collaborations between classes should be mocked. The tests for
class B can run independently from those for class A, even though A
inherits from B.

> Yes. Good quality control dictates that any time a file is changed,
> tests on everything in that file must be repeated.

Why should Something.cpp ever be changed or even touched, if I am
running the tests contained in SomethingTest.cpp?

Paavo Helde

Jan 30, 2016, 2:01:47 PM
On 30.01.2016 20:09, Jerry Stuckle wrote:
> On 1/30/2016 6:21 AM, 4ndre4 wrote:
>> On 28/01/2016 23:34, Jerry Stuckle wrote:
>> [...]
>>> Once that class passes, it
>>> can be used in testing other classes.
>>
>> What do you mean?
>>
>
> Once you have tested B and are sure it works, you can use it in other
> tests. For instance, if A is derived from B or has B as a member,
> before you can thoroughly test A you have to ensure B works.

I think this holds more at the development stage (and assuming B is not
mocked), not at the later stage when hidden bugs get revealed for any
reason. And this does not contradict putting other classes in the same
file in any way.

>>> In fact, you may not even have A when you're testing B. If you place
>>> them in the same file, once you finish with A, you need to go back and
>>> test B again, because its file has changed.

But of course. All the unit tests run automatically each time
anything changes in the repository. So there is no problem with
testing B again; this will be done countless times.

Cheers
Paavo


Jerry Stuckle

Jan 30, 2016, 5:28:27 PM
On 1/30/2016 1:16 PM, 4ndre4 wrote:
> On 30/01/2016 18:09, Jerry Stuckle wrote:
> [...]
>> Once you have tested B and are sure it works, you can use it in other
>> tests. For instance, if A is derived from B or has B as a member,
>> before you can thoroughly test A you have to ensure B works.
>
> No, it doesn't matter. The order in which you run the tests does not
> really matter. And you don't strictly "use" other classes in unit
> tests. Collaborations between classes should be mocked. The tests for
> class B can run independently from those for class A, even though A
> inherits from B.
>

It does matter, and it's not at all unusual for a class to have other
classes as members. You start out testing single classes. Then you can
test classes which include the first class.

And you can run the tests for class B independent of class A - but since
A is derived from B, you can't test A without knowing B is working.

>> Yes. Good quality control dictates that any time a file is changed,
>> tests on everything in that file must be repeated.
>
> Why should Something.cpp ever be changed or even touched, if I am
> running the tests contained in SomethingTest.cpp?
>

Please read the thread. We are talking about when there are multiple
classes in one file.

Jerry Stuckle

Jan 30, 2016, 5:32:03 PM
On 1/30/2016 2:01 PM, Paavo Helde wrote:
> On 30.01.2016 20:09, Jerry Stuckle wrote:
>> On 1/30/2016 6:21 AM, 4ndre4 wrote:
>>> On 28/01/2016 23:34, Jerry Stuckle wrote:
>>> [...]
>>>> Once that class passes, it
>>>> can be used in testing other classes.
>>>
>>> What do you mean?
>>>
>>
>> Once you have tested B and are sure it works, you can use it in other
>> tests. For instance, if A is derived from B or has B as a member,
>> before you can thoroughly test A you have to ensure B works.
>
> I think this holds more at the development stage (and assuming B is not
> mocked), not at the later stage when hidden bugs get revealed by any
> reason. And this does not contradict putting other classes in the same
> file in any way.
>

Testing never stops until the product is ready to ship. Even then, it
may not stop.

And any time a file is changed, all classes in that file must be tested
to ensure no bugs have been inadvertently introduced.

>>>> In fact, you may not even have A when you're testing B. If you place
>>>> them in the same file, once you finish with A, you need to go back and
>>>> test B again, because its file has changed.
>
> But of course. All the unit tests run automatically each time
> anything changes in the repository. So there is no problem with
> testing B again; this will be done countless times.
>
> Cheers
> Paavo
>
>

Yes, if the testing is automated (and it is in most large projects),
that is true. But it still takes time and manpower to ensure the tests
get run and to verify the results.

Ian Collins

Jan 30, 2016, 5:33:37 PM
Jerry Stuckle wrote:
> On 1/30/2016 1:16 PM, 4ndre4 wrote:
>> On 30/01/2016 18:09, Jerry Stuckle wrote:
>> [...]
>>> Once you have tested B and are sure it works, you can use it in other
>>> tests. For instance, if A is derived from B or has B as a member,
>>> before you can thoroughly test A you have to ensure B works.
>>
>> No, it doesn't matter. The order in which you run the tests does not
>> really matter. And you don't strictly "use" other classes in unit
>> tests. Collaborations between classes should be mocked. The tests for
>> class B can run independently from those for class A, even though A
>> inherits from B.
>>
>
> It does matter, and it's not at all unusual for a class to have other
> classes as members. You start out testing single classes. Then you can
> test classes which include the first class.
>
> And you can run the tests for class B independent of class A - but since
> A is derived from B, you can't test A without knowing B is working.

You can, you just use a mock of B. You can't properly test A without
being able to control the behaviour of the B methods it is using.

--
Ian Collins

Jerry Stuckle

Jan 30, 2016, 5:54:28 PM
But a mock of B may or may not behave the same. Initial tests of A
would be OK, but not conclusive, since A's operation depends on B. And
the interaction between the two is part of the unit test of A.

Ian Collins

Jan 30, 2016, 6:09:23 PM
Jerry Stuckle wrote:
> On 1/30/2016 5:33 PM, Ian Collins wrote:
>> Jerry Stuckle wrote:
>>> On 1/30/2016 1:16 PM, 4ndre4 wrote:
>>>> On 30/01/2016 18:09, Jerry Stuckle wrote:
>>>> [...]
>>>>> Once you have tested B and are sure it works, you can use it in other
>>>>> tests. For instance, if A is derived from B or has B as a member,
>>>>> before you can thoroughly test A you have to ensure B works.
>>>>
>>>> No, it doesn't matter. The order in which you run the tests does not
>>>> really matter. And you don't strictly "use" other classes in unit
>>>> tests. Collaborations between classes should be mocked. The tests for
>>>> class B can run independently from those for class A, even though A
>>>> inherits from B.
>>>>
>>>
>>> It does matter, and it's not at all unusual for a class to have other
>>> classes as members. You start out testing single classes. Then you can
>>> test classes which include the first class.
>>>
>>> And you can run the tests for class B independent of class A - but since
>>> A is derived from B, you can't test A without knowing B is working.
>>
>> You can, you just use a mock of B. You can't properly test A without
>> being able to control the behaviour of the B methods it is using.
>>
>
> But a mock of B may or may not behave the same.

A mock of B will behave exactly as the tests define it.

> Initial tests of A
> would be OK, but not conclusive, since A's operation depends on B. And
> the interaction between the two is part of the unit test of A.

Consider the case where B isn't part of the project code but a
third-party library.

--
Ian Collins

Jerry Stuckle

Jan 30, 2016, 9:41:08 PM
Which may or may not be the actual behavior of the class.

>> Initial tests of A
>> would be OK, but not conclusive, since A's operation depends on B. And
>> the interaction between the two is part of the unit test of A.
>
> Consider the case where B isn't part of the project code but a
> third-party library.
>

We're not talking about something that isn't part of the project.

Ian Collins

Jan 31, 2016, 1:14:32 AM
Which would be pretty f'ing useless.

--
Ian Collins

Alf P. Steinbach

Jan 31, 2016, 2:15:02 AM
No, it's necessary to test the pieces of each class working together, at
the level of unit testing. Traditional integration testing is at a
higher level. If this is not done then one risks that the tests for B do
not guarantee that B will behave the same as the mock-up of B for the A
testing, e.g. due to these tests not completely nailing down the B
behavior in all test cases and test sequences.

As a perhaps silly example, B might be a string class with encoding
checks, one of which is `is_ascii` (comes to mind because I've recently
posted about such a function). Perhaps tests for B fail to test
`is_ascii` for the edge case of empty string. Perhaps the mockup returns
`false` in this case, causing testing of A with the mockup to succeed,
while the real B, per the design intention, returns `true`.
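
In code the mismatch would look something like this (a sketch, not the
real classes):

    #include <string>

    // the real B: by design intention, an empty string is (vacuously) ASCII
    bool is_ascii(const std::string& s) {
        for (unsigned char c : s)
            if (c > 127) return false;
        return true;                       // "" => true
    }

    // the mock-up of B, coded just to support the A tests at hand;
    // nobody thought about the empty-string edge case
    bool mock_is_ascii(const std::string& s) {
        if (s.empty()) return false;       // diverges from the intention
        for (unsigned char c : s)
            if (c > 127) return false;
        return true;
    }
    // A passes its tests against mock_is_ascii, yet behaves differently
    // with the real B whenever the string is empty.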

To discover that the B tests are incomplete, the real B needs to be
exercised in its intended environment. It's like, no plan survives
contact with the enemy. No set of tests survives contact with reality,
although it can survive contact with just mockups, which after all are
coded to just support the tests at hand, aren't they?

So in that respect Jerry Stuckle is right, as I see it.

But Jerry can possibly be wrong regarding his statement that "before you
can thoroughly test A you have to ensure B works". It depends very much
on what "thoroughly" means here. I suspect that Jerry's intended meaning
of "thoroughly" was and is that A is tested with the real B, while Ian's
meaning of "thoroughly" is that A is sort of thoroughly tested with a
mock-up of B, and that testing with real B is something else, maybe
called something else?


Cheers, wondering,

- Alf

Ian Collins

Jan 31, 2016, 2:42:27 AM
Alf P. Steinbach wrote:
>
> But Jerry can possibly be wrong regarding his statement that "before you
> can thoroughly test A you have to ensure B works". It depends very much
> on what "thoroughly" means here. I suspect that Jerry's intended meaning
> of "thoroughly" was and is that A is tested with the real B, while Ian's
> meaning of "thoroughly" is that A is sort of thoroughly tested with a
> mock-up of B, and that testing with real B is something else, maybe
> called something else?

You can't thoroughly test A with a real B because you can't control the
behaviour of the real B's methods. They will do what they naturally do,
but if your test requires one of B's methods to fail in some way you
either have to pollute B with the code to synthesise a failure, or mock it.

Yes at some point (typically in integration testing) you will be testing
the real code in combination, but for unit testing A, you need to mock B.
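
A sketch of what I mean (invented names):

    #include <stdexcept>

    struct BInterface {
        virtual ~BInterface() = default;
        virtual int read() = 0;
    };

    // a mock B that fails on demand -- the real B stays unpolluted
    struct FailingB : BInterface {
        int read() override {
            throw std::runtime_error("synthesised failure");
        }
    };
    // The test hands a FailingB to A and checks that A copes with the error.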

--
Ian Collins

Jorgen Grahn

Jan 31, 2016, 3:11:47 AM
On Sat, 2016-01-30, 4ndre4 wrote:
> On 30/01/2016 14:12, Jorgen Grahn wrote:
> [...]
>> I agree -- for most cases. But sometimes (let's say 5%) some other
>> consideration makes following the convention suboptimal.
>
> Such as?

Actually, the considerations are always the same: readability,
maintainability and so on. I meant to say sometimes the 1:1 rule
doesn't optimize for that.

My example elsewhere in the thread was a set of small sibling classes
which are similar, but cannot be a template. But I'm sure there are
other examples.

Jorgen Grahn

Jan 31, 2016, 8:24:16 AM
^^^
(I assume you meant "encapsulated" there.)

To me, that sounds, well, extreme. Where would you stop? If you
don't mock std::string, std::map etc, what's the difference between
them and one of your own classes (which you have tested elsewhere?)

class Foo { Bar bar; ... };

I've always figured bar is an implementation detail of Foo; that its
state is part of Foo's state. So I ignore it when I'm testing Foo
(assuming Bar itself doesn't do I/O or something).

Question: if these are two different ways of thinking about unit
tests for classes, do they have names? Can I refer to books, when
arguing with coworkers about how to proceed with the testing? It's
so frustrating when you only have your own opinions and prejudices
as a starting point.

(One entry point could perhaps be this and related entries:
http://c2.com/cgi/wiki?StandardDefinitionOfUnitTest
but there's no consensus there either.)

I've tried to think about benefits of mocking the encapsulated bar ...
one thing would be that you make it explicit what contract Foo expects
Bar to keep. You can also document what happens to Foo if Bar doesn't
keep the contract, but I'm unsure if that has any value. I suspect
you can list better arguments. You gave one below:

> Whether a base/encapsulated class has been tested is
> irrelevant, you still need a mock object to simulate error
> conditions.

Hm ... yes, I can easily see memory allocation failures fitting in
there. But is there anything else, typically? It seems to me that a
class Bar which doesn't need mocking for /really/ obvious reasons like
it's doing I/O, will fail due to
- programming errors (covered by Bar's unit tests)
- incorrect usage (covered by Foo's unit tests)
- memory allocation problems

Jorgen Grahn

Jan 31, 2016, 8:32:25 AM
On Sat, 2016-01-30, 4ndre4 wrote:
> On 28/01/2016 23:34, Jerry Stuckle wrote:
> [...]
>> Once that class passes, it
>> can be used in testing other classes.
>
> What do you mean?
>
>> In fact, you may not even have A when you're testing B. If you place
>> them in the same file, once you finish with A, you need to go back and
>> test B again, because its file has changed.
>
> What?!

Big deal; you run your unit tests constantly anyway.
(In my case; "make check" and the tests are built and executed, and
they print just three lines of "everything is fine" if all tests still
pass.)

Jorgen Grahn

Jan 31, 2016, 9:00:40 AM
On Sat, 2016-01-30, Jerry Stuckle wrote:
> On 1/30/2016 2:01 PM, Paavo Helde wrote:
>> On 30.01.2016 20:09, Jerry Stuckle wrote:
>>> On 1/30/2016 6:21 AM, 4ndre4 wrote:
>>>> On 28/01/2016 23:34, Jerry Stuckle wrote:

>>>>> In fact, you may not even have A when you're testing B. If you place
>>>>> them in the same file, once you finish with A, you need to go back and
>>>>> test B again, because its file has changed.
>>
>> But of course. All the unit tests run automatically each time
>> anything changes in the repository. So there is no problem with
>> testing B again; this will be done countless times.
>
> Yes, if the testing is automated (and it is in most large projects),

There is no reason it should be manual in small or medium-sized
projects ... building and running unit tests is one of the most easily
automated tasks I can think of, provided you have half-decent tools.

(Except if you're cross-compiling and this is the only thing you want
to run on the host. But then doing it manually would be even harder.)

> that is true. But it still takes time and manpower to ensure the tests
> get run and to verify the results.

IMO, one major key to get unit tests to work in practice is:
- They're trivial to run as part of the build (e.g. everyone on the
project knows "make all test" will always run the correct set
of tests).
- They don't take a lot of extra time, compared to plain incremental
builds.
- The output when everything is fine doesn't fill your screen with
information you won't read ("hey, did you know these 4711 tests
all passed -- again?" followed by a complete list).

Other things (like reports and statistics from nightly builds) are
optional, but without all three bullets above, unit tests will never
work well (IMO, of course).

4ndre4

Jan 31, 2016, 1:42:59 PM
On 30/01/2016 22:28, Jerry Stuckle wrote:

[...]
> It does matter, and it's not at all unusual for a class to have other
> classes as members. You start out testing single classes. Then you can
> test classes which include the first class.

No, it doesn't matter in the least. Tests are normally executed in
random order. The unit tests for class A are supposed to test ONLY the
behaviour for class A and not assume any dependency. If any dependency
is needed, it should be mocked.

>> Why should Something.cpp ever be changed or even touched, if I am
>> running the tests contained in SomethingTest.cpp?
>
> Please read the thread. We are talking about when there are multiple
> classes in one file.

My question stands.

Ian Collins

Jan 31, 2016, 4:44:29 PM
Yes.

> To me, that sounds, well, extreme. Where would you stop? If you
> don't mock std::string, std::map etc, what's the difference between
> them and one of your own classes (which you have tested elsewhere?)
>
> class Foo { Bar bar; ... };
>
> I've always figured bar is an implementation detail of Foo; that its
> state is part of Foo's state. So I ignore it when I'm testing Foo
> (assuming Bar itself doesn't do I/O or something).

There's the thing: most of the work I do these days is one or two steps
above the operating system interface, so there's a strong chance that
Bar will be doing I/O or something. The same degree of separation
applies to most embedded systems.

> Question: if these are two different ways of thinking about unit
> tests for classes, do they have names? Can I refer to books, when
> arguing with coworkers about how to proceed with the testing? It's
> so frustrating when you only have your own opinions and prejudices
> as a starting point.

Doesn't anyone have colleagues any more? Sorry, I just hate the term
"coworkers"!

Back to the topic: what to reuse and what to mock? This is as much a
philosophical discussion as a technical one. Going too far in either
direction will result in either an incomprehensible mess or
under-testing. Any project team will have to find their own happy place and
where that place lives between the two extremes will depend on the
domain and the personalities of the team members.

The last team I worked with were all detail focused (pedants if you
like) and working on a safety critical control system. This combination
naturally led to mocking everything except the class under test. Each
class had 5 files associated with it: the header and definition, the
test header and tests, and the fifth being an (XML) interface description
used by the test framework to generate mocks.

> (One entry point could perhaps be this and related entries:
> http://c2.com/cgi/wiki?StandardDefinitionOfUnitTest
> but there's no consensus there either.)
>
> I've tried to think about benefits of mocking the encapsulated bar ...
> one thing would be that you make it explicit what contract Foo expects
> Bar to keep. You can also document what happens to Foo if Bar doesn't
> keep the contract, but I'm unsure if that has any value. I suspect
> you can list better arguments. You gave one below:

Another common case is where you want Bar to be in a particular state
when the method under test is called. If for example Bar implements a
state machine you may have to jump through hoops to get the real Bar
into the correct state.
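
E.g. (a sketch, invented names):

    // The real Bar reaches Faulted only after a long call sequence;
    // the mock can simply be constructed in that state.
    enum class State { Idle, Running, Faulted };

    struct BarInterface {
        virtual ~BarInterface() = default;
        virtual State state() const = 0;
    };

    struct MockBar : BarInterface {
        explicit MockBar(State initial) : s_(initial) {}
        State state() const override { return s_; }
    private:
        State s_;
    };
    // MockBar bar(State::Faulted);  // hand this to the method under test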

>> Whether a base/encapsulated class class has been tested is
>> irrelevant, you still need a mock object to simulate error
>> conditions.
>
> Hm ... yes, I can easily see memory allocation failures fitting in
> there. But is there anything else, typically? It seems to me that a
> class Bar which doesn't need mocking for /really/ obvious reasons like
> it's doing I/O, will fail due to
> - programming errors (covered by Bar's unit tests)
> - incorrect usage (covered by Foo's unit tests)
> - memory allocation problems

Another case I often have is a function that filters a list of files,
say comparing entries from different filesystem snapshots. I want the
objects (or system call) that returns the file data to give me the data
(file type, size, timestamps) I need for the test.

Hope this helps,

--
Ian Collins

Jerry Stuckle

Jan 31, 2016, 6:09:34 PM
Which is exactly why you wouldn't be using a mock of B. You'd use the
real, *tested* B.

Jerry Stuckle

Jan 31, 2016, 6:13:25 PM
Alf, what I mean is that A is tested with the real B. If A depends on
B, you can't ensure A works properly if you don't have a working B to
test with.

Jerry Stuckle

Jan 31, 2016, 6:15:24 PM
The behavior of the real B's methods when used with A is the same in
testing as in production. If the test requires one of B's methods to
fail, then it will fail in both test and production. If it doesn't, you
have a problem - in both testing and production.

Jerry Stuckle

Jan 31, 2016, 6:17:46 PM
On 1/31/2016 1:42 PM, 4ndre4 wrote:
> On 30/01/2016 22:28, Jerry Stuckle wrote:
>
> [...]
>> It does matter, and it's not at all unusual for a class to have other
>> classes as members. You start out testing single classes. Then you can
>> test classes which include the first class.
>
> No, it doesn't matter in the least. Tests are normally executed in
> random order. The unit tests for class A are supposed to test ONLY the
> behaviour for class A and not assume any dependency. If any dependency
> is needed, it should be mocked.
>

You cannot properly test a class if the behavior of dependent classes is
mocked. You cannot ensure the dependent class's behavior will be the
same when the real dependency is used.

Tests are not in completely random order. Before testing a dependent
class, the class it is dependent on requires testing.

>>> Why should Something.cpp ever be changed or even touched, if I am
>>> running the tests contained in SomethingTest.cpp?
>>
>> Please read the thread. We are talking about when there are multiple
>> classes in one file.
>
> My question stands.
>

Your question is meaningless because it's not part of what we are
discussing.

Jerry Stuckle

Jan 31, 2016, 6:25:50 PM
On 1/31/2016 9:00 AM, Jorgen Grahn wrote:
> On Sat, 2016-01-30, Jerry Stuckle wrote:
>> On 1/30/2016 2:01 PM, Paavo Helde wrote:
>>> On 30.01.2016 20:09, Jerry Stuckle wrote:
>>>> On 1/30/2016 6:21 AM, 4ndre4 wrote:
>>>>> On 28/01/2016 23:34, Jerry Stuckle wrote:
>
>>>>>> In fact, you may not even have A when you're testing B. If you place
>>>>>> them in the same file, once you finish with A, you need to go back and
>>>>>> test B again, because its file has changed.
>>>
>>> But of course. All the unit tests run automatically each time
>>> anything changes in the repository. So there is no problem with
>>> testing B again; this will be done countless times.
>>
>> Yes, if the testing is automated (and it is in most large projects),
>
> There is no reason it should be manual in small or medium-sized
> projects ... building and running unit tests is one of the most easily
> automated tasks I can think of, provided you have half-decent tools.
>

It depends. I've seen a lot of small projects where the effort of
automating the tests exceeds the manual effort required to do the testing.

> (Except if you're cross-compiling and this is the only thing you want
> to run on the host. But then doing it manually would be even harder.)
>

Maybe, maybe not. The tests would be the same, even when
cross-compiling. And, in fact, cross-compilation is even more likely to
be manually tested because the tools to do it automatically may not be
available on the target system, system constraints may not allow for the
tests to be run, or any of a number of other reasons.

>> that is true. But it still takes time and manpower to ensure the tests
>> get run and to verify the results.
>
> IMO, one major key to get unit tests to work in practice is:
> - They're trivial to run as part of the build (e.g. everyone on the
> project knows "make all test" will always run the correct set
> of tests).
> - They don't take a lot of extra time, compared to plain incremental
> builds.
> - The output when everything is fine doesn't fill your screen with
> information you won't read ("hey, did you know these 4711 tests
> all passed -- again?" followed by a complete list).
>

Tests should never be part of the build and should be performed by a
separate group, with the tests designed based on the documentation.

Programmers do a lot of building - you don't need to test with every
build. And the people writing the code should never be part of the test
environment - and aren't, in well-run projects. If the programmer
writes the tests, he/she will have a tendency to write the tests based
on the code they wrote (natural inclination). A separate group writing
the test will not do that.

> Other things (like reports and statistics from nightly builds) are
> optional, but without all three bullets above, unit tests will never
> work well (IMO, of course).
>
> /Jorgen
>

Builds may be nightly - not necessarily. And by the time you get far
enough to do a nightly build, you should already have a lot of tests run.

Ian Collins

Jan 31, 2016, 6:34:31 PM
Jerry Stuckle wrote:
> On 1/31/2016 2:42 AM, Ian Collins wrote:
>> Alf P. Steinbach wrote:
>>>
>>> But Jerry can possibly be wrong regarding his statement that "before you
>>> can thoroughly test A you have to ensure B works". It depends very much
>>> on what "thoroughly" means here. I suspect that Jerry's intended meaning
>>> of "thoroughly" was and is that A is tested with the real B, while Ian's
>>> meaning of "thoroughly" is that A is sort of thoroughly tested with a
>>> mock-up of B, and that testing with real B is something else, maybe
>>> called something else?
>>
>> You can't thoroughly test A with a real B because you can't control the
>> behaviour of the real B's methods. They will do what they naturally do,
>> but if your test requires one of B's methods to fail in some way you
>> either have to pollute B with the code to synthesise a failure, or mock it.
>>
>> Yes at some point (typically in integration testing) you will be testing
>> the real code in combination, but for unit testing A, you need to mock B.
>>
>
> The behavior of the real B's methods when used with A is the same in
> testing as in production. If the test requires one of B's methods to
> fail, then it will fail in both test and production.

So how do you get one of B's methods to throw an out-of-memory
exception? Or a <you name it> exception? Or return the result you
expect it to return after 100 calls?
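
With a mock, both are trivial to script (a sketch, invented names):

    #include <new>

    struct BInterface {
        virtual ~BInterface() = default;
        virtual int read() = 0;
    };

    // scripted mock: one synthetic out-of-memory failure, then a chosen
    // result after 100 calls -- no real resource exhaustion required
    struct ScriptedB : BInterface {
        int calls = 0;
        int read() override {
            ++calls;
            if (calls == 1)  throw std::bad_alloc();
            if (calls > 100) return 42;
            return 0;
        }
    };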

--
Ian Collins

Ian Collins

Jan 31, 2016, 6:36:39 PM
If a real B didn't behave as its tests define it, it wouldn't be a
tested B, would it?

--
Ian Collins

Ian Collins

Jan 31, 2016, 6:37:52 PM
Jerry Stuckle wrote:
> On 1/31/2016 1:42 PM, 4ndre4 wrote:
>> On 30/01/2016 22:28, Jerry Stuckle wrote:
>>
>> [...]
>>> It does matter, and it's not at all unusual for a class to have other
>>> classes as members. You start out testing single classes. Then you can
>>> test classes which include the first class.
>>
>> No, it doesn't matter in the least. Tests are normally executed in
>> random order. The unit tests for class A are supposed to test ONLY the
>> behaviour for class A and not assume any dependency. If any dependency
>> is needed, it should be mocked.
>>
>
> You cannot properly test a class if the behavior of dependent classes is
> mocked. You cannot ensure the dependent class's behavior will be the
> same when the real dependency is used.

If that is the case, your process is broken.

--
Ian Collins

Paavo Helde

Jan 31, 2016, 6:38:33 PM
On 1.02.2016 1:13, Jerry Stuckle wrote:
>
> Alf, what I mean is that A is tested with the real B. If A depends on
> B, you can't ensure A works properly if you don't have a working B to
> test with.

You can never ensure that something works 100% properly. At best you can
hope to have a good enough test suite to catch most of the bugs. If a
bug surfaces in B, then there is not really much difference if it is the
unit test for A or for B which catches the problem. To have separate
unit tests for A and B is just to reduce the cyclomatic complexity in
writing the tests; there is no deep philosophical reason to separate them.

In TDD, if you develop a feature provided by A, you write a test for the
feature using A. The test first fails. Then you start to add code to
make the test pass. During that you see that one of the reasons why the
test fails is that B is not yet written. So you write B, adapt A to use
it and make the test pass. There is nothing in TDD that says you have to
figure out the class dependencies beforehand and start with the lowest-level
classes. TDD does not care about such implementation details. To ensure
better test coverage, you may later add a separate test for B, but this
would then be a pure unit test, not a TDD test (as it does not cover a
feature, but a technical detail).

Cheers
Paavo

Ian Collins

Jan 31, 2016, 6:43:12 PM
Jerry Stuckle wrote:
>
> Tests should never be part of the build and should be performed by a
> separate group, with the tests designed based on the documentation.

*Unit* tests can (and should, if your process requires test-first coding)
be written and run by the developers. Integration, system and
acceptance (black box) testing may be performed by others if the team
includes testers.

--
Ian Collins

Jerry Stuckle

Jan 31, 2016, 6:43:14 PM
The same way you would in production - you create the scenario where the
exception would occur, and cause it. If it's impossible to test, then
it's impossible for the scenario to occur in production, anyway.

And if you expect a specific return after 100 calls, then you call it
100 times.

Anything that can happen in production can be tested for. Whether you
test for the extreme cases or not is up to you.

Ian Collins

Jan 31, 2016, 6:50:09 PM
So you consume all of the memory on your test box, what then?

> And if you expect a specific return after 100 calls, then you call it
> 100 times.

...and if each call would fetch and parse a file from the internet?

> Anything that can happen in production can be tested for. Whether you
> test for the extreme cases or not is up to you.

Anything can be tested if you can afford the cost. Unit tests generally
can't afford to trash the host they are running on or create a DOS
attack on a remote server...

--
Ian Collins

Jerry Stuckle

Jan 31, 2016, 6:50:11 PM
On 1/31/2016 6:38 PM, Paavo Helde wrote:
> On 1.02.2016 1:13, Jerry Stuckle wrote:
>>
>> Alf, What I mean is that A is tested with the real B. If A depends on
>> B, you can't ensure A works properly if you don't have a working B to
>> test with.
>
> You can never ensure that something works 100% properly. At best you can
> hope to have a good enough test suite to catch most of the bugs. If a
> bug surfaces in B, then there is not really much difference if it is the
> unit test for A or for B which catches the problem. To have separate
> unit tests for A and B is just to reduce the cyclomatic complexity in
> writing the tests, there is no deep philosophical reason to separate them.
>

That's very true. But there is a very good reason for testing
separately - once you get rid of the bugs in the independent class, you
can be pretty sure any bugs which show up are in the dependent class.
If you don't test the independent class first, you have no idea where
the problem lies. And as you get more and more classes involved the
problem becomes more difficult.

> In TDD, if you develop a feature provided by A, you write a test for the
> feature using A. The test first fails. Then you start to add code to
> make the test pass. During that you see that one of the reasons why the
> test fails is that B is not yet written. So you write B, adapt A to use
> it and make the test pass. There is nothing in TDD that says you have to
> figure out the class dependencies beforehand and start with lowest level
> classes. TDD does not care about such implementation details. To ensure
> better test coverage, you may later add a separate test for B, but this
> would then be a pure unit test, not a TDD test (as it does not cover a
> feature, but a technical detail).
>
> Cheers
> Paavo
>

The design will define the class dependencies. And every time you have
to rewrite A to adapt it to B, you are wasting programmer time, test
group time, and potentially inserting more bugs.

A proper design means the independent classes can be written first, and
the dependent ones later. This eliminates the need for programmers to
rewrite dependent code.

Jerry Stuckle

Jan 31, 2016, 6:51:05 PM
On 1/31/2016 6:36 PM, Ian Collins wrote:
> Jerry Stuckle wrote:
>> On 1/31/2016 1:14 AM, Ian Collins wrote:
>>> Jerry Stuckle wrote:
>>>> On 1/30/2016 6:09 PM, Ian Collins wrote:
>>>>> Jerry Stuckle wrote:
>>>>>> On 1/30/2016 5:33 PM, Ian Collins wrote:
>>>>>>> Jerry Stuckle wrote:
>>>>>>>> On 1/30/2016 1:16 PM, 4ndre4 wrote:
>>>>>>>>> On 30/01/2016 18:09, Jerry Stuckle wrote:
>>>>>>
>>>>>> But a mock of B may or may not behave the same.
>>>>>
>>>>> A mock of B will behave exactly as the tests define it.
>>>>
>>>> Which may or may not be the actual behavior of the class.
>>>
>>> Which would be pretty f'ing useless.
>>>
>>
>> Which is exactly why you wouldn't be using a mock of B. You'd use the
>> real, *tested* B.
>
> If a real B didn't behave as it's tests define it, it wouldn't be a
> tested B, would it?
>

No. But a mock B wouldn't be used, because it doesn't necessarily
behave the same as the real B. If it did, it would be a real B.

Jerry Stuckle

Jan 31, 2016, 6:54:04 PM
If you think otherwise, then your process is broken.

Say you get the dependent class working with a mock independent class.
Now you create the independent class and test it. The dependent class
now fails tests because the mock independent class didn't perform the
proper actions.

So now you have to go back and rewrite the dependent class to work with
the real independent class instead of the mock one. You have now wasted
programmer and test group time by having to rewrite "working" code, and
introduced the possibility of even more bugs.

Jerry Stuckle

Jan 31, 2016, 6:55:47 PM
Not a chance. Never in any of the major projects I've been involved in
have any tests been performed by the developers. That's where the bugs
come in, because the developer of a class may not have the same
understanding of what the class should do (which is more common than you
would think).

Developers will tend to write tests to the code. Test groups will write
tests to the documentation. Two entirely different things.

Ian Collins

Jan 31, 2016, 7:03:17 PM
Jerry Stuckle wrote:
>
> Say you get the dependent class working with a mock independent class.
> Now you create the independent class and test it. The dependent class
> now fails tests because the mock independent class didn't perform the
> proper actions.

So you didn't implement your mock correctly, go back and fix it.

> So now you have to go back and rewrite the dependent class to work with
> the real independent class instead of the mock one. You have now wasted
> programmer and test group time by having to rewrite "working" code, and
> introduced the possibility of even more bugs.

You have wasted time implementing a mock that didn't do what it said on
the tin.

--
Ian Collins

Ian Collins

Jan 31, 2016, 7:06:59 PM
Jerry Stuckle wrote:
> On 1/31/2016 6:42 PM, Ian Collins wrote:
>> Jerry Stuckle wrote:
>>>
>>> Tests should never be part of the build and should be performed by a
>>> separate group, with the tests designed based on the documentation.
>>
>> *Unit* tests can (and should, if your process requires test-first coding)
>> be written and run by the developers. Integration, system and
>> acceptance (black box) testing may be performed by others if the team
>> includes testers.
>>
>
> Not a chance. Never in any of the major projects I've been involved in
> have any tests been performed by the developers.

There you go, on every project I've worked on in the past decade and a
bit we have been using TDD, so all of the unit tests have been written
by the developers. All of the system and acceptance tests have been
written by our testers.

> That's where the bugs
> come in, because the developer of a class may not have the same
> understanding of what the class should do (which is more common than you
> would think).
>
> Developers will tend to write tests to the code. Test groups will write
> tests to the documentation. Two entirely different things.

Which is why we write the tests first.

--
Ian Collins

Alf P. Steinbach

Jan 31, 2016, 8:55:12 PM
On 2/1/2016 12:49 AM, Ian Collins wrote:
> Jerry Stuckle wrote:
>>
>> The same way you would in production - you create the scenario where the
>> exception would occur, and cause it. If it's impossible to test, then
>> it's impossible for the scenario to occur in production, anyway.
>
> So you consume all of the memory on your test box, what then?

Well, this was called "stress testing" some 15 years ago.

Checking… Yep, still.
https://en.wikipedia.org/wiki/Stress_testing_%28software%29

But yes, I agree, it would be impractical to ONLY wait for a rare but
important condition to occur, so as to observe it. Better to force it. I
guess that's the same as with an engine: the parts have to fit, but one
doesn't use the actual fit as the primary test, because that would be
impractical. So one measures. And only if the measurements say OK does
one then assemble the thing and test the complete reality.

Yes?


Cheers,

- Alf

Jerry Stuckle

Jan 31, 2016, 9:50:25 PM
Then you test.

>> And if you expect a specific return after 100 calls, then you call it
>> 100 times.
>
> ..and if each call would fetch and parse a file from the internet?
>

Then it fetches and parses a file from the internet.

>> Anything that can happen in production can be tested for. Whether you
>> test for the extreme cases or not is up to you.
>
> Anything can be tested if you can afford the cost. Unit tests generally
> can't afford to trash the host they are running on or create a DOS
> attack on a remote server...
>

If a unit test trashes the host or creates a DOS attack on a remote
server, then the test is incorrect or the class fails. If it can happen
during testing, it can happen during production.

Jerry Stuckle

unread,
Jan 31, 2016, 9:52:17 PM1/31/16
to
On 1/31/2016 7:03 PM, Ian Collins wrote:
> Jerry Stuckle wrote:
>>
>> Say you get the dependent class working with a mock independent class.
>> Now you create the independent class and test it. The dependent class
>> now fails tests because the mock independent class didn't perform the
>> proper actions.
>
> So you didn't implement your mock correctly, go back and fix it.
>

And how do you know if it is implemented correctly if it isn't the full
class?

>> So now you have to go back and rewrite the dependent class to work with
>> the real independent class instead of the mock one. You have now wasted
>> programmer and test group time by having to rewrite "working" code, and
>> introduced the possibility of even more bugs.
>
> You have wasted time implementing a mock that didn't do what it said on
> the tin.
>

No, you have wasted time implementing a mock which doesn't perform the
same as the real unit.

Tell me - how is your mock class going to do the same work as the real
class if it isn't the real class? It's a lot more than just returning a
value from a method call.

Jerry Stuckle

unread,
Jan 31, 2016, 9:53:49 PM1/31/16
to
On 1/31/2016 7:06 PM, Ian Collins wrote:
> Jerry Stuckle wrote:
>> On 1/31/2016 6:42 PM, Ian Collins wrote:
>>> Jerry Stuckle wrote:
>>>>
>>>> Tests should never be part of the build and should be performed by a
>>>> separate group, with the tests designed based on the documentation.
>>>
>>> *Unit* test can (and should if your process requires test first coding)
>>> be written and run by the developers. Integration, system and
>>> acceptance (black box) testing may be performed by others if the team
>>> include testers.
>>>
>>
>> Not a chance. Never in any of the major projects I've been involved in
>> have any tests been performed by the developers.
>
> There you go, on every project I've worked on in the past decade and a
> bit we have been using TDD, so all of the unit tests have been written
> by the developers. All of the system and acceptance tests have been
> written by our testers.
>

Too bad. I'm sure you had a fair amount of bugs in the code.

>> That's where the bugs
>> come in, because the developer of a class may not have the same
>> understanding of what the class should do (which is more common than you
>> would think).
>>
>> Developers will tend to write tests to the code. Test groups will write
>> tests to the documentation. Two entirely different things.
>
> Which is why we write the tests first.
>

Which is exactly why developers do not write the tests.

Ian Collins

unread,
Jan 31, 2016, 11:04:32 PM1/31/16
to
Jerry Stuckle wrote:
> On 1/31/2016 6:49 PM, Ian Collins wrote:
>> Jerry Stuckle wrote:
>>> On 1/31/2016 6:34 PM, Ian Collins wrote:
>>>>
>>>> So how do you get one of B's methods to throw an out of memory
>>>> exception? Or a <you name it> exception? Or return the result you
>>>> expect it to return after 100 calls?
>>>>
>>>
>>> The same way you would in production - you create the scenario where the
>>> exception would occur, and cause it. If it's impossible to test, then
>>> it's impossible for the scenario to occur in production, anyway.
>>
>> So you consume all of the memory on your test box, what then?
>>
>
> Then you test.

On a shared development system? Yeah right.

>>> And if you expect a specific return after 100 calls, then you call it
>>> 100 times.
>>
>> ..and if each call would fetch and parse a file from the internet?
>>
>
> Then it fetches and parses a file from the internet.

Every one of the dozens or hundreds of times a day you run your unit tests?

>>> Anything that can happen in production can be tested for. Whether you
>>> test for the extreme cases or not is up to you.
>>
>> Anything can be tested if you can afford the cost. Unit tests generally
>> can't afford to trash the host they are running on or create a DOS
>> attack on a remote server...
>>
>
> If a unit test trashes the host or creates a DOS attack on a remote
> server, then the test is incorrect or the class fails. If it can happen
> during testing, it can happen during production.

I don't think you understand the concept of layered testing. The topic
of this discussion has been unit testing, not acceptance or system
testing. The code being tested may not even be targeted at the system
you are developing on (especially if you are unit testing embedded
code). The developers may not even have access to the devices that are
feeding data into the code under test. You have to mock.

--
Ian Collins

Ian Collins

unread,
Jan 31, 2016, 11:07:01 PM1/31/16
to
Alf P. Steinbach wrote:
> On 2/1/2016 12:49 AM, Ian Collins wrote:
>> Jerry Stuckle wrote:
>>>
>>> The same way you would in production - you create the scenario where the
>>> exception would occur, and cause it. If it's impossible to test, then
>>> it's impossible for the scenario to occur in production, anyway.
>>
>> So you consume all of the memory on your test box, what then?
>
> Well this was called “stress testing”, some 15 years ago.
>
> Checking… Yep, still.
> https://en.wikipedia.org/wiki/Stress_testing_%28software%29

Exactly: stress testing != unit testing!

> But yes, I agree, it would be impractical to ONLY wait for a rare but
> important condition to occur, so as to observe it. Better to force it. I
> guess that's the same as with an engine: the parts have to fit, but one
> doesn't use actual fitness as primary testing, because that would be
> impractical. So one measures. And only if the measurements say OK, does
> one then assemble the thing and test the complete reality.
>
> Yes?

Yes. If the engine is an embedded application, the unit tests are the
measurements.

--
Ian Collins

Ian Collins

unread,
Jan 31, 2016, 11:12:08 PM1/31/16
to
Jerry Stuckle wrote:
> On 1/31/2016 7:03 PM, Ian Collins wrote:
>> Jerry Stuckle wrote:
>>>
>>> Say you get the dependent class working with a mock independent class.
>>> Now you create the independent class and test it. The dependent class
>>> now fails tests because the mock independent class didn't perform the
>>> proper actions.
>>
>> So you didn't implement your mock correctly, go back and fix it.
>>
>
> And how do you know if it is implemented correctly if it isn't the full
> class?

Er, you have a specification?

Don't forget a mock is often a simple (even machine generated) piece of
code with behavior driven by the tests.

> Tell me - how is your mock class going to do the same work as the real
> class if it isn't the real class? It's a lot more than just returning a
> value from a method call.

It isn't. It is going to do what the tests require it to do. If the
real code parses a file and returns an object, the mock may simply
return a predefined (by the tests) object or sequence of objects. Or if
the tests require the mock to throw an exception for a particular test,
that is all it will do.
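
As a concrete sketch (names invented for illustration; assume the real
class parses a file into Record objects), such a mock might be no more
than:

  #include <new>
  #include <queue>
  #include <stdexcept>
  #include <string>

  struct Record { std::string key; int value; };

  // Interface shared by the real parser and the mock.
  struct Parser {
      virtual ~Parser() = default;
      virtual Record next() = 0;
  };

  // Mock whose behaviour is scripted entirely by the tests.
  struct MockParser : Parser {
      std::queue<Record> canned;   // objects the test predefines
      bool throwBadAlloc = false;  // or fail on demand

      Record next() override {
          if (throwBadAlloc) throw std::bad_alloc();
          if (canned.empty())
              throw std::logic_error("test queued no records");
          Record r = canned.front();
          canned.pop();
          return r;
      }
  };

A test loads canned (or sets throwBadAlloc) and hands the mock to the
code under test; no real file is ever opened.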

--
Ian Collins

Ian Collins

unread,
Jan 31, 2016, 11:14:41 PM1/31/16
to
Jerry Stuckle wrote:
> On 1/31/2016 7:06 PM, Ian Collins wrote:
>> Jerry Stuckle wrote:
>>> On 1/31/2016 6:42 PM, Ian Collins wrote:
>>>> Jerry Stuckle wrote:
>>>>>
>>>>> Tests should never be part of the build and should be performed by a
>>>>> separate group, with the tests designed based on the documentation.
>>>>
>>>> *Unit* test can (and should if your process requires test first coding)
>>>> be written and run by the developers. Integration, system and
>>>> acceptance (black box) testing may be performed by others if the team
>>>> include testers.
>>>>
>>>
>>> Not a chance. Never in any of the major projects I've been involved in
>>> have any tests been performed by the developers.
>>
>> There you go, on every project I've worked on in the past decade and a
>> bit we have been using TDD, so all of the unit tests have been written
>> by the developers. All of the system and acceptance tests have been
>> written by our testers.
>>
>
> Too bad. I'm sure you had a fair amount of bugs in the code.

Barely any: manufacturers of embedded systems don't like product recalls,
which is why we had so many tests. If the process hadn't been
effective, it wouldn't have been repeated for so many years.

--
Ian Collins

Jerry Stuckle

unread,
Jan 31, 2016, 11:16:34 PM1/31/16
to
Manufacturers of operating systems and heavily used applications such as
IBM (where I first learned how to develop and test) don't like product
recalls, either - and are quite stringent in their testing.

Neither are any of the other companies I have consulted for - both as a
developer and a project manager.

Jerry Stuckle

unread,
Jan 31, 2016, 11:18:09 PM1/31/16
to
On 1/31/2016 11:06 PM, Ian Collins wrote:
> Alf P. Steinbach wrote:
>> On 2/1/2016 12:49 AM, Ian Collins wrote:
>>> Jerry Stuckle wrote:
>>>>
>>>> The same way you would in production - you create the scenario where
>>>> the
>>>> exception would occur, and cause it. If it's impossible to test, then
>>>> it's impossible for the scenario to occur in production, anyway.
>>>
>>> So you consume all of the memory on your test box, what then?
>>
>> Well this was called “stress testing”, some 15 years ago.
>>
>> Checking… Yep, still.
>> https://en.wikipedia.org/wiki/Stress_testing_%28software%29
>
> Exactly: stress testing != unit testing!
>

If you need to test a class for out of memory processing, you need to
stress test it.

>> But yes, I agree, it would be impractical to ONLY wait for a rare but
>> important condition to occur, so as to observe it. Better to force it. I
>> guess that's the same as with an engine: the parts have to fit, but one
>> doesn't use actual fitness as primary testing, because that would be
>> impractical. So one measures. And only if the measurements say OK, does
>> one then assemble the thing and test the complete reality.
>>
>> Yes?
>
> Yes. If the engine is an embedded application, the unit tests are the
> measurements.
>

Who said anything about embedded applications? This is the first time
it has been mentioned in the whole thread. Are you trying to change the
rules again? I see you've tried multiple times so far.

Jerry Stuckle

unread,
Jan 31, 2016, 11:21:39 PM1/31/16
to
On 1/31/2016 11:04 PM, Ian Collins wrote:
> Jerry Stuckle wrote:
>> On 1/31/2016 6:49 PM, Ian Collins wrote:
>>> Jerry Stuckle wrote:
>>>> On 1/31/2016 6:34 PM, Ian Collins wrote:
>>>>>
>>>>> So how do you get one of B's methods to throw an out of memory
>>>>> exception? Or a <you name it> exception? Or return the result you
>>>>> expect it to return after 100 calls?
>>>>>
>>>>
>>>> The same way you would in production - you create the scenario where
>>>> the
>>>> exception would occur, and cause it. If it's impossible to test, then
>>>> it's impossible for the scenario to occur in production, anyway.
>>>
>>> So you consume all of the memory on your test box, what then?
>>>
>>
>> Then you test.
>
> On a shared development system? Yeah right.
>

You test on a test system. Like all testing should be done.

>>>> And if you expect a specific return after 100 calls, then you call it
>>>> 100 times.
>>>
>>> ..and if each call would fetch and parse a file from the internet?
>>>
>>
>> Then it fetches and parses a file from the internet.
>
> Every one of the dozens or hundreds of times a day you run your unit tests?
>

If necessary, yes.

>>>> Anything that can happen in production can be tested for. Whether you
>>>> test for the extreme cases or not is up to you.
>>>
>>> Anything can be tested if you can afford the cost. Unit tests generally
>>> can't afford to trash the host they are running on or create a DOS
>>> attack on a remote server...
>>>
>>
>> If a unit test trashes the host or creates a DOS attack on a remote
>> server, then the test is incorrect or the class fails. If it can happen
>> during testing, it can happen during production.
>
> I don't think you understand the concept of layered testing. The topic
> of this discussion has been unit testing, not acceptance or system
> testing. The code being tested may not even be targeted at the system
> you are developing on (especially if you are unit testing embedded
> code). The developers may not even have access to the devices that are
> feeding data into the code under test. You have to mock.
>

Oh, I understand it, all right. I first encountered it as a developer
for IBM in around 1982.

But once again you're trying to change the topic by bringing up embedded
systems. Nothing about embedded systems has been mentioned until you
brought it up. But I've seen you're good at trying to change the topic.

However, the same is true with embedded systems - at least all of the
ones I've worked on as a consultant in the last 25 or so years.

And mocking a device is NOT the same as mocking a class other code is
dependent on. Do you understand the difference?

Jerry Stuckle

unread,
Jan 31, 2016, 11:25:27 PM1/31/16
to
On 1/31/2016 11:11 PM, Ian Collins wrote:
> Jerry Stuckle wrote:
>> On 1/31/2016 7:03 PM, Ian Collins wrote:
>>> Jerry Stuckle wrote:
>>>>
>>>> Say you get the dependent class working with a mock independent class.
>>>> Now you create the independent class and test it. The dependent class
>>>> now fails tests because the mock independent class didn't perform the
>>>> proper actions.
>>>
>>> So you didn't implement your mock correctly, go back and fix it.
>>>
>>
>> And how do you know if it is implemented correctly if it isn't the full
>> class?
>
> Er, you have a specification?
>
> Don't forget a mock is often a simple (even machine generated) piece of
> code with behavior driven by the tests.
>

If the class completely follows the specifications, then it is not a
mock class. If it does not completely follow the specifications, you
cannot thoroughly test dependent classes.

>> Tell me - how is your mock class going to do the same work as the real
>> class if it isn't the real class? It's a lot more than just returning a
>> value from a method call.
>
> It isn't. It is going to do what the tests require it to do. If the
> real code parses a file and returns an object, the mock may simply
> return a predefined (by the tests) object or sequence of objects. Or if
> the tests require the mock to throw an exception for a particular test,
> that is all it will do.
>

But if the class has to process a file, then you cannot test it properly
unless you process that file. Returning predefined objects or sequence
of objects does not properly process the input file - and return the
appropriate information. So all you're doing is testing a subset of the
possible results. Such a process is always prone to errors.

Proper testing would include a real class with real input, both good and
bad, and the appropriate processing for all of it.

Ian Collins

unread,
Jan 31, 2016, 11:38:33 PM1/31/16
to
I don't think you do; otherwise you would realise the difference between
unit and system testing. You most certainly do not want your unit tests
to stress test your development system, or do anything that consumes
significant time or resources. Unit tests should run as quickly as
possible; if they take too long to run, you will back off running them.
I develop with "make" = "make test", so my unit tests are run for every
code change.

> But once again you're trying to change the topic by bringing up embedded
> systems. Nothing about embedded systems has been mentioned until you
> brought it up. But I've seen you're good at trying to change the topic.

That's a poor strawman, there was no change in topic.

I could just as easily have substituted any piece of software. Note I said
"especially if you are unit testing embedded code" not "only if you are
unit testing embedded code". Embedded code is simply an extreme example
of something you wouldn't unit test on target.

> However, the same is true with embedded systems - at least all of the
> ones I've worked on as a consultant in the last 25 or so years.

> And mocking a device is NOT the same as mocking a class other code is
> dependent on. Do you understand the difference?

From the perspective of software other than its driver, a device is just
another class to the rest of the code. Device driver tests use mock
hardware; the code that uses the driver class mocks the driver class, and
so on up the stack.
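
A sketch of that layering (interfaces invented for illustration, not
from any particular project):

  // The driver interface the upper layers see.
  struct Driver {
      virtual ~Driver() = default;
      virtual int readTemperature() = 0;
  };

  // Mock driver used when unit testing the layer above.
  struct MockDriver : Driver {
      int canned = 0;  // value scripted by the test
      int readTemperature() override { return canned; }
  };

  // Code under test, one layer up the stack.
  struct Thermostat {
      explicit Thermostat(Driver& d) : drv(d) {}
      bool tooHot() const { return drv.readTemperature() > 30; }
      Driver& drv;
  };

A Thermostat test sets canned to 31 and asserts on tooHot(); the
driver's own tests, one layer down, would mock the hardware registers
in the same way.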

--
Ian Collins

Ian Collins

unread,
Jan 31, 2016, 11:48:27 PM1/31/16
to
Jerry Stuckle wrote:
> On 1/31/2016 11:11 PM, Ian Collins wrote:
>> Jerry Stuckle wrote:
>>> On 1/31/2016 7:03 PM, Ian Collins wrote:
>>>> Jerry Stuckle wrote:
>>>>>
>>>>> Say you get the dependent class working with a mock independent class.
>>>>> Now you create the independent class and test it. The dependent class
>>>>> now fails tests because the mock independent class didn't perform the
>>>>> proper actions.
>>>>
>>>> So you didn't implement your mock correctly, go back and fix it.
>>>>
>>>
>>> And how do you know if it is implemented correctly if it isn't the full
>>> class?
>>
>> Er, you have a specification?
>>
>> Don't forget a mock is often a simple (even machine generated) piece of
>> code with behavior driven by the tests.
>>
>
> If the class completely follows the specifications, then it is not a
> mock class. If it does not completely follow the specifications, you
> cannot thoroughly test dependent classes.

You can (and thousands of us do) because it *pretends* to implement the
specification. I suggest you do some research on mock objects and how
they are used.

>>> Tell me - how is your mock class going to do the same work as the real
>>> class if it isn't the real class? It's a lot more than just returning a
>>> value from a method call.
>>
>> It isn't. It is going to do what the tests require it to do. If the
>> real code parses a file and returns an object, the mock may simply
>> return a predefined (by the tests) object or sequence of objects. Or if
>> the tests require the mock to throw an exception for a particular test,
>> that is all it will do.
>>
>
> But if the class has to process a file, then you cannot test it properly
> unless you process that file.

Quite - because you aren't testing it! You are testing the code that
processes the resulting object.

> Returning predefined objects or sequence
> of objects does not properly process the input file - and return the
> appropriate information. So all you're doing is testing a subset of the
> possible results. Such a process is always prone to errors.

You've lost the plot, *we are not testing the file processing*. We are
testing the processing of the result of the file processing.

> Proper testing would include a real class with real input, both good and
> bad, and the appropriate processing for all of it.

Indeed it does, but not when unit testing something that has no interest
in how those files are processed.

I've said it before but it bears repeating: layered testing.

--
Ian Collins

Paavo Helde

unread,
Feb 1, 2016, 2:52:19 AM2/1/16
to
On 1.02.2016 1:50, Jerry Stuckle wrote:
> On 1/31/2016 6:38 PM, Paavo Helde wrote:
>> On 1.02.2016 1:13, Jerry Stuckle wrote:
>>>
>>> Alf, What I mean is that A is tested with the real B. If A depends on
>>> B, you can't ensure A works properly if you don't have a working B to
>>> test with.
>>
>> You can never ensure that something works 100% properly. At best you can
>> hope to have a good enough test suite to catch most of the bugs. If a
>> bug surfaces in B, then there is not really much difference if it is the
>> unit test for A or for B which catches the problem. To have separate
>> unit tests for A and B is just to reduce the cyclomatic complexity in
>> writing the tests, there is no deep philosophical reason to separate them.
>>
>
> That's very true. But there is a very good reason for testing
> separately - once you get rid of the bugs in the independent class, you
> can be pretty sure any bugs which show up are in the dependent class.
> If you don't test the independent class first, you have no idea where
> the problem lies. And as you get more and more classes involved the
> problem becomes more difficult.

The debugger will show me where the problem lies. But you are right, if
there are separate tests it would be easier to pinpoint the problem. For
example, if after some unrelated change 10 tests start to fail, among
them the tests for A and B, then it makes sense to start with B and its
test as B is the simplest class and likely the core of the issue.

>> In TDD, if you develop a feature provided by A, you write a test for the
>> feature using A. The test first fails. Then you start to add code to
>> make the test pass. During that you see that one of the reasons why the
>> test fails is that B is not yet written. So you write B, adapt A to use
>> it and make the test pass. There is nothing in TDD that says you have to
>> figure out the class dependencies beforehand and start with lowest level
>> classes. TDD does not care about such implementation details. To ensure
>> better test coverage, you may later add a separate test for B, but this
>> would then be a pure unit test, not a TDD test (as it does not cover a
>> feature, but a technical detail).
>
> The design will define the class dependencies. And every time you have
> to rewrite A to adapt it to B, you are wasting programmer time, test
> group time, and potentially inserting more bugs.

The code is changing every day because of the changing requirements,
changing priorities, changing environment, etc. You cannot design
everything up-front.

The extensive automated test suites are there exactly for allowing to
change, adapt and refactor the code base without the fear of introducing
new bugs. Ideally, the codebase would become wax instead of layers of
petrified sediments. It is very easy to reshape wax.

>
> A proper design means the independent classes can be written first, and
> the dependent ones later. This eliminates the need for programmers to
> rewrite dependent code.

Agreed - if you can get the design 100% right before starting the coding
it would be perfect. However, in my experience this rarely happens, and
even then there would be later changes because of unforeseeable factors.



David Brown

unread,
Feb 1, 2016, 5:32:03 AM2/1/16
to
On 01/02/16 00:55, Jerry Stuckle wrote:
> On 1/31/2016 6:42 PM, Ian Collins wrote:
>> Jerry Stuckle wrote:
>>>
>>> Tests should never be part of the build and should be performed by a
>>> separate group, with the tests designed based on the documentation.
>>
>> *Unit* test can (and should if your process requires test first coding)
>> be written and run by the developers. Integration, system and
>> acceptance (black box) testing may be performed by others if the team
>> include testers.
>>
>
> Not a chance. Never in any of the major projects I've been involved in
> have any tests been performed by the developers.

Our systems and integration test people would be pretty annoyed if our
developers handed them code that had not been tested. Do you think
developers just write some code, see that it compiles without warnings,
and say "give that a shot and see if it works" ? Before passing code on
to the next layer of testers, the developers will do everything
practically possible (using TDD, unit tests, automated test benches,
simulations, whatever) to ensure there are no known bugs and their tests
are as complete as possible.

Öö Tiib

unread,
Feb 1, 2016, 6:38:03 AM2/1/16
to
On Sunday, 31 January 2016 23:44:29 UTC+2, Ian Collins wrote:
> Doesn't anyone have colleagues any more? Sorry, I just hate the term
> "coworkers"!

No! Colleagues still win 200:19
http://www.googlefight.com/coworker-vs-colleague.php

The term I typically use translates stem-by-stem as "workfellow",
"workmate" or "workpartner". "Workmate" is the only one I have actually
heard said in English. Sorry if I sometimes translate it as "coworker".

Jerry Stuckle

unread,
Feb 1, 2016, 10:39:22 AM2/1/16
to
I realize the difference, all right. What you don't recognize is the
importance of independent testing.

No, you don't want unit tests to stress test your development system.
That's why you have a test group and test system(s). Additionally, the
test group can test under many more different conditions.

Your example of an out of memory exception is good. The developer is
responsible for making something work and may not even consider that
possibility, not code for it and not test it. But the test group is
responsible for trying to break things, and will look for ways to make
it fail. They will often test for things the developer never considered.

>> But once again you're trying to change the topic by bringing up embedded
>> systems. Nothing about embedded systems has been mentioned until you
>> brought it up. But I've seen you're good at trying to change the topic.
>
> That's a poor strawman, there was no change in topic.
>

I'm not the one trying to change the topic. No one mentioned embedded
systems until you did.

> I could just as easily have substituted any piece of software. Note I said
> "especially if you are unit testing embedded code" not "only if you are
> unit testing embedded code". Embedded code is simply an extreme example
> of something you wouldn't unit test on target.
>

Embedded code is something I definitely WOULD test on target. And it
would be the responsibility of the test group to do the testing.

>> However, the same is true with embedded systems - at least all of the
>> ones I've worked on as a consultant in the last 25 or so years.
>
>> And mocking a device is NOT the same as mocking a class other code is
>> dependent on. Do you understand the difference?
>
> From the perspective of software other than its driver, a device is just
> another class to the rest of the code. Device driver tests use mock
> hardware; the code that uses the driver class mocks the driver class,
> and so on up the stack.
>

A hardware device is nothing like a class. The device has specific
responses for specific commands, including error conditions. These are
well defined by the manufacturer. I write device drivers (working on
some code which includes one right now). Device driver tests use real
hardware.

I do mock the hardware driver when working on code using the device; for
instance, the current code I am working on is ARM based and fetches two
bytes from a specified sensor on request. This is easy to mock because
the interface is quite simple, and allows me to do some development on
my laptop. But testing is still done on the device, by the test group.
And the real device driver is written and tested on the device.
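
Such a mock can be as small as this sketch (names invented; not the
actual project code):

  #include <cstdint>
  #include <map>

  // The interface the rest of the code sees: two bytes per sensor.
  struct SensorBus {
      virtual ~SensorBus() = default;
      virtual std::uint16_t read(std::uint8_t sensorId) = 0;
  };

  // Laptop-side mock: returns whatever values the test installed.
  struct FakeSensorBus : SensorBus {
      std::map<std::uint8_t, std::uint16_t> values;
      std::uint16_t read(std::uint8_t sensorId) override {
          return values[sensorId];  // 0 if nothing installed
      }
  };

On target, a real implementation talks to the hardware; everything
above the interface compiles unchanged on the laptop.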

Jerry Stuckle

unread,
Feb 1, 2016, 10:50:44 AM2/1/16
to
Having to trace through multiple classes is a waste of good programmer
(or test group) time. The faster the problem can be identified and
fixed, the more productive the programmers are.

But unrelated changes should not affect other units. If they do, the
changes are not unrelated. You have to determine what the relationship
is. It might have nothing to do with B.

>>> In TDD, if you develop a feature provided by A, you write a test for the
>>> feature using A. The test first fails. Then you start to add code to
>>> make the test pass. During that you see that one of the reasons why the
>>> test fails is that B is not yet written. So you write B, adapt A to use
>>> it and make the test pass. There is nothing in TDD that says you have to
>>> figure out the class dependencies beforehand and start with lowest level
>>> classes. TDD does not care about such implementation details. To ensure
>>> better test coverage, you may later add a separate test for B, but this
>>> would then be a pure unit test, not a TDD test (as it does not cover a
>>> feature, but a technical detail).
>>
>> The design will define the class dependencies. And every time you have
>> to rewrite A to adapt it to B, you are wasting programmer time, test
>> group time, and potentially inserting more bugs.
>
> The code is changing every day because of the changing requirements,
> changing priorities, changing environment, etc. You cannot design
> everything up-front.
>

Yes, you can. And in fact, there is a huge advantage to doing it. When
the requirements change (and yes, they will), you can look at the design
documentation and know exactly what will be affected. You can then
concentrate on those areas. It means fewer bugs because you don't miss
something, and less wasted time because you aren't looking at things
which don't need to be changed.

> The extensive automated test suites are there exactly for allowing to
> change, adapt and refactor the code base without the fear of introducing
> new bugs. Ideally, the codebase would become wax instead of layers of
> petrified sediments. It is very easy to reshape wax.
>

But every time you change code, you introduce the possibility of one or
more errors.

Additionally, the design documentation provides the information for the
test group to create the tests, and when the documentation changes, the
test group knows what changes need to be made to the tests.

>>
>> A proper design means the independent classes can be written first, and
>> the dependent ones later. This eliminates the need for programmers to
>> rewrite dependent code.
>
> Agreed - if you can get the design 100% right before starting the coding
> it would be perfect. However, in my experience this rarely happens, and
> even then there would be later changes because of unforeseeable factors.
>
>

You can get the design right, but it takes practice. You have to know
the right questions to ask to get the detailed specifications necessary
to create the design. And more often than not, you have to train the
customer in how to define the specifications.

Jerry Stuckle

unread,
Feb 1, 2016, 10:59:16 AM2/1/16
to
On 1/31/2016 11:48 PM, Ian Collins wrote:
> Jerry Stuckle wrote:
>> On 1/31/2016 11:11 PM, Ian Collins wrote:
>>> Jerry Stuckle wrote:
>>>> On 1/31/2016 7:03 PM, Ian Collins wrote:
>>>>> Jerry Stuckle wrote:
>>>>>>
>>>>>> Say you get the dependent class working with a mock independent
>>>>>> class.
>>>>>> Now you create the independent class and test it. The dependent
>>>>>> class
>>>>>> now fails tests because the mock independent class didn't perform the
>>>>>> proper actions.
>>>>>
>>>>> So you didn't implement your mock correctly, go back and fix it.
>>>>>
>>>>
>>>> And how do you know if it is implemented correctly if it isn't the full
>>>> class?
>>>
>>> Er, you have a specification?
>>>
>>> Don't forget a mock is often a simple (even machine generated) piece of
>>> code with behavior driven by the tests.
>>>
>>
>> If the class completely follows the specifications, then it is not a
>> mock class. If it does not completely follow the specifications, you
>> cannot thoroughly test dependent classes.
>
> You can (and thousands of us do) because it *pretends* to implement the
> specification. I suggest you do some research on mock objects and how
> they are used.
>

I don't need to do any research on it. I've seen it tried. All it has
done is waste time and unnecessarily create bugs because the mock
classes cannot, by definition, completely follow the specification.

Your claim there are thousands of you doing it wrong is immaterial.

>>>> Tell me - how is your mock class going to do the same work as the real
>>>> class if it isn't the real class? It's a lot more than just
>>>> returning a
>>>> value from a method call.
>>>
>>> It isn't. It is going to do what the tests require it to do. If the
>>> real code parses a file and returns an object, the mock may simply
>>> return a predefined (by the tests) object or sequence of objects. Or if
>>> the tests require the mock to throw an exception for a particular test,
>>> that is all it will do.
>>>
>>
>> But if the class has to process a file, then you cannot test it properly
>> unless you process that file.
>
> Quite - because you aren't testing it! You are testing the code that
> processes the resulting object.
>

And you can't test the resulting object because a mock class never can
duplicate the behavior of the real class. It can only do some of it.
And that means potential problems are missed.

>> Returning predefined objects or sequence
>> of objects does not properly process the input file - and return the
>> appropriate information. So all you're doing is testing a subset of the
>> possible results. Such a process is always prone to errors.
>
> You've lost the plot, *we are not testing the file processing*. We are
> testing the processing of the result of the file processing.
>

And you can't be sure the output you get from the mock class is the same
as what real files will provide in all cases. Only in some cases. And
since other cases can't be tested, you leave the possibility of bugs
(unnecessarily).

Let's go back to your memory problem. What if processing the input file
causes an out of memory condition? Does your mock class emulate that?
Or what if data is missing, out of order, or not what was expected? Or
the file is truncated? Or any of a number of things a proper test suite
would check?

>> Proper testing would include a real class with real input, both good and
>> bad, and the appropriate processing for all of it.
>
> Indeed it does, but not unit testing something that has no interest in
> how those files are processed.
>
> I've said it before but it bears repeating: layered testing.
>

Yes, you have. Now if you'd only learn how to do it correctly.

Jerry Stuckle

unread,
Feb 1, 2016, 11:02:53 AM2/1/16
to
Too bad. The systems I've worked on pass code onto the test group quite
early and regularly. Developers' time is too valuable to be spent
testing code. That's not to say the developers never do any tests on
their code; just minimal - enough to see that it does what they
expected, anyway. Then the code goes to the test group, whose
responsibility is to make it fail.

Paavo Helde

unread,
Feb 1, 2016, 11:14:25 AM2/1/16
to
On 1.02.2016 17:50, Jerry Stuckle wrote:
>
> But every time you change code, you introduce the possibility of one or
> more errors.

If you have a proper test suite it will catch all such errors.

Note that if you are afraid of changing code because it might break
something, then you cannot do code refactoring and code redesign. This
means that given enough time the codebase will turn unmanageable,
regardless of how good the design and specifications are.

Geoff

unread,
Feb 1, 2016, 1:03:54 PM2/1/16
to
On Sun, 31 Jan 2016 23:16:32 -0500, Jerry Stuckle
<jstu...@attglobal.net> wrote:

>Manufacturers of operating systems and heavily used applications such as
>IBM (where I first learned how to develop and test) don't like product
>recalls, either - and are quite stringent in their testing.
>
>Neither are any of the other companies I have consulted for - both as a
>developer and a project manager.

That must be why IBM software is so reliable and never needs bug
fixes. :)

Öö Tiib

unread,
Feb 1, 2016, 1:54:24 PM2/1/16
to
Somehow when IBM and software are in same sentence then only horror comes
to mind ... Rational Rose, ClearQuest, Lotus Notes, WebSphere ... crazy
stuff done with apparently zero self-awareness whatsoever.

Ian Collins

unread,
Feb 1, 2016, 2:31:05 PM2/1/16
to
So you claim, but you continue (or is it choose?) to display a basic
ignorance of the practice.

> No, you don't want unit tests to stress test your development system.
> That's why you have a test group and test system(s). Additionally, the
> test group can test under many more different conditions.

Where did I say otherwise? Exact quote please.

> Your example of an out of memory exception is good. The developer is
> responsible for making something work and may not even consider that
> possibility, not code for it and not test it. But the test group is
> responsible for trying to break things, and will look for ways to make
> it fail. They will often test for things the developer never considered.

Where did I say otherwise? Exact quote please.

>>> But once again you're trying to change the topic by bringing up embedded
>>> systems. Nothing about embedded systems has been mentioned until you
>>> brought it up. But I've seen you're good at trying to change the topic.
>>
>> That's a poor strawman, there was no change in topic.
>>
>
> I'm not the one trying to change the topic. No one mentioned embedded
> systems until you did.

Please explain how mentioning embedded systems changed the topic.

>> I could just as easily have substituted any piece of software. Note I said
>> "especially if you are unit testing embedded code" not "only if you are
>> unit testing embedded code". Embedded code is simply an extreme example
>> of something you wouldn't unit test on target.
>>
>
> Embedded code is something I definitely WOULD test on target. And it
> would be the responsibility of the test group to do the testing.

Here again you choose to ignore the fact that there is more than one
level of testing. It's clear that you didn't do the homework.

>>> However, the same is true with embedded systems - at least all of the
>>> ones I've worked on as a consultant in the last 25 or so years.
>>
>>> And mocking a device is NOT the same as mocking a class other code is
>>> dependent on. Do you understand the difference?
>>
>> From the perspective of software other than its driver, a device is
>> just another class to the rest of the code. Device driver tests use
>> mock hardware; the code that uses the driver class mocks the driver
>> class, and so on up the stack.
>
> A hardware device is nothing like a class.

You appear to be having trouble with your reading comprehension, so I'll
say it again: "From the perspective of software other than its driver, a
device is just another class to the rest of the code". There, is that
clear now?

> The device has specific
> responses for specific commands, including error conditions.

Do you write code that has indeterminate responses for specific
requests? I hope not.

> These are
> well defined by the manufacturer.

Do you write code without a specification? No, you have requirements
that are well defined by the customer.

> I write device drivers (working on
> some code which includes one right now). Device driver tests use real
> hardware.
>
> I do mock the hardware driver when working on code using the device; for
> instance, the current code I am working on is ARM based and fetches two
> bytes from a specified sensor on request. This is easy to mock because
> the interface is quite simple, and allows me to do some development on
> my laptop.

Ah, so it is simply another class to the rest of the code. You appear to
be contradicting yourself....

> But testing is still done on the device, by the test group.
> And the real device driver is written and tested on the device.
>

...as well as continuing the 80s practice of lobbing code over the wall
to someone else to test.

--
Ian Collins

Ian Collins

unread,
Feb 1, 2016, 2:44:32 PM2/1/16
to
Jerry Stuckle wrote:
> On 1/31/2016 11:48 PM, Ian Collins wrote:
>> Jerry Stuckle wrote:
>>> On 1/31/2016 11:11 PM, Ian Collins wrote:
>>>>
>>>> Don't forget a mock is often a simple (even machine generated) piece of
>>>> code with behavior driven by the tests.
>>>>
>>>
>>> If the class completely follows the specifications, then it is not a
>>> mock class. If it does not completely follow the specifications, you
>>> cannot thoroughly test dependent classes.
>>
>> You can (and thousands of us do) because it *pretends* to implement the
>> specification. I suggest you do some research on mock objects and how
>> they are used.
>>
>
> I don't need to do any research on it. I've seen it tried. All it has
> done is waste time and unnecessarily create bugs because the mock
> classes cannot, by definition, completely follow the specification.

So you've tried it, done it wrong and given up. Too bad.

>>> But if the class has to process a file, then you cannot test it properly
>>> unless you process that file.
>>
>> Quite - because you aren't testing it! You are testing the code that
>> processes the resulting object.
>>
>
> And you can't test the resulting object because a mock class never can
> duplicate the behavior of the real class. It can only do some of it.
> And that means potential problems are missed.

I can't? Bugger, what have I been doing all these years? Of course I can.

>>> Returning predefined objects or sequence
>>> of objects does not properly process the input file - and return the
>>> appropriate information. So all you're doing is testing a subset of the
>>> possible results. Such a process is always prone to errors.
>>
>> You've lost the plot, *we are not testing the file processing*. We are
>> testing the processing of the result of the file processing.
>>
>
> And you can't be sure the output you get from the mock class is the same
> as what real files will provide in all cases. Only in some cases. And
> since other cases can't be tested, you leave the possibility of bugs
> (unnecessarily).

Can't I? The transformation will be well defined and will be tested in
the real class's tests. Ever heard of a contract?

> Let's go back to your memory problem. What if processing the input file
> causes an out of memory condition? Does your mock class emulate that?

Yes, I would simply write something like:

test::FOO_decode::willThrow( std::bad_alloc() );

before calling the code that will call the decoder.
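
A hook like that can be as little as a scripted action stored in the
(hand-written or generated) mock. A sketch of the idea, with invented
names; willThrow above is an illustration, not a standard API:

  #include <functional>
  #include <new>
  #include <utility>

  namespace test {
  struct FOO_decode {
      static std::function<void()> pending;  // armed by the test

      template <class E>
      static void willThrow(E e) {           // e.g. std::bad_alloc()
          pending = [e] { throw e; };
      }
  };
  std::function<void()> FOO_decode::pending;
  }

  // The mock's decode entry point fires the scripted failure once:
  int FOO_decode_mock() {
      if (test::FOO_decode::pending) {
          auto fire = std::move(test::FOO_decode::pending);
          test::FOO_decode::pending = nullptr;
          fire();                            // throws bad_alloc here
      }
      return 0;                              // otherwise a canned result
  }

The test arms the hook, the mock fires it exactly once, and subsequent
calls fall back to the canned behaviour.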

> Or what if data is missing, out of order, or not what was expected? Or
> the file is truncated? Or any of a number of things a proper test suite
> would check?

It will - in the tests for the real object.

>>> Proper testing would include a real class with real input, both good and
>>> bad, and the appropriate processing for all of it.
>>
>> Indeed it does, but not when unit testing something that has no
>> interest in how those files are processed.
>>
>> I've said it before but it bears repeating: layered testing.
>>
>
> Yes, you have. Now if you'd only learn how to do it correctly.

I'm lucky in that I already do. You don't appear to have got beyond
understanding what it is.

--
Ian Collins

Ian Collins

unread,
Feb 1, 2016, 2:46:13 PM2/1/16
to
Jerry Stuckle wrote:
>
> Manufacturers of operating systems and heavily used applications such as
> IBM (where I first learned how to develop and test) don't like product
> recalls, either - and are quite stringent in their testing.
>
> Neither are any of the other companies I have consulted for - both as a
> developer and a project manager.

Ah-ha, that explains it: in this part of the world, we would never let a
project manager anywhere near our code!

--
Ian Collins

Jerry Stuckle

unread,
Feb 1, 2016, 2:53:15 PM2/1/16
to
Not at all.

>> No, you don't want unit tests to stress test your development system.
>> That's why you have a test group and test system(s). Additionally, the
>> test group can test under many more different conditions.
>
> Where did I say otherwise? Exact quote please.
>

You said you wouldn't stress test code on a development system. Look
back and see.

>> Your example of an out of memory exception is good. The developer is
>> responsible for making something work and may not even consider that
>> possibility, not code for it and not test it. But the test group is
>> responsible for trying to break things, and will look for ways to make
>> it fail. They will often test for things the developer never considered.
>
> Where did I say otherwise? Exact quote please.
>

You said you can't test for out of memory on a development system. Look
back and see.

>>>> But once again you're trying to change the topic by bringing up
>>>> embedded
>>>> systems. Nothing about embedded systems has been mentioned until you
>>>> brought it up. But I've seen you're good at trying to change the
>>>> topic.
>>>
>>> That's a poor strawman, there was no change in topic.
>>>
>>
>> I'm not the one trying to change the topic. No one mentioned embedded
>> systems until you did.
>
> Please explain how mentioning embedded systems changed the topic.
>

No one was talking about embedded systems before you brought it up.
Look back and see.

>>> I could just as easily have substituted any piece of software. Note I said
>>> "especially if you are unit testing embedded code" not "only if you are
>>> unit testing embedded code". Embedded code is simply an extreme example
>>> of something you wouldn't unit test on target.
>>>
>>
>> Embedded code is something I definitely WOULD test on target. And it
>> would be the responsibility of the test group to do the testing.
>
> Here again you choose to ignore the fact that there is more than one
> level of testing. It's clear that you didn't do the homework.
>

Oh, yes, I've done the homework. And I've seen the results of your type
of testing. A couple of times I've taken over failing projects because
the developers were doing the testing instead of writing the code.
Projects that were late and over budget.

>>>> However, the same is true with embedded systems - at least all of the
>>>> ones I've worked on as a consultant in the last 25 or so years.
>>>
>>>> And mocking a device is NOT the same as mocking a class other code is
>>>> dependent on. Do you understand the difference?
>>>
>>> From the perspective of software other than its driver, a device is
>>> just another class to the rest of the code. Device driver tests use
>>> mock hardware; the code that uses the driver class mocks the driver
>>> class, and so on up the stack.
>>
>> A hardware device is nothing like a class.
>
> You appear to be having trouble with your reading comprehension, so I'll
> say it again: "From the perspective of software other than its driver, a
> device is just another class to the rest of the code". There, is that
> clear now?
>

Yes, it is perfectly clear you have no idea what you're talking about.

>> The device has specific
>> responses for specific commands, including error conditions.
>
> Do you write code that has indeterminate responses for specific
> requests? I hope not.
>

No, I don't. I write code that works.


>> These are
>> well defined by the manufacturer.
>
> Do you write code without a specification? No, you have requirements
> that are well defined by the customer.
>

Nope. But I don't try to mock classes, either. It just leads to more
work and more bugs.

>> I write device drivers (working on
>> some code which includes one right now). Device driver tests use real
>> hardware.
>>
>> I do mock the hardware driver when working on code using the device; for
>> instance, the current code I am working on is ARM based and fetches two
>> bytes from a specified sensor on request. This is easy to mock because
>> the interface is quite simple, and allows me to do some development on
>> my laptop.
>
> Ah, so it is simply another class to the rest of the code. You appear
> to be contradicting yourself....
>

Nope. Not at all. But you obviously don't understand the difference
between hardware and software.

>> But testing is still done on the device, by the test group.
>> And the real device driver is written and tested on the device.
>>
>
> ...as well as continuing the 80s practice of lobbing code over the wall
> to someone else to test.
>

Yep, sending the code to the people who are experts in testing and
leaving the development to those who are experts in developing.

Jerry Stuckle

unread,
Feb 1, 2016, 2:55:21 PM2/1/16
to
If you do, yes. However, that's more work to correct the error and test
again. Wasted time for developers and testers.

I'm not saying you don't change or refactor code. I'm saying you
shouldn't do it needlessly because you didn't test independent classes
before dependent ones.

Jerry Stuckle

unread,
Feb 1, 2016, 3:01:33 PM2/1/16
to
On 2/1/2016 2:44 PM, Ian Collins wrote:
> Jerry Stuckle wrote:
>> On 1/31/2016 11:48 PM, Ian Collins wrote:
>>> Jerry Stuckle wrote:
>>>> On 1/31/2016 11:11 PM, Ian Collins wrote:
>>>>>
>>>>> Don't forget a mock is often a simple (even machine generated)
>>>>> piece of
>>>>> code with behavior driven by the tests.
>>>>>
>>>>
>>>> If the class completely follows the specifications, then it is not a
>>>> mock class. If it does not completely follow the specifications, you
>>>> cannot thoroughly test dependent classes.
>>>
>>> You can (and thousands of us do) because it *pretends* to implement the
>>> specification. I suggest you do some research on mock objects and how
>>> they are used.
>>>
>>
>> I don't need to do any research on it. I've seen it tried. All it has
>> done is waste time and unnecessarily create bugs because the mock
>> classes cannot, by definition, completely follow the specification.
>
> So you've tried it, done it wrong and given up. Too bad.
>

Nope, I've seen it done before, by more than one group. I've even been
called in a couple of times to manage a project that was mismanaged that
way. Projects that were late and over budget.

>>>> But if the class has to process a file, then you cannot test it
>>>> properly
>>>> unless you process that file.
>>>
>>> Quite - because you aren't testing it! You are testing the code that
>>> process the resulting object.
>>>
>>
>> And you can't test the resulting object because a mock class never can
>> duplicate the behavior of the real class. It can only do some of it.
>> And that means potential problems are missed.
>
> I can't? Bugger, what have I been doing all these years? of course I can.
>

And you've been wasting time and money doing it. Or writing crap code.
I'm not sure which. Maybe both.

>>>> Returning predefined objects or sequence
>>>> of objects does not properly process the input file - and return the
>>>> appropriate information. So all you're doing is testing a subset of
>>>> the
>>>> possible results. Such a process is always prone to errors.
>>>
>>> You've lost the plot, *we are not testing the file processing*. We are
>>> testing the processing of the result of the file processing.
>>>
>>
>> And you can't be sure the output you get from the mock class is the same
>> as what real files will provide in all cases. Only in some cases. And
>> since other cases can't be tested, you leave the possibility of bugs
>> (unnecessarily).
>
> Can't I? The transformation will be well defined and will be tested in
> the real class's tests. Ever heard of a contract?
>

Yes, I've heard of contracts. I've also seen people try your way and
fail miserably. And they thought they were doing it "right" also, until
they saw otherwise.

>> Let's go back to your memory problem. What if processing the input file
>> causes an out of memory condition? Does your mock class emulate that?
>
> Yes, I would simply write something like:
>
> test::FOO_decode::willThrow( std::bad_alloc() );
>
> before calling the code that will call the decoder.
>

But that doesn't emulate the out of memory condition. It just throws an
exception. Many other things can go wrong when you truly are out of
memory - things you aren't testing for.

For instance - what happens if the class you're testing allocates memory
while processing the out of memory condition? Since you aren't really
out of memory, that allocation will succeed - where in real life it will
fail. You have just introduced a bug.
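
One way to make the failure genuine in a test, without exhausting the
box, is to interpose a global operator new that fails on demand (a
sketch; the flag name is invented):

  #include <cstdlib>
  #include <new>

  // While true, *every* allocation fails - including allocations made
  // inside the error-handling path of the code under test.
  static bool g_failAllocations = false;

  void* operator new(std::size_t n) {
      if (g_failAllocations) throw std::bad_alloc();
      if (void* p = std::malloc(n ? n : 1)) return p;
      throw std::bad_alloc();
  }

  void operator delete(void* p) noexcept { std::free(p); }

  // In a test: set g_failAllocations = true, call the code under
  // test, verify it degrades cleanly, then reset the flag.

With the flag still set while the error path runs, allocations made
inside the handler fail too - which is exactly the situation a
throw-only mock never reproduces.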

>> Or what if data is missing, out of order, or not what was expected? Or
>> the file is truncated? Or any of a number of things a proper test suite
>> would check?
>
> It will - in the tests for the real object.
>

Which means you will have multiple classes to test to see which one is
failing. A waste of tester and developer time.

>>>> Proper testing would include a real class with real input, both good
>>>> and
>>>> bad, and the appropriate processing for all of it.
>>>
>>> Indeed it does, but not unit testing something that has no interest in
>>> how those files are processed.
>>>
>>> I've said it before but it bears repeating: layered testing.
>>>
>>
>> Yes, you have. Now if you'd only learn how to do it correctly.
>
> I'm lucky in that I already do. You don't appear to have got beyond
> understanding what it is.
>

You seem to be quite willing to spend your employer's or client's money
unnecessarily. My job has been to ensure the projects come in on time,
within budget and as free of bugs as possible.

It looks like we have two different goals.

Jerry Stuckle

unread,
Feb 1, 2016, 3:02:54 PM2/1/16
to
Any software will have bugs. But IBM's have fewer bugs per K LOC than
average.

Jerry Stuckle

unread,
Feb 1, 2016, 3:03:51 PM2/1/16
to
And if you follow back, every one of those was a product produced by
another company that IBM bought for one reason or another - not
necessarily for the products you mentioned.