
What has C++ become?


plen...@yahoo.com

May 31, 2008, 12:36:45 PM
I was looking over someone's C++ code today and despite
having written perfectly readable C++ code myself,
the stuff I was looking at was worse than legalese.
The people who are guiding the development of C++
have really made a mess of things, I mean templates
and competing libraries and all that just render the
code impossible to comprehend. Sure there is
going to be a certain amount of complexity,
that's a given, but if code is not readable except by
a kind of clergy then there is something wrong with
the language. Of course, I suppose the code I was
looking at could have been deliberately obfuscated
so that the developer could maintain control over it,
but shouldn't a language (or its libraries) be designed
to prevent that?

jason.c...@gmail.com

May 31, 2008, 12:50:03 PM

Not everybody is a good C++ writer. It's a combination of two things:

1) A person not using the language as effectively as possible (and for
many people there is nothing you can do about it other than accept it
as inevitable). A poor legal writer can produce unusually unclear
legal documents (some legalese is easier to understand than others).

2) The ability to read it well (once you see enough strange looking
code it starts to make more sense and doesn't look as confusing). An
experienced lawyer can read even the most obtuse legalese document
without much of a problem.

The best thing that can be done to prevent that is to stop the problem
at its source (#1). Rather than trolling newsgroups with questions
like this, which don't help anything at all, look for people
with legitimate problems in their code and give them constructive
criticism and suggestions.

Jason

Erik Wikström

May 31, 2008, 1:25:37 PM
On 2008-05-31 18:36, plen...@yahoo.com wrote:
> I was looking over someone's C++ code today and despite
> having written perfectly readable C++ code myself,
> the stuff I was looking at was worse than legalese.
> The people who are guiding the development of C++
> have really made a mess of things, I mean templates
> and competing libraries and all that just render the
> code impossible to comprehend.

Sure, templates can be a bit hard to read before you get used to them
(and template meta-programming even harder) but considering how powerful
they are I do not think they are overly complex.

As for competing libraries, that is something that all moderately
successful languages have to deal with. There is always someone who
thinks that the standard libraries are not good enough and starts their
own. In some languages it is even worse, with multiple standard libraries.

> Sure there is
> going to be a certain amount of complexity,
> that's a given, but if code is not readable except by
> a kind of clergy then there is something wrong with
> the language.

Most code in non-trivial systems is hard to read if you are not familiar
with the specific domain and the structure of the code. Good coding
guidelines and a clear architecture will mitigate this but can never
remove it entirely.

--
Erik Wikström

peter koch

May 31, 2008, 3:57:36 PM

Now I don't know what kind of code you were looking at, but I remember
the first time I looked at a C program: it looked more or less like
gibberish (I used to program in a Pascal variant with some assembly
woven in). A few days' practice, and I found the code readable, and
after a few months I even liked it better than Pascal.
So what it all came down to was a need to familiarise myself with the
syntax and get acquainted with the principles behind C.
I guess it is the same thing that troubles you. Writing templated code
is somewhat different from writing normal code: much more must take
place at compile time, but once you learn the tricks and the way
things work, it is not so difficult after all.
Also, if you were looking at code that was either library code or
code that had to be supported on many (possibly old)
platforms, you will likely see code that seems somewhat
obfuscated.
Probably, a few days of dissecting the code will make you quite
comfortable with it; if not, you are welcome to ask questions here (or
of your colleagues, of course).

/Peter

James Kanze

May 31, 2008, 7:38:04 PM
On May 31, 7:25 pm, Erik Wikström <Erik-wikst...@telia.com> wrote:

> On 2008-05-31 18:36, plenty...@yahoo.com wrote:

> > I was looking over someone's C++ code today and despite
> > having written perfectly readable C++ code myself,
> > the stuff I was looking at was worse than legalese.
> > The people who are guiding the development of C++
> > have really made a mess of things, I mean templates
> > and competing libraries and all that just render the
> > code impossible to comprehend.

> Sure, templates can be a bit hard to read before you get used
> to them (and template meta-programming even harder) but
> considering how powerful they are I do not think they are
> overly complex.

It's always a costs-benefits tradeoff. Making the code harder
to read is a definite cost. Afterwards, you have to weigh the
benefits, and see if they are worth it.

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

co...@mailvault.com

Jun 1, 2008, 1:01:04 AM
On May 31, 10:36 am, plenty...@yahoo.com wrote:
> I was looking over someone's C++ code today and despite
> having written perfectly readable C++ code myself,
> the stuff I was looking at was worse than legalese.
> The people who are guiding the development of C++
> have really made a mess of things, I mean templates
> and competing libraries and all that just render the
> code impossible to comprehend.


I think competing libraries are going to be around for
a while. Hybrid cars are selling well lately and
I don't think that is going to change anytime soon.
Perhaps surprisingly, the increased complexity and
knowledge required to develop and maintain a hybrid
doesn't outweigh its total-cost-of-ownership advantage
over a car with only a gas engine, given high energy prices.

For whatever reason, the gas engine/"traditional C++"
approach is not producing efficient results in some
contexts -- http://webEbenezer.net/comparison.html.
We're working on a new version of those tests that uses
Boost 1.35 and MSVC9. The preliminary results show no
significant differences from those using Boost 1.34.1 and
MSVC8. We're also planning to expand the test cases.
There are other reasons besides run-time performance why
our approach may be successful. We believe our approach
will also help improve build times. We aren't there yet,
though, as no one has provided automated support for
integrating our services into the build process.

> Sure there is
> going to be a certain amount of complexity,
> that's a given, but if code is not readable except by
> a kind of clergy then there is something wrong with
> the language.

I agree there is room for improvement with the language, but
still don't really agree with your conclusion. Your mention
of clergy is interesting... when it comes to providing
thoughtful and helpful services, a good priest is essential.

> Of course, I suppose the code I was
> looking at could have been deliberately obfuscated
> so that the developer could maintain control over it,

Unfortunately, I think that happens. It requires good
leadership to deal with someone who behaves that way.


Brian Wood
Ebenezer Enterprises
www.webEbenezer.net

plen...@yahoo.com

Jun 1, 2008, 7:34:58 PM
On May 31, 3:57 pm, peter koch <peter.koch.lar...@gmail.com> wrote:

> Now I don't know what kind of code you did look at, but I remember
> first time I looked at a C program: it looked more or less like
> gibberish (I used to program in a Pascal variant with some assembly
> woven in).

I recall having the same experience, the *first* time I looked
at a C program, having before that seen only Pascal,
Modula-2, Basic and assembly. But I've seen C++ many times
now, albeit mostly my own which is deliberately readable.


plen...@yahoo.com

Jun 1, 2008, 7:36:35 PM
On Jun 1, 1:01 am, c...@mailvault.com wrote:

> > Of course, I suppose the code I was
> > looking at could have been deliberately obfuscated
> > so that the developer could maintain control over it,
>
> Unfortunately, I think that happens.  It requires good
> leadership to deal with someone who behaves that way.

I should mention that this is open source code,
so there is no leadership. Could be worse though,
could be the Linux kernel.

Walter Bright

Jun 2, 2008, 1:41:06 AM
James Kanze wrote:
> On May 31, 7:25 pm, Erik Wikström <Erik-wikst...@telia.com> wrote:
>> Sure, templates can be a bit hard to read before you get used
>> to them (and template meta-programming even harder) but
>> considering how powerful they are I do not think they are
>> overly complex.
>
> It's always a costs-benefits tradeoff. Making the code harder
> to read is a definite cost. Afterwards, you have to weigh the
> benefits, and see if they are worth it.

I don't believe readability is a cost benefit tradeoff. I attended Scott
Meyers' presentation at NWCPP (slides here:
http://www.nwcpp.org/Downloads/2008/code_features.pdf). Scott mentioned
that he'd had help from TMP experts in creating the code examples, so we
can discount the idea that the readability problems are caused by lack
of programmer ability in C++ TMP.

After looking at it for a while, it seems to me that there is no way to
lay out the whitespace to make it look right. C++ TMP simply eats up far
too much horizontal space.

Second of all, once you figure out what it is doing, what it is doing is
rather simple. It is just a very poor notation for that (which is
consistent with TMP for C++ being discovered rather than designed).

Is it necessary to have such a poor notation? I don't believe so. C++
TMP is an FP language, and other FP languages tend to have much better
notation.
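
To make that concrete with a deliberately trivial example of my own
(not one from Scott's slides): the classic compile-time factorial,
where template recursion stands in for the loop and an explicit
specialization stands in for the terminating conditional, next to the
run-time function it mirrors:

// Loop: template recursion.  Conditional: an explicit specialization.
template <unsigned N>
struct Factorial {
    static const unsigned value = N * Factorial<N - 1>::value;
};

template <>
struct Factorial<0> {
    static const unsigned value = 1;
};

int table[Factorial<4>::value];   // 24 elements, computed by the compiler

// The run-time equivalent, for comparison:
unsigned factorial(unsigned n)
{
    unsigned result = 1;
    for (unsigned i = 2; i <= n; ++i)
        result *= i;
    return result;
}

Nothing here is hard once you see the trick; the notation is just doing
a lot of work to say very little.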

----------
Walter Bright
Digital Mars C, C++, D programming language compilers

Juha Nieminen

Jun 2, 2008, 4:44:33 AM
plen...@yahoo.com wrote:
> I was looking over someone's C++ code today and despite
> having written perfectly readable C++ code myself,
> the stuff I was looking at was worse than legalese.

"Someone's C++ code"? Are you sure that "someone" is an experienced
C++ programmer who knows how to write good-quality understandable C++?

Anyone can write incomprehensible code with any language. And what is
worse, most people actually do.

> The people who are guiding the development of C++
> have really made a mess of things, I mean templates
> and competing libraries and all that just render the
> code impossible to comprehend.

What "competing libraries"?

And as for templates making a "mess of things", I'd say that's more
often than not just a myth. My personal experience is that templates
actually *simplify* things in most cases, they don't complicate things.
Just a small example:

int table[100];
...
std::sort(table, table+100);

I believe that's pretty simple and understandable code, or would you
disagree? (Never mind the "table+100" pointer trickery. That's not the
point here.)

Well, you know what? That's template code. It's precisely *because* of
templates that that code can be as simple as it is. Without templates it
would have to be much more complicated (compare to C's qsort()).
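
Just to make the comparison concrete, here is roughly what the same
sort looks like with qsort() (a sketch):

#include <cstdlib>

// qsort() needs a separate comparison callback and passes everything
// through void pointers, so all type safety is lost.
int compare_ints(const void* a, const void* b)
{
    int x = *static_cast<const int*>(a);
    int y = *static_cast<const int*>(b);
    if (x < y) return -1;
    if (x > y) return 1;
    return 0;
}

int table[100];

void sort_table()
{
    std::qsort(table, 100, sizeof(int), compare_ints);
}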

Maybe you think using <> makes template code "a mess"? I don't
understand why. Is this somehow unclear:

std::vector<int> table;
table.push_back(5);

What's so unclear about that? I think it's perfectly clear and legible
code. How else would you want it to be?

> Sure there is
> going to be a certain amount of complexity,
> that's a given, but if code is not readable except by
> a kind of clergy then there is something wrong with
> the language.

It's impossible to design a language so that it cannot be written in
an unclear way. It's always possible to write obfuscated code.

However, that doesn't mean it's impossible to write clear code.

James Kanze

Jun 2, 2008, 4:58:25 AM
On Jun 2, 7:41 am, Walter Bright <wal...@digitalmars-nospamm.com>
wrote:

> James Kanze wrote:
> > On May 31, 7:25 pm, Erik Wikström <Erik-wikst...@telia.com> wrote:
> >> Sure, templates can be a bit hard to read before you get used
> >> to them (and template meta-programming even harder) but
> >> considering how powerful they are I do not think they are
> >> overly complex.

> > It's always a costs-benefits tradeoff. Making the code harder
> > to read is a definite cost. Afterwards, you have to weigh the
> > benefits, and see if they are worth it.

> I don't believe readability is a cost benefit tradeoff.

It is in the sense that it's not binary. Totally unreadable
code has such high cost that nothing can outweigh it, but there
are times when you might give up a little bit of readability
(without the code becoming totally unreadable) if the other
benefits are large enough.

> I attended Scott Meyers' presentation at NWCPP (slides
> here:http://www.nwcpp.org/Downloads/2008/code_features.pdf).
> Scott mentioned that he'd had help from TMP experts in
> creating the code examples, so we can discount the idea that
> the readability problems are caused by lack of programmer
> ability in C++ TMP.

> After looking at it for a while, it seems to me that there is
> no way to lay out the whitespace to make it look right. C++
> TMP simply eats up far too much horizontal space.

More to the point, the "language" isn't really designed for what
it is being used for. You need all sorts of strange constructs
to do fundamentally simple things, like a loop or a conditional.

> Second of all, once you figure out what it is doing, what it
> is doing is rather simple. It is just a very poor notation for
> that (which is consistent with TMP for C++ being discovered
> rather than designed).

> Is it necessary to have such a poor notation? I don't believe
> so. C++ TMP is an FP language, and other FP languages tend to
> have much better notation.

Exactly. Because they were designed with that in mind.

What it does mean is that you don't use TMP, at least in its
more complete forms, unless the benefits are extremely high, and
even then, probably only in contexts where you can be sure that
only real experts (who can cope with the loss of readability)
have to maintain it.

Walter Bright

Jun 2, 2008, 6:27:47 AM
Juha Nieminen wrote:
> Maybe you think using <> makes template code "a mess"? I don't
> understand why.

It's because of the parsing ambiguities that come from using < > as a
parameter delimiter.
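
Two well-known examples of what I mean:

// Without knowing whether 'a' names a template, the parser cannot tell
// a declaration of 'd' from two comparisons joined by a comma operator:
a < b, c > d;

// And the closing >> of a nested template argument list is lexed as a
// right-shift operator (until C++0x), so a space is required:
std::vector<std::vector<int> > v;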

> Is this somehow unclear:
>
> std::vector<int> table;
> table.push_back(5);
>
> What's so unclear about that? I think it's perfectly clear and legible
> code. How else would you want it to be?

It's the wordiness of it. If the code gets more complicated than such
trivial examples, it gets rather hard to visualize. I would want it to
use a much more compact notation, like maybe:

int[] table;
table ~= 5;

To me, it's like the difference between

assign(a,add(b,c))

and

a=b+c

Matthias Buelow

Jun 2, 2008, 8:51:49 AM
plen...@yahoo.com wrote:

> The people who are guiding the development of C++
> have really made a mess of things, I mean templates

IMHO, the opposite has happened: it has been hammered into shape over
the years. My "last" experience with C++, before I resumed my use of it
about a year ago, had been from the early '90s, and the current C++ is,
while still a larger and not very pretty language, IMHO more usable
than back then (of course, that might also be because my memory betrays
me, or because the old compilers I used [Turbo and Zortech C++ in that
case] were Not Very Good back then, or whatever).

Yannick Tremblay

Jun 2, 2008, 10:05:09 AM
In article <BfCdnUj63_iCVt7V...@comcast.com>,

Walter Bright <wal...@digitalmars-nospamm.com> wrote:
>Juha Nieminen wrote:
>> Maybe you think using <> makes template code "a mess"? I don't
>> understand why.
>
>It's because of the parsing ambiguities that come from using < > as a
>parameter delimiter.
>
>> Is this somehow unclear:
>>
>> std::vector<int> table;
>> table.push_back(5);
>>
>> What's so unclear about that? I think it's perfectly clear and legible
>> code. How else would you want it to be?
>
>It's the wordiness of it. If the code gets more complicated than such
>trivial examples, it gets rather hard to visualize. I would want it to
>use a much more compact notation, like maybe:
>
> int[] table;
> table ~= 5;

The problem with compact notations is that they can only cover a very
small number of cases.

So OK, we replace std::vector by [] and push_back by ~=. What about
the rest of the vector methods? What about the other containers?

std::deque<int> queue; // int @ queue ?
queue.push_back(5); // queue ~= 5; OK
queue.push_front(8); // queue =~ 8 ??
int a = queue.front(); // int a ~= queue ????


>To me, it's like the difference between
>
> assign(a,add(b,c))
>
>and
>
> a=b+c

operator=() for a std::vector does exactly what one should expect.
std::vector<int> a;
// fill a with some data
std::vector<int> b;
b = a; // nice and intuitive.

What do you think should happen on:

std::vector<int> c;
c = a + b; // ??

Various people will say: sum the elements individually; others will
say: concatenate the two vectors. The fact is that '+' for vectors has
no natural (universal?) meaning, and using it as shorthand for something
else is more likely to obfuscate the code than a more verbose solution
would.

Yannick

Noah Roberts

Jun 2, 2008, 12:04:01 PM
Erik Wikström wrote:
> On 2008-05-31 18:36, plen...@yahoo.com wrote:
>> I was looking over someone's C++ code today and despite
>> having written perfectly readable C++ code myself,
>> the stuff I was looking at was worse than legalese.
>> The people who are guiding the development of C++
>> have really made a mess of things, I mean templates
>> and competing libraries and all that just render the
>> code impossible to comprehend.
>
> Sure, templates can be a bit hard to read before you get used to them
> (and template meta-programming even harder) but considering how powerful
> they are I do not think they are overly complex.

Both of these things simply require an understanding of the language
and, in the case of TMP, the conventions used. The authors of MPL
created and documented, quite well I think, the underlying concepts of
metafunctions and their associated algorithms and tools. An
understanding of these concepts makes TMP code completely understandable.

The problem here, I think (without actually having seen any code
sample), is that people expect they can just learn C++ and then work in
the field for 20 years without having to learn anything new.

Noah Roberts

Jun 2, 2008, 12:10:03 PM
Walter Bright wrote:
> James Kanze wrote:
>> On May 31, 7:25 pm, Erik Wikström <Erik-wikst...@telia.com> wrote:
>>> Sure, templates can be a bit hard to read before you get used
>>> to them (and template meta-programming even harder) but
>>> considering how powerful they are I do not think they are
>>> overly complex.
>>
>> It's always a costs-benefits tradeoff. Making the code harder
>> to read is a definite cost. Afterwards, you have to weigh the
>> benefits, and see if they are worth it.
>
> I don't believe readability is a cost benefit tradeoff. I attended Scott
> Meyers' presentation at NWCPP (slides here:
> http://www.nwcpp.org/Downloads/2008/code_features.pdf). Scott mentioned
> that he'd had help from TMP experts in creating the code examples, so we
> can discount the idea that the readability problems are caused by lack
> of programmer ability in C++ TMP.

I also attended that discussion, but the first one in '07...Red Green.
At least in that talk it seemed that Meyers specifically had help
in dealing with certain aspects of the TMP library and, more
specifically, with things that should have worked but did not.

Furthermore, it wouldn't surprise me if Scott initially had trouble
understanding or working in TMP, because it is an utterly new technique
that is very different from anything else people normally do in C++.
The closest one might come to the kind of coding that you are doing is
probably LISP or Scheme, except that you are unable to assign to anything.
However, by learning the concepts behind the TMP method, what a
metafunction is and things like that, the code really becomes rather
easy to comprehend.

And I'm not a C++ "expert". There are a LOT of people that know the
language better than I do.

Erik Wikström

Jun 2, 2008, 12:32:08 PM
On 2008-06-02 12:27, Walter Bright wrote:
> Juha Nieminen wrote:
>> Maybe you think using <> makes template code "a mess"? I don't
>> understand why.
>
> It's because of the parsing ambiguities that come from using < > as a
> parameter delimiter.
>
>> Is this somehow unclear:
>>
>> std::vector<int> table;
>> table.push_back(5);
>>
>> What's so unclear about that? I think it's perfectly clear and legible
>> code. How else would you want it to be?
>
> It's the wordiness of it. If the code gets more complicated than such
> trivial examples, it gets rather hard to visualize. I would want it to
> use a much more compact notation, like maybe:
>
> int[] table;
> table ~= 5;

The natural interpretation of the above would in C++ be "table != 5".
Assigning non-intuitive meanings to operators is much worse than a lack
of compactness. If you really want an operator use either += or <<.

--
Erik Wikström

Roland Pibinger

Jun 2, 2008, 1:27:47 PM
On Sun, 1 Jun 2008 16:34:58 -0700 (PDT), plen...@yahoo.com wrote:
>I recall having the same experience, the *first* time I looked
>at a C program, having before that seen only Pascal,
>Modula-2, Basic and assembly. But I've seen C++ many times
>now, albeit mostly my own which is deliberately readable.

You can safely ignore this geek style 'template programming' because
it will never reach the mundane area of real-world programming.


--
Roland Pibinger
"The best software is simple, elegant, and full of drama" - Grady Booch

Fernando Gómez

Jun 2, 2008, 2:10:09 PM
On Jun 2, 12:27 pm, rpbg...@yahoo.com (Roland Pibinger) wrote:

> On Sun, 1 Jun 2008 16:34:58 -0700 (PDT), plenty...@yahoo.com wrote:
> >I recall having the same experience, the *first* time I looked
> >at a C program, having before that seen only Pascal,
> >Modula-2, Basic and assembly. But I've seen C++ many times
> >now, albeit mostly my own which is deliberately readable.
>
> You can safely ignore this geek style 'template programming' because
> it will never reach the mundane area of real-world programming.
>

Yeah, like, you know, WTL, Loki or the Standard C++ Library. Those are
clear examples of imaginary-world programming.

:)

Hope you weren't serious about that...

Walter Bright

Jun 2, 2008, 2:18:09 PM
Yannick Tremblay wrote:
> What do you think should happen on:
>
> std::vector<int> c;
> c = a + b; // ??
>
> Various people will say: sum the elements individually; others will
> say: concatenate the two vectors. The fact is that '+' for vectors has
> no natural (universal?) meaning, and using it as shorthand for something
> else is more likely to obfuscate the code than a more verbose solution
> would.

You're quite right. That's why the D programming language introduced the
operators ~ and ~= to mean concatenate and append, respectively. That
eliminates the meaning ambiguity in the + and += operators.

Ramon F Herrera

Jun 2, 2008, 10:45:07 PM
On May 31, 12:36 pm, plenty...@yahoo.com wrote:


It has been said many times before. The solution (or cure) to C++ is
the KISS principle.

-RFH

Michael DOUBEZ

Jun 3, 2008, 3:21:42 AM
Walter Bright wrote:

Another common argument against using + for concatenation is that one
expects commutativity (a+b == b+a), but a.append(b) != b.append(a).
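
With std::string, for instance:

#include <string>

std::string a = "ab", b = "cd";
// a + b yields "abcd", while b + a yields "cdab":
// string concatenation is not commutative, unlike arithmetic +.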

--
Michael

Juha Nieminen

Jun 3, 2008, 4:56:01 AM
Walter Bright wrote:
> It's the wordiness of it.

I disagree. Using longer keywords and notation does not make the code
unclear; quite the contrary: it makes the code more understandable and
unambiguous. When you try to minimize the length of elements, what you
end up with is basically an unreadable, obfuscated regexp.

I think that your suggestion itself is a perfect example of that:

> table ~= 5;

Yes, that uses less characters than "table.push_back(5);". However,
why would that be any clearer and more understandable? On the contrary,
it's more obfuscated.

I have never understood the fascination some people (and almost 100%
of beginner programmers) have with trying to minimize the size of their
source code. They will sometimes go to ridiculous extents to try to make
the code as short as possible, at the cost of making it completely
obfuscated.

Brevity does not improve readability; quite the contrary.

Juha Nieminen

Jun 3, 2008, 4:57:48 AM
Michael DOUBEZ wrote:
> Another common argument against using + for concatenation is that one
> expects commutativity (a+b == b+a), but a.append(b) != b.append(a).

OTOH, multiplication of matrices is not commutative, yet it may make
sense to still support the * operator for matrix types...

Kai-Uwe Bux

Jun 3, 2008, 5:18:12 AM
Juha Nieminen wrote:

> Walter Bright wrote:
>> It's the wordiness of it.
>
> I disagree. Using longer keywords and notation does not make the code
> unclear; quite the contrary: it makes the code more understandable and
> unambiguous. When you try to minimize the length of elements, what you
> end up with is basically an unreadable, obfuscated regexp.

Yes and no. I really like the verbosity of Modula 2 in control flow as
opposed to the use of "{" and "}". However, when it comes to template
template parameters, I have trouble getting a reasonable layout to work
simply because it's using too much horizontal space. A baby case example is
something like

typedef typename
allocator_type::template rebind< ListNode >::other node_allocator;

Regardless of where I put the line break, it always looks somewhat
suboptimal.


> I think that your suggestion itself is a perfect example of that:
>
>> table ~= 5;
>
> Yes, that uses less characters than "table.push_back(5);". However,
> why would that be any clearer and more understandable? On the contrary,
> it's more obfuscated.

Well, that depends, too. In D, "~" denotes concatenation. It makes perfect
sense not to use "+" for that, and "~" feels somewhat right. Now, with
that convention in place, table ~= 5 is not obfuscated at all.


> I have never understood the fascination some people (and almost 100%
> of beginner programmers) have with trying to minimize the size of their
> source code. They will sometimes go to ridiculous extents to try to make
> the code as short as possible, at the cost of making it completely
> obfuscated.

On that, I agree. But that by no means implies that the syntax of C++ is
doing a good job in supporting clear and understandable coding of template
stuff.


> Brevity does not improve readability; quite the contrary.

Excessive use of horizontal space, making it hard to put line breaks in
appropriate places, also does not improve readability.


Best

Kai-Uwe Bux

James Kanze

Jun 3, 2008, 5:37:08 AM
On Jun 2, 7:27 pm, rpbg...@yahoo.com (Roland Pibinger) wrote:

> On Sun, 1 Jun 2008 16:34:58 -0700 (PDT), plenty...@yahoo.com wrote:
> >I recall having the same experience, the *first* time I
> >looked at a C program, having before that seen only Pascal,
> >Modula-2, Basic and assembly. But I've seen C++ many times
> >now, albeit mostly my own which is deliberately readable.

> You can safely ignore this geek style 'template programming'
> because it will never reach the mundane area of real-world
> programming.

First, you can't ignore anything, because you never know where
it will crop up. And like most things, it will be more or less
readable, depending on who wrote it.

What is true is that at the application level, there is very
little need for meta-programming; it is mostly used in low level
libraries (like the standard library). What is also true is
that some of its more extreme use does push readability, even
when written by an expert (but there are also some simple,
everyday idioms which even average programmers should be able to
master). And what is certainly true is that it is being used
(probably too much, even in places where it isn't needed).

James Kanze

Jun 3, 2008, 5:43:21 AM
On Jun 2, 4:05 pm, ytrem...@nyx.nyx.net (Yannick Tremblay) wrote:
> In article <BfCdnUj63_iCVt7VnZ2dnUVZ_vSdn...@comcast.com>,

> Walter Bright <wal...@digitalmars-nospamm.com> wrote:
> >Juha Nieminen wrote:
> >> Maybe you think using <> makes template code "a mess"? I don't
> >> understand why.

> >It's because of the parsing ambiguities that come from using < > as a
> >parameter delimiter.

> >> Is this somehow unclear:

> >> std::vector<int> table;
> >> table.push_back(5);

> >> What's so unclear about that? I think it's perfectly clear
> >> and legible code. How else would you want it to be?

> >It's the wordiness of it. If the code gets more complicated
> >than such trivial examples, it gets rather hard to visualize.
> >I would want it to use a much more compact notation, like
> >maybe:

> > int[] table;
> > table ~= 5;

> The problem with compact notation is that they can only fill a very
> small number of cases.

The problem with compact notation is that it quickly leads to
obfuscation. Witness perl and APL. Many of the problems with
C++ today stem from past attempts to make the notation too
compact. Things like:
int*p;
int a[10];
rather than:
variable p: pointer to int ;
variable a: array[ 10 ] of int ;
The result is a declaration syntax which causes untold problems,
not just to human readers, but also to compilers.

James Kanze

Jun 3, 2008, 5:51:51 AM
On Jun 3, 10:56 am, Juha Nieminen <nos...@thanks.invalid> wrote:
> Walter Bright wrote:
> > It's the wordiness of it.

> I disagree. Using longer keywords and notation does not make
> the code unclear; quite the contrary: it makes the code more
> understandable and unambiguous. When you try to minimize the
> length of elements, what you end up with is basically an
> unreadable, obfuscated regexp.

Yes. Typically, perl looks more like transmission noise than it
does a program.

In the (now distant) past, there was an argument for reducing
the number of characters. If you've ever heard a listing output
to a teletype, you'll understand. But I know of no programmer
today who develops code on a teletype. (But then, everyone I
know is in either western Europe or North America. Perhaps in
less privileged regions.)

> I think that your suggestion itself is a perfect example of that:

> > table ~= 5;

> Yes, that uses less characters than "table.push_back(5);".
> However, why would that be any clearer and more
> understandable? On the contrary, it's more obfuscated.

From the looks of things, Walter would like APL. A language
known for read only programs.

> I have never understood the fascination some people (and
> almost 100% of beginner programmers) have with trying to
> minimize the size of their source code. They will sometimes go
> to ridiculous extents to try to make the code as short as
> possible, at the cost of making it completely obfuscated.

> Brevity does not improve readability; quite the contrary.

Brevity, correctly applied, can improve readability. As my high
school English teacher used to say, "good writing is clear and
concise". The problem with verbosity, however, isn't the lenght
(in characters) of the words (tokens). The problem with much
template programming is that it deals with several different
levels at the same time, each with it's own vocabulary, so you
need a lot more words. And it's a fundamental problem; I don't
think that there is a real solution.

Walter Bright

Jun 3, 2008, 6:12:02 AM
James Kanze wrote:
> The problem with compact notation is that it quickly leads to
> obfuscation. Witness perl and APL.

Those problems are the result of lack of redundancy in the language, not
compactness. See my article:
http://dobbscodetalk.com/index.php?option=com_myblog&show=Redundancy-in-Programming-Languages.html&Itemid=29


> Many of the problems with
> C++ today stem from past attempts to make the notation too
> compact. Things like:
> int*p;
> int a[10];
> rather than:
> variable p: pointer to int ;
> variable a: array[ 10 ] of int ;
> The result is a declaration syntax which causes untold problems,
> not just to human readers, but also to compilers.

Again, that has nothing to do with compactness and everything to do with
ambiguity in the grammars. I don't believe that replacing operators with
long words makes code clearer - didn't Cobol demonstrate that?

Walter Bright

Jun 3, 2008, 6:27:59 AM
James Kanze wrote:
> In the (now distant) past, there was an argument for reducing
> the number of characters. If you've ever heard a listing output
> to a teletype, you'll understand. But I know of no programmer
> today who develops code on a teletype. (But then, everyone I
> know is in either western Europe or North America. Perhaps in
> less privileged regions.)

I've had many programmers tell me that their style of programming is
based on how much they can see on their screens. As screens have gotten
bigger, their mental "unit of code" has increased to match. I know I
used to make all my functions fit in 24 lines or less, now 60 lines is
typical.

Take a look at Scott's slides again. I don't see any reasonable way of
formatting it to look decent on a screen.


>> I think that your suggestion itself is a perfect example of that:
>
>>> table ~= 5;
>
>> Yes, that uses less characters than "table.push_back(5);".
>> However, why would that be any clearer and more
>> understandable? On the contrary, it's more obfuscated.
>
> From the looks of things, Walter would like APL. A language
> known for read only programs.

Since the D programming language looks nothing like APL (or Perl), I
don't understand your comment at all. Also, the way arrays work in D,
which includes using the ~ and ~= operators, is frequently cited by D
users as one of the main reasons they like D. Do you really believe that
~ as concatenation and ~= for append is "obfuscation" ?


> The problem with verbosity, however, isn't the length
> (in characters) of the words (tokens).

It's both the length and the fact that there are so many tokens needed
to be strung together to perform basic operations. What you're trying to
accomplish gets lost in all the < > and ::.

Matthias Buelow

Jun 3, 2008, 7:24:43 AM
Erik Wikström wrote:

>> int[] table;
>> table ~= 5;
>
> The natural interpretation of the above would in C++ be "table != 5".
> Assigning non-intuitive meanings to operators is much worse than a lack
> of compactness. If you really want an operator use either += or <<.

I agree; I associate some kind of negation with ~, not concatenation.
For ~=, I'd probably go with the C-style interpretation of = ... ~, or
interpret it as some kind of congruence operator.

Vidar Hasfjord

Jun 3, 2008, 9:15:10 AM
On Jun 3, 11:12 am, Walter Bright <wal...@digitalmars-nospamm.com>
wrote:
> [...]

> I don't believe that replacing operators with
> long words makes code clearer - didn't Cobol demonstrate that?

In my view, the essential observation is that the keywords here, 'to'
and 'of', highlight an irregularity in the language: constructs
reserved for special built-in features. Whether such irregularities
are words or operators doesn't matter.

For example, I find the C++ type constructor operators for arrays ([])
and pointers (*) just as unnecessary and cluttering. They were needed
when C was invented, but now C++ has templates for parameterized
types. Arrays are parameterized types. So are pointers. So an ideal
regular language would use the same syntax:

pointer [int] p;
array [int, 10] a;
my_2d_array [int, int, 10, 10] m;

This also allows you to reserve square brackets for all parameterized
compile-time structures (templates). That would eliminate parser
irregularities with angle brackets. Then use Fortran style syntax for
indexing, i.e. p (0), p (1). Interestingly, this extends elegantly to
multi-dimensional arrays, m (0, 1), while the current C++ square
bracket operator does not, since it accepts only one parameter. It
seems that, with regularity, less is more.
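
In current C++ the usual workaround is to overload the function call
operator instead; a minimal sketch:

// operator[] takes exactly one argument, so multi-dimensional types
// typically expose operator() for indexing instead.
struct Matrix {
    int data[10][10];
    int& operator()(int i, int j) { return data[i][j]; }
};

void f(Matrix& m)
{
    m(0, 1) = 42;   // m[0, 1] is not available
}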

These few syntax improvements, along with abolishing some of the most
problematic type system irregularities (such as array decay and other
unwise conversion rules) would go a long way in making C++ a simpler
language both syntactically and semantically.

Of course, it wouldn't be C++ anymore; but hey, why not define "C++
Level 2" and some migration path via interoperability features?
Modules, when they are introduced to the language, may make this more
feasible, perhaps.

Regards,
Vidar Hasfjord

Vidar Hasfjord

Jun 3, 2008, 9:48:13 AM
On Jun 3, 10:18 am, Kai-Uwe Bux <jkherci...@gmx.net> wrote:
> [...]

> I have trouble getting a reasonable layout to work
> simply because it's using too much horizontal space. A baby case example is
> something like
>
>   typedef typename
>   allocator_type::template rebind< ListNode >::other node_allocator;

Getting rid of the redundant 'typename' and 'template' keywords would
help a lot. As I understand it, a cleaner syntax and the use of
concepts would allow the expression to be inferred without these.

The other fundamental problem with C++ meta-functions is that they are
based on this cumbersome convention:

typedef meta_function <arg1, arg2>::result r;

The ideal syntax would be:

alias r = meta_function <arg1, arg2>;

The problem is to distinguish between a meta-function reference and
the type (result) that it produces. (Aside: This is the same problem
as for regular functions; distinguishing the use of the function as a
function call that evaluates to the result and the value of the
function itself; which in C++ decays to a function pointer.)

It seems that the new C++09 alias syntax provides what's needed:

template <typename T1, typename T2>
alias meta_function = ...;
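
Concretely, taking a trivial add_pointer meta-function as a stand-in
(a sketch; the alias syntax that was eventually adopted spells it
'using'):

// Conventional style: the result has to be dug out of a nested typedef.
template <typename T>
struct add_pointer { typedef T* type; };

typedef add_pointer<int>::type p1;   // int*

// With a template alias, the extra ::type level disappears:
template <typename T>
using add_pointer_a = T*;

add_pointer_a<int> p2;   // int*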

Are the full gamut of template features available for templated
aliases? Such as partial specialization etc.?

Regards,
Vidar Hasfjord

kwikius

Jun 3, 2008, 10:02:38 AM
On Jun 3, 2:15 pm, Vidar Hasfjord <vattilah-gro...@yahoo.co.uk> wrote:

<...>

> For example, I find the C++ type constructor operators for arrays ([])
> and pointers (*) just as unnecessary and cluttering. They were needed
> when C was invented, but now C++ has templates for parameterized
> types. Arrays are parameterized types. So are pointers. So an ideal
> regular language would use the same syntax:
>
>   pointer [int] p;
>   array [int, 10] a;
>   my_2d_array [int, int, 10, 10] m;

Interesting dude ... :-)


regards
Andy Little

kwikius

Jun 3, 2008, 10:16:21 AM
On Jun 3, 2:48 pm, Vidar Hasfjord <vattilah-gro...@yahoo.co.uk> wrote:

<...>

>   typedef meta_function <arg1, arg2>::result r;


>
> The ideal syntax would be:
>
>   alias r = meta_function <arg1, arg2>;

I've been toying about with this stuff myself. I've been looking at LL
grammars for a C++ like language without the crud.

AFAICS typename is useful because the parser needs to know that a name
is a type within a template, and it solves the problem in a nice LL
way. The problem is that it isn't really regular in C++ (e.g. we also
have typedef, which is rather horrible C usage, too).

In my doodlings a metafunction invocation has the same syntax as a
function:

typename r = meta_function(arg1,arg2);

Note that the context provided by typename means that the initialiser
must be a type, not a value. In fact a metafunction can be an ordinary
function where it just gives the return type (though you can also just
define the metafunction). It also works with operators:

typename A = ...;
typename B = ...;
typename C = ..;

typename plus_type = A + B + C;

For legibility you can add an optional (or maybe required for clarity)
prefix:

typename r = typefn metafunction(arg1,arg2);

typename plus_type = typefn A + B + C;


regards
Andy Little


Noah Roberts

Jun 3, 2008, 11:39:47 AM
James Kanze wrote:
> On Jun 2, 7:27 pm, rpbg...@yahoo.com (Roland Pibinger) wrote:
>> On Sun, 1 Jun 2008 16:34:58 -0700 (PDT), plenty...@yahoo.com wrote:
>>> I recall having the same experience, the *first* time I
>>> looked at a C program, having before that seen only Pascal,
>>> Modula-2, Basic and assembly. But I've seen C++ many times
>>> now, albeit mostly my own which is deliberately readable.
>
>> You can safely ignore this geek style 'template programming'
>> because it will never reach the mundane area of real-world
>> programming.
>
> First, you can't ignore anything, because you never know where
> it will crop up. And like most things, it will be more or less
> readable, depending on who wrote it.
>
> What is true is that at the application level, there is very
> little need for meta-programming; it is mostly used in low level
> libraries (like the standard library).

Well, first of all, I don't think that the standard library, where it
actually makes use of generic/meta programming techniques, is "low
level". It is very much application level - stacks, lists,
vectors...this isn't hardware talking stuff. There is nothing low level
about abstract data types. It is exactly the opposite of low level in
my opinion.

Second, I disagree that there's little need for it in application
level programming. Where I work we actually use it a moderate amount
and to great advantage. For instance, we are an engineering firm and
use a data type that uses metaprogramming techniques to provide type
safe dimensional analysis. Since adopting this, it has already saved us
numerous man-hours of debugging.

We use boost::units and some other stuff that I wrote on top of it.
Other areas it is used is in a variety of generic functions that use
enable_if to choose or disqualify template instantiations.
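
A simplified sketch of the enable_if part (illustrative names only, not
our production code):

#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_floating_point.hpp>

// This overload participates in overload resolution only when T is a
// floating-point type; otherwise SFINAE quietly removes it.
template <typename T>
typename boost::enable_if<boost::is_floating_point<T>, T>::type
interpolate(T a, T b, T t)
{
    return a + (b - a) * t;
}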

So as one that is not afraid of TMP and uses it *in the application
layer* I really have to disagree with those claiming it has no place there.

> What is also true is
> that some of its more extreme use does push readability, even
> when written by an expert (but there are also some simple,
> everyday idioms which even average programmers should be able to
> master).

Which is why I recommend getting and reading the TMP book by Abrahams
and Gurtovoy. Knowing the concepts that they developed enables one to
understand TMP much better, assuming the developer is using such
concepts and hasn't written their own. Even then, there are really only
so many ways you can do TMP, so learning theirs is a good step toward
understanding anyone's, including the STL that uses blobs.

It's a different kind of code. It isn't easy to do. But it isn't
readability that is the problem here, it's understanding that style of
language. It is a skill that all C++ developers should attempt to
become familiar with, for it will only get more and more common as more
and more people adopt and refine the generic programming paradigm in
C++. Many of the new language features were put in specifically FOR
this kind of development.


> And what is certainly true is that it is being used
> (probably too much, even in places where it isn't needed).

And it is also certainly true that it is NOT being used in many places
where it should be.

Noah Roberts

Jun 3, 2008, 11:56:56 AM
Walter Bright wrote:
> James Kanze wrote:
>> In the (now distant) past, there was an argument for reducing
>> the number of characters. If you've ever heard a listing output
>> to a teletype, you'll understand. But I know of no programmer
>> today who develops code on a teletype. (But then, everyone I
>> know is in either western Europe or North America. Perhaps in
> less privileged regions.)
>
> I've had many programmers tell me that their style of programming is
> based on how much they can see on their screens. As screens have gotten
> bigger, their mental "unit of code" has increased to match. I know I
> used to make all my functions fit in 24 lines or less, now 60 lines is
> typical.

That is way too many. It's not about what will fit on your screen, but
what fits in the brain in one go. Long functions do not.


>
> Take a look at Scott's slides again. I don't see any reasonable way of
> formatting it to look decent on a screen.

Well, I prefer something more like so:

template<typename S, typename T> // compute index of T in S
struct IndexOf
: mpl::distance
<
typename mpl::begin<S>::type
, typename mpl::find<S, T>::type
>
{};

But to each his own. Neither is particularly difficult to understand.

Erik Wikström

Jun 3, 2008, 12:30:28 PM
On 2008-06-03 15:15, Vidar Hasfjord wrote:
> On Jun 3, 11:12 am, Walter Bright <wal...@digitalmars-nospamm.com>
> wrote:
>> [...]
>> I don't believe that replacing operators with
>> long words makes code clearer - didn't Cobol demonstrate that?
>
> In my view, the essential observation is that the keywords here, 'to'
> and 'of', highlights an irregularity in the language; constructs
> reserved for special built-in features. Whether such irregularities
> are words or operators doesn't matter.
>
> For example, I find the C++ type constructor operators for arrays ([])
> and pointers (*) just as unnecessary and cluttering. They were needed
> when C was invented, but now C++ has templates for parameterized
> types. Arrays are parameterized types. So are pointers. So an ideal
> regular language would use the same syntax:
>
> pointer [int] p;
> array [int, 10] a;
> my_2d_array [int, int, 10, 10] m;
>
> This also allows you to reserve square brackets for all parameterized
> compile-time structures (templates). That would eliminate parser
> irregularities with angle brackets.

Those have been removed in the next standard already; no need for square
brackets.

> Then use Fortran style syntax for
> indexing, i.e. p (0), p (1). Interestingly, this extends elegantly to
> multi-dimensional arrays, m (0, 1), while the current C++ square
> bracket operator does not; since it accepts only one parameter. It
> seems that, with regularity, less is more.

I seriously doubt that there is any technical problem with allowing the
[] operator to accept more than one argument; it seems to me more like
an arbitrary decision, once made, which no one bothers to challenge.

--
Erik Wikström

Bo Persson

Jun 3, 2008, 2:40:26 PM

It has been challenged, but not enough.

I have seen published search results by people trying to find code
using an overloaded comma operator inside the brackets. None found, if
I remember correctly.

However, both C and C++ already have a defined meaning for a[x,y],
though pretty useless. The proposers of a new meaning just haven't
been persistent enough to convince a majority.

Recent radical changes to the keyword 'auto' might have set a new
precedent though. :-)


Bo Persson


Walter Bright

Jun 3, 2008, 2:53:24 PM
Noah Roberts wrote:
> Well, I prefer something more like so:
>
> template<typename S, typename T> // compute index of T in S
> struct IndexOf
> : mpl::distance
> <
> typename mpl::begin<S>::type
> , typename mpl::find<S, T>::type
> >
> {};
>
> But to each his own. Neither is particularly difficult to understand.

Suppose we could write it as:

int IndexOf(S, T)
{
return distance(begin(S), find(S, T));
}

Would you prefer the latter? I sure would.

Walter Bright

Jun 3, 2008, 3:07:59 PM

Since ~ and ~= are not used as binary operators in C, there really isn't
any significant baggage associated with them. It doesn't take long at all
to get used to ~ as concatenation.

After all, how long did it take anyone to get used to * meaning multiply
as a binary operator, and pointer indirection as a unary one? Is there
something intuitive about * meaning indirection for non-C programmers?
Or how about & meaning 'and' and 'address of' ?

Before 1990, who had ever used < > to denote argument lists?

Noah Roberts

Jun 3, 2008, 7:40:25 PM

I really don't see what that has to do with anything unless you want to
just bitch and moan, pretending that TMP is the same as runtime programming.

Noah Roberts

Jun 3, 2008, 7:46:35 PM
Bo Persson wrote:

> I have seen published search results by people trying to find code
> using an overloaded comma operator inside the brackets. None found, if
> I remember correctly.

There was a post of one that made use of a special type...MagicInt, if I
recall correctly. It took place in a discussion on () vs. [] for matrix
abstractions a year or two ago here in this group.

Walter Bright

Jun 3, 2008, 8:07:10 PM
Noah Roberts wrote:

> Walter Bright wrote:
>>> Well, I prefer something more like so:
>>>
>>> template<typename S, typename T> // compute index of T in S
>>> struct IndexOf
>>> : mpl::distance
>>> <
>>> typename mpl::begin<S>::type
>>> , typename mpl::find<S, T>::type
>>> >
>>> {};
>>>
>>> But to each his own. Neither is particularly difficult to understand.
>>
>> Suppose we could write it as:
>>
>> int IndexOf(S, T)
>> {
>> return distance(begin(S), find(S, T));
>> }
>>
>> Would you prefer the latter? I sure would.
>
> I really don't see what that has to do with anything

I'm trying to show that complicated syntax is not fundamental to TMP.

> unless you want to
> just bitch and moan, pretending that TMP is the same as runtime
> programming.

I don't know of a property of compile time programming that requires it
to be more complex than runtime programming.

Vidar Hasfjord

Jun 4, 2008, 12:35:34 AM
On Jun 4, 1:07 am, Walter Bright <wal...@digitalmars-nospamm.com>
wrote:
> [...]

> I don't know of a property of compile time programming that requires it
> to be more complex than runtime programming.

That's a good point. But an interesting follow-up question to that is:
Should it be different?

Having thought about this only superficially, my intuition is that the
compile-time domain and the runtime domain call for different
programming paradigms. In a way I think C++ just stumbled upon the
holy grail: it found the ideal domain for functional programming
languages, the compile-time domain.

Meta-programming operates in a formal domain: the domain of the type
system of the language. Functional programming is a perfect fit here,
as it is suited for formal proofs etc. Hence I don't think imperative
features, such as mutating constructs and side effects, belong at
compile time.

C++ has introduced a new era where the compiler is an execution
platform in itself. So the challenge for future language development
is to make the meta-language, the functional programming language that
runs in the compiler, optimal both for programmer expressiveness and
clarity and for efficient compile-time execution.

Regards,
Vidar Hasfjord

Vidar Hasfjord

Jun 4, 2008, 1:01:27 AM
On Jun 3, 3:16 pm, kwikius <a...@servocomm.freeserve.co.uk> wrote:
> [...]

> I've been toying about with this stuff myself. I've been looking at LL
> grammars for a C++ like language without the crud.

That is very interesting. Do you have any published work, or links to
other work in this area, that I can look at?

I am only aware of SPECS (Significantly Prettier and Easier C++
Syntax) by Werther and Conway. I think that syntax could be made even
simpler by dropping the type constructor operators for pointers and
arrays. That would free the square brackets for template use (SPECS
uses <[]>; unambiguous, but verbose). It is still a good effort.

Regards,
Vidar Hasfjord

Walter Bright

Jun 4, 2008, 1:27:21 AM
Vidar Hasfjord wrote:
> Having thought about this only superficially, my intuition is that the
> compile-time domain and the runtime domain call for different
> programming paradigms.

That means that the programmer needs to learn two languages instead of
one. To ask for this, one must have some pretty compelling benefits
to justify it.

> In a way I think C++ just stumbled upon the
> holy grail: It found the ideal domain for functional programming
> languages; the compile-time domain.

I don't understand why, for instance:

int foo(int x)
{
x = 3 + x;
for (int i = 0; i < 10; i++)
x += 7;
return x;
}

is inappropriate for compile time execution. Of course, it could be
rewritten in FP style, but since I'm programming in C++, why shouldn't
straightforward C++ work?


> Meta-programming operates in a formal domain; the domain of the type-
> system of the language. Functional programming is a perfect fit here
> as it is suited for formal proofs etc. Hence I don't think imperative
> features, such as mutating constructs and side-effects, belong at
> compile-time.

Regardless of the merits of FP programming, regular C++ programming is
not FP and it is not necessary for compile time evaluation to be FP.

> C++ has introduced a new era where the compiler is an execution
> platform in itself. So the challenge for future language development
> is to make the meta-language; the functional programming language that
> runs in the compiler, optimal both for programmer expressiveness and
> clarity and for efficient compile-time execution.

C++ has opened the door on that, I agree. Where I don't agree is that
C++ has stumbled on the ideal way of doing compile time programming, or
that FP is ideal for it, or that compile time programming should be in a
different language than runtime programming.

I am not an expert in C++ TMP. But when I do examine examples of it, it
always seems like a lot of effort is expended doing rather simple things.

Furthermore, because every step in evaluating TMP requires the
construction of a unique template instantiation, this puts some rather
onerous memory and time constraints on doing more complicated things
with TMP. Perhaps these are merely technical limitations that will be
surmounted by advancing compiler technology, but I just don't see how
the result could be better than thousands of times slower than executing
the equivalent at run time.

kwikius

Jun 4, 2008, 4:33:40 AM
On Jun 4, 6:01 am, Vidar Hasfjord <vattilah-gro...@yahoo.co.uk> wrote:
> On Jun 3, 3:16 pm, kwikius <a...@servocomm.freeserve.co.uk> wrote:
>
> > [...]
> > I've been toying about with this stuff myself. I've been looking at LL
> > grammars for a C++ like language without the crud.
>
> That is very interesting. Do you have any published work, or links to
> other work in this area, that I can look at?

I haven't got anything publishable on my own doodlings; however, for LL
grammars I seriously recommend SLK:

http://home.earthlink.net/~slkpg/

It has nice properties, and various languages are supported. The actions
are largely separated from the grammar source, and this puts more emphasis
on the grammar than on the semantics, which should, I conjecture, lead to
a cleaner grammar than e.g. Bison's. Also, grammar source files are nice
and compact to pass around for discussion...

> I am only aware of SPECS (Significantly Prettier and Easier C++
> Syntax) by Werther and Conway. I think that syntax could be made even
> simpler by dropping the type constructor operators for pointers and
> arrays. That would free the square brackets for template use (SPECS
> uses <[]>; unambiguous, but verbose). It is still a good effort.

I will certainly take a look. Unfortunately I have a lot of other work
currently, so I can't spend the time I would like to on it... but IMO
it has to be done, as I have hit the C++ wall (e.g. TMP complexity, long
compile times, etc.). I think that will only get worse with C++0x
Concepts (which also don't seem to play too well with TMP).

regards
Andy Little

Matthias Buelow

Jun 4, 2008, 6:48:13 AM
Vidar Hasfjord wrote:

> C++ has introduced a new era where the compiler is an execution
> platform in itself.

This isn't anything new; Lisp macros have been doing that for uhm..
decades, if I'm right. A C++ template is a special kind of compiler
macro where the expansion is controlled by the type parameters, whereas
a Lisp macro, for example, can use the full language to produce code to
substitute in its place, by any computation conceivable (including side
effects). I agree that compiler macros (including C++ templates) are a
useful tool for extending the language.
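
As a small illustration of "expansion controlled by the type parameters" (Traits and describe are invented names for the example), the code the compiler generates for describe<T>() below is selected entirely by the type argument:

    #include <iostream>

    template <typename T> struct Traits {
        static const char* name() { return "some other type"; }
    };
    template <> struct Traits<int> {
        static const char* name() { return "int"; }
    };

    template <typename T> void describe() {
        // Which name() gets called is decided at instantiation time, by T.
        std::cout << Traits<T>::name() << '\n';
    }

    int main() {
        describe<int>();     // prints "int"
        describe<double>();  // prints "some other type"
    }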

James Kanze

unread,
Jun 4, 2008, 7:43:05 AM6/4/08
to
On Jun 3, 12:27 pm, Walter Bright <wal...@digitalmars-nospamm.com>
wrote:

> James Kanze wrote:
> > In the (now distant) past, there was an argument for reducing
> > the number of characters. If you've ever heard a listing output
> > to a teletype, you'll understand. But I know of no programmer
> > today who develops code on a teletype. (But then, everyone I
> > know is in either western Europe or North America. Perhaps in
> less privileged regions.)

> I've had many programmers tell me that their style of
> programming is based on how much they can see on their
> screens. As screens have gotten bigger, their mental "unit of
> code" has increased to match. I know I used to make all my
> functions fit in 24 lines or less, now 60 lines is typical.

Help. I find that if a function is more than about 10 lines,
it's a warning sign that it's getting too complex. There are
exceptions, of course. A function which consists of a single
switch statement with a lot of entries, for example. The real
criterion is complexity, but in my own work, I find that ten
lines generally means that it's time to watch out.

> Take a look at Scott's slides again. I don't see any
> reasonable way of formatting it to look decent on a screen.

That's not my argument. I'm certainly not going to argue that
C++ syntax is clear, concise or elegant. Just that less isn't
always better either---had there been a little more wordiness in
C's declaration syntax, we'd have a lot fewer problems today, for
example.

> >> I think that your suggestion itself is a perfect example of that:

> >>> table ~= 5;

> >> Yes, that uses less characters than "table.push_back(5);".
> >> However, why would that be any clearer and more
> >> understandable? On the contrary, it's more obfuscated.

> > From the looks of things, Walter would like APL. A language
> > known for read only programs.

> Since the D programming language looks nothing like APL (or
> Perl), I don't understand your comment at all.

Humor, mostly. But when you start arguing that something is
better just because it requires fewer characters to write, you're
definitely moving in the direction of APL.

> Also, the way arrays work in D, which includes using the ~ and
> ~= operators, are frequently cited by D users as one of the
> main reasons they like D. Do you really believe that ~ as
> concatenation and ~= for append is "obfuscation" ?

In C++, it definitely would be. In another language... I don't
know. I can see the need for a generic concatenation operator
(rather than just overloading +), and I don't see any good
spelling for it (unlike "or" for |). And it's probably frequent
enough that you can live with some arbitrariness (i.e. it not
being based on established mathematical use, like +). So I
guess it would fit in like << does for output in C++.

But even then, only as a special case. I certainly hope you
don't try to define single character special operators for every
single operation on a container.
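
For comparison, a hedged sketch of how a terse append operator could be spelled in today's C++, by overloading operator+= for one specific container (purely hypothetical, and the sort of thing many coding guidelines would reject as obfuscation, which is rather the point of the debate):

    #include <vector>

    // Hypothetical append operator for std::vector<int>; not standard practice,
    // just an illustration of the "short operator vs. named member" trade-off.
    std::vector<int>& operator+=(std::vector<int>& v, int x) {
        v.push_back(x);
        return v;
    }

    int main() {
        std::vector<int> table;
        table += 5;   // roughly the role D's "table ~= 5;" plays
        return table.size() == 1 ? 0 : 1;
    }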

> > The problem with verbosity, however, isn't the length (in
> > characters) of the words (tokens).

> It's both the length and the fact that there are so many
> tokens needed to be strung together to perform basic
> operations. What you're trying to accomplish gets lost in all
> the < > and ::.

I guess it depends on what you are trying to accomplish. That's
certainly the case for TMP. And I have nothing against some
shorter syntax, per se, if it's reasonably readable. Note,
however, that part of the added length comes from longer names
and namespaces. But I would certainly prefer vector< string >
to v<s>. I don't have a simple solution; for better or for
worse, we're dealing in a larger solution space than in the old
days: the amount of information that needs to be communicated
has gone up significantly, some redundancy is necessary for
human comprehension, and the result does require more characters
(each of which can only hold so much information).

James Kanze

unread,
Jun 4, 2008, 7:56:35 AM6/4/08
to
On Jun 4, 2:07 am, Walter Bright <wal...@digitalmars-nospamm.com>
wrote:

> Noah Roberts wrote:
> > Walter Bright wrote:
> >>> Well, I prefer something more like so:

> >>> template<typename S, typename T> // compute index of T in S
> >>> struct IndexOf
> >>>   : mpl::distance
> >>>     <
> >>>       typename mpl::begin<S>::type
> >>>     , typename mpl::find<S, T>::type
> >>>     >
> >>> {};

> >>> But to each his own. Neither is particularly difficult to
> >>> understand.

> >> Suppose we could write it as:

> >> int IndexOf(S, T)
> >> {
> >> return distance(begin(S), find(S, T));
> >> }

> >> Would you prefer the latter? I sure would.

> > I really don't see what that has to do with anything

> I'm trying to show that complicated syntax is not fundamental
> to TMP.

Isn't part of the problem that TMP is embedded in a runtime
programming language, and cannot use the natural syntax for many
of its constructs because that has already been pre-empted by
the runtime language?

> > unless you want to just bitch and moan, pretending that TMP
> > is the same as runtime programming.

> I don't know of a property of compile time programming that
> requires it to be more complex than runtime programming.

Perhaps the complexity is partially due to the fact that the two
are not rigorously separated, so you need special (and IMHO
awkward) syntax for the TMP, so that the compiler and the reader
can know what is compile time, and what is runtime.

kwikius

unread,
Jun 4, 2008, 8:09:14 AM6/4/08
to

"Walter Bright" <wal...@digitalmars-nospamm.com> wrote in message
news:tN2dndwxOI4nutvV...@comcast.com...

> Vidar Hasfjord wrote:
>> Having thought about this superficially my intuition is that the
>> compile-time domain and the runtime domain calls for different
>> programming paradigms.
>
> That means that the programmer needs to learn two languages instead of
> one. To ask for this one must have some pretty compelling benefits of it
> to justify it.

library versus application writing

>> In a way I think C++ just stumbled upon the
>> holy grail: It found the ideal domain for functional programming
>> languages; the compile-time domain.
>
> I don't understand why, for instance:
>
> int foo(int x)
> {
>     x = 3 + x;
>     for (int i = 0; i < 10; i++)
>         x += 7;
>     return x;
> }
>
> is inappropriate for compile time execution. Of course, it could be
> rewritten in FP style, but since I'm programming in C++, why shouldn't
> straightforward C++ work?
>

It would potentially make a type mutable (different) in different
translation units I think.

regards
Andy Little


Vidar Hasfjord

unread,
Jun 4, 2008, 8:18:25 AM6/4/08
to
On Jun 4, 11:48 am, Matthias Buelow <m...@incubus.de> wrote:
> Vidar Hasfjord wrote:
> > C++ has introduced a new era where the compiler is an execution
> > platform in itself.
>
> This isn't anything new;

True, that statement of mine was sloppy and incorrect. It should read
"Template meta-programming has brought C++ into a new era...". I was
thinking narrowly about the evolution of C++ and compiling C++
programs.

> Lisp macros have been doing that for uhm.. decades, if I'm right.

Yes, and Lisp's powerful yet small and regular language definition
should be an inspiration to every programming language designer.

Regards,
Vidar Hasfjord

Vidar Hasfjord

unread,
Jun 4, 2008, 9:37:12 AM6/4/08
to
On Jun 4, 6:27 am, Walter Bright <wal...@digitalmars-nospamm.com>
wrote:

> Vidar Hasfjord wrote:
> > Having thought about this superficially my intuition is that the
> > compile-time domain and the runtime domain calls for different
> > programming paradigms.
>
> That means that the programmer needs to learn two languages instead of
> one. To ask for this one must have some pretty compelling benefits of it
> to justify it.

I look at it as two paradigms instead of two languages; the functional
paradigm and the imperative paradigm. C++ supports both, as does D, so
I'm sure you agree that's a good thing.

The question is whether all features (paradigms) of the language
should be available both for meta-programming and ordinary
programming. There has been work done on C++ extensions that allow
the full language to be used for meta-programming (see Metacode by Vandevoorde),
but I lean towards restricting compile-time processing to the side-
effect free parts only, i.e. a functional subset.

> I don't understand why, for instance:
>
> int foo(int x)
> {
>     x = 3 + x;
>     for (int i = 0; i < 10; i++)
>         x += 7;
>     return x;
> }
>
> is inappropriate for compile time execution.

Here I assume that you actually propose a very limited subset of
imperative features for compile-time processing; not that the whole
language should be available for processing at compile-time. A subset
of imperative features can be supported, as seen with constexpr functions
in C++09 and with CTFE in D, but they are limited and required to live
by the rules of functional programming. For example, compile-time
functions must be 'pure', i.e. the result must only depend on the
arguments, the function can have no side-effects, and no state can
escape the function. I intuitively think this is good.
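
As a hedged sketch of the 'pure' restriction being described, in the constexpr style referred to above (square and impure are invented names; the syntax shown is the one that was eventually standardized): the first function depends only on its argument, so the compiler can evaluate it; the second touches external state, so it could not be declared constexpr.

    constexpr int square(int x) { return x * x; }   // pure: usable at compile time

    int g = 0;
    int impure(int x) { return x + g++; }           // reads and writes external state,
                                                    // so it could not be constexpr

    int table[square(4)];                           // array bound computed by the compiler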

Or did you actually mean that you think the ideal would be something
akin to Metacode?

> Furthermore, because every step in evaluating TMP requires the
> construction of a unique template instantiation, this puts some rather
> onerous memory and time constraints on doing more complicated things
> with TMP. Perhaps these are merely technical limitations that will be
> surmounted by advancing compiler technology, but I just don't see how
> the result could be better than thousands of times slower than executing
> the equivalent at run time.

Yes, these are the challenges I alluded to in my post. While limited
imperative features such as CTFE can ease some of this, I think C++
compilers will start looking more like execution environments for
functional languages as they evolve to handle meta-programs
efficiently. A lot of work on efficient processing of functional
languages has been done with good results. How feasible it is to bring
the results of this work into the construction of a C++ compiler I
don't know, but I can imagine the scale of the challenge.

Regards,
Vidar Hasfjord

Pascal J. Bourguignon

unread,
Jun 4, 2008, 11:50:26 AM6/4/08
to
Walter Bright <wal...@digitalmars-nospamm.com> writes:
> I don't understand why, for instance:
>
> int foo(int x)
> {
>     x = 3 + x;
>     for (int i = 0; i < 10; i++)
>         x += 7;
>     return x;
> }
>
> is inappropriate for compile time execution. Of course, it could be
> rewritten in FP style, but since I'm programming in C++, why shouldn't
> straightforward C++ work?

First, notice that nothing prevents a C or C++ compiler from noticing that
foo is a pure function, and therefore from executing any call such as:

int y = foo(42);

at compilation time.


The real reason why you would want to have some code executed at
compilation time is to do metaprogramming.

Why code such as the above is inappropriate for compile time, in blub
languages, is that the types and data structures available in those
languages ARE NOT the types and data structures used by the compiler.

Well, you could 'try' to use string^W oops, there's no string data
type in C. Ok, let's try to use std::string in C++:

#include <sstream>
#include <string>

// Generates the source text of a function that logs its entry and exit.
std::string genFun(std::string name, std::string body) {
    std::ostringstream s;
    s << "void " << name << "(){" << std::endl;
    s << "cout<<\"Entering " << name << "\"<<endl;" << std::endl;
    s << body << std::endl;
    s << "cout<<\"Exiting " << name << "\"<<endl;" << std::endl;
    s << "}" << std::endl;
    return s.str();
}

and now we'd need some way to hook this function into the compiler;
let's assume a keyword 'macro' to do that:

macro genFun("example","cout<<\"In example function.\"<<endl;")

this would make the compiler run the expression following the keyword,
replace the macro and expression with the resulting string, and
interpret it in place.

Not very nice, is it?

What you are really longing for is Lisp with its s-expressions...


> Regardless of the merits of FP programming, regular C++ programming is
> not FP and it is not necessary for compile time evaluation to be FP.

Indeed, compiler hooks (macros) and programming style are totally orthogonal.


> C++ has opened the door on that, I agree. Where I don't agree is that
> C++ has stumbled on the ideal way of doing compile time programming,

That's the least that can be said about it...

> or that FP is ideal for it, or that compile time programming should be
> in a different language than runtime programming.
>
> I am not an expert in C++ TMP. But when I do examine examples of it,
> it always seems like a lot of effort is expended doing rather simple
> things.

Indeed.


> Furthermore, because every step in evaluating TMP requires the
> construction of a unique template instantiation, this puts some rather
> onerous memory and time constraints on doing more complicated things
> with TMP. Perhaps these are merely technical limitations that will be
> surmounted by advancing compiler technology, but I just don't see how
> the result could be better than thousands of times slower than
> executing the equivalent at run time.

Well, technically, since the solution has been known for about 50
years, these are not technical limitations, but psychological ones.

--
__Pascal Bourguignon__

James Kanze

unread,
Jun 4, 2008, 3:07:11 PM6/4/08
to
On Jun 3, 5:39 pm, Noah Roberts <u...@example.net> wrote:
> James Kanze wrote:
> > On Jun 2, 7:27 pm, rpbg...@yahoo.com (Roland Pibinger) wrote:
> >> On Sun, 1 Jun 2008 16:34:58 -0700 (PDT), plenty...@yahoo.com wrote:
> >>> I recall having the same experience, the *first* time I
> >>> looked at a C program, having before that seen only Pascal,
> >>> Modula-2, Basic and assembly. But I've seen C++ many times
> >>> now, albeit mostly my own which is deliberately readable.

> >> You can safely ignore this geek style 'template programming'
> >> because it will never reach the mundane area of real-world
> >> programming.

> > First, you can't ignore anything, because you never know where
> > it will crop up. And like most things, it will be more or less
> > readable, depending on who wrote it.

> > What is true is that at the application level, there is very
> > little need for meta-programming; it is mostly used in low level
> > libraries (like the standard library).

> Well, first of all, I don't think that the standard library, where it
> actually makes use of generic/meta programming techniques, is "low
> level". It is very much application level - stacks, lists,
> vectors...this isn't hardware talking stuff.

It's not talking to the hardware, but it is still very low
level. A vector is not (usually) an application level
abstraction, but rather a tool used in application level
abstractions.

> There is nothing low level about abstract data types. It is
> exactly the opposite of low level in my opinion.

It's about the lowest level you can get. What's below it?

> Second, I disagree that there's little need for it in the
> application level programming. We, where I work, actually use
> it a moderate amount and to great advantage. For instance, we
> are an engineering firm and use a data type that uses
> metaprogramming techniques to provide type safe dimensional
> analysis. Since adopting this it has already saved us
> numerous man hours in debugging.

But is the meta-programming in the application itself, or in the
lower level tools you use to implement it? (Not that I would
expect much metaprogramming in type safe dimensional analysis.)

> We use boost::units and some other stuff that I wrote on top
> of it. Other areas it is used is in a variety of generic
> functions that use enable_if to choose or disqualify template
> instantiations.

> So as one that is not afraid of TMP and uses it *in the
> application layer* I really have to disagree with those
> claiming it has no place there.

You can't really use templates too much in the application layer
anyway, because of the coupling they induce (unless all of your
compilers support export). And the whole point about being the
application level is that it is specific to the application;
it's not generic. What makes code the application level is that
it deals with concrete abstractions, like ClientOrder or
BookingInstruction (currently) or IPAddress (where I was
before). Just the opposite of template based generics.

Walter Bright

unread,
Jun 4, 2008, 8:14:59 PM6/4/08
to
Vidar Hasfjord wrote:
> Here I assume that you actually propose a very limited subset of
> imperative features for compile-time processing; not that the whole
> language should be available for processing at compile-time. A subset
> of imperative features can be supported, as seen by constexpr function
> in C++09 and by CTFE in D, but they are limited and required to live
> by the rules of functional programming. For example, compile-time
> functions must be 'pure', ie. the result must only depend on the
> arguments, the function can have no side-effects and no state can
> escape the function. I intuitively think this is good.

I think that a subset approach is fine, but a different language
approach is not. For example, in C++, you cannot write a factorial
function that works at both compile time and run time.
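
A sketch of that limitation as it stood in C++03: the compile-time and run-time factorials have to be written twice, in two unrelated styles (the constexpr facility mentioned upthread was aimed at exactly this duplication).

    // compile-time version: C++03 template metaprogram
    template <unsigned N>
    struct Factorial {
        static const unsigned long value = N * Factorial<N - 1>::value;
    };
    template <>
    struct Factorial<0> {
        static const unsigned long value = 1;
    };

    // run-time version: the same algorithm, written again in a different style
    unsigned long factorial(unsigned n) {
        unsigned long r = 1;
        for (unsigned i = 2; i <= n; ++i)
            r *= i;
        return r;
    }

    int buffer[Factorial<5>::value];   // only the template form works here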


> Or did you actually mean that you think the ideal would be something
> akin to Metacode?

I don't know anything about Metacode.

Vidar Hasfjord

unread,
Jun 5, 2008, 12:02:14 AM6/5/08
to
On Jun 5, 1:14 am, Walter Bright <wal...@digitalmars-nospamm.com>
wrote:
> Vidar Hasfjord wrote:
> [...]

> I think that a subset approach is fine, but a different language
> approach is not. For example, in C++, you cannot write a factorial
> function that works at both compile time and run time.

I agree CTFE is convenient. I view it as part of the reasoning
abilities of the language that allows ordinary code to cross into the
compile-time domain. My observation is that code can only cross into
the compile-time domain when it adheres to principles of functional
programming; that its output is solely dependent on its input and
that it has no external side-effects.

CTFE can't do computation on compile-time entities such as types. To
make imperative meta-programming pervasive you would need further
extensions to the language (such as Metacode).

> I don't know anything about Metacode.

Metacode is an experimental extension to C++ to allow imperative meta-
programming and reflection. I don't think it was ever formally
proposed, but you can find a presentation at the ISO site:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1471.pdf

Regards,
Vidar Hasfjord

Walter Bright

unread,
Jun 5, 2008, 2:09:31 AM6/5/08
to
Vidar Hasfjord wrote:
> I agree CTFE is convenient. I view it as part of the reasoning
> abilities of the language that allows ordinary code to cross into the
> compile-time domain. My observation is that code can only cross into
> the compile-time domain when it adheres to principles of functional
> programming; that its output is solely dependent on its input and
> that it has no external side-effects.

I generally agree with that, but I keep finding ways to expand the
domain of CTFE. But I don't think CTFE will ever be doing things like
spawning threads, writing files, etc., and I don't think it should even
if possible (because of security concerns).


> CTFE can't do computation on compile-time entities such as types. To
> make imperative meta-programming pervasive you would need further
> extensions to the language (such as Metacode).

You're quite right, CTFE operates on values, not types. To do types
needs some other facility. In the D programming language, one can create
arrays of types, and then operate on them at compile time using
conventional array notation and operators. This seems to work out rather
nicely, is easy to implement, and sidesteps the need to instantiate
large numbers of template classes.


> Metacode is an experimental extension to C++ to allow imperative meta-
> programming and reflection. I don't think it was ever formally
> proposed, but you can find a presentation at the ISO site:
>
> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1471.pdf

Thanks for the pointer!

Vidar Hasfjord

unread,
Jun 5, 2008, 3:04:07 AM6/5/08
to
On Jun 3, 7:53 pm, Walter Bright <wal...@digitalmars-nospamm.com>
wrote:

I agree that meta-function applications should look more like
traditional function applications. I believe C++09 will allow this
short and sweet style:

template <typename S, typename T>
using index_of = distance<begin<S>, find<S, T>>;

("Template aliases for C++", N2258)

Regards,
Vidar Hasfjord

Michael DOUBEZ

unread,
Jun 5, 2008, 3:23:36 AM6/5/08
to
Juha Nieminen wrote:
> Michael DOUBEZ wrote:
>> Another usual argument against using + for concatenation is that one expects
>> commutativity (a+b == b+a), but a.append(b) != b.append(a).
>
> OTOH, multiplication of matrices is not commutative, yet it may make
> sense to still support the * operator for matrix types...

But multiplication is not expected to be commutative over the matrix
group (or other non-Abelian rings). In math, the + sign is reserved for
commutative operations.

--
Michael

Noah Roberts

unread,
Jun 5, 2008, 11:58:18 AM6/5/08
to
James Kanze wrote:

> But is the meta-programming in the application itself, or in the
> lower level tools you use to implement it?

Well, since you are going to assert an arbitrary point of separation
between the two, seemingly generated solely to support (and dependent on)
your conclusion, obviously only main() counts as application programming
and no, it's not a template meta-program.

> (Not that I would
> expect much metaprogramming in type safe dimensional analysis.)

I would suggest you go look at boost's, and other versions like Quan, then.


> You can't really use templates too much in the application layer
> anyway, because of the coupling they induce (unless all of your
> compilers support export). And the whole point about being the
> application level is that it is specific to the application;
> it's not generic. What makes code the application level is that
> it deals with concrete abstractions, like ClientOrder or
> BookingInstruction (currently) or IPAddress (where I was
> before). Just the opposite of template based generics.

Hehehe, where do you get this stuff??

* templates induce coupling? :p
* IPAddress is "application specific"? :p
* You don't use templates in the application layer? :p

Noah Roberts

unread,
Jun 5, 2008, 12:04:25 PM6/5/08
to
Walter Bright wrote:
> Vidar Hasfjord wrote:
>> Here I assume that you actually propose a very limited subset of
>> imperative features for compile-time processing; not that the whole
>> language should be available for processing at compile-time. A subset
>> of imperative features can be supported, as seen by constexpr function
>> in C++09 and by CTFE in D, but they are limited and required to live
>> by the rules of functional programming. For example, compile-time
>> functions must be 'pure', ie. the result must only depend on the
>> arguments, the function can have no side-effects and no state can
>> escape the function. I intuitively think this is good.
>
> I think that a subset approach is fine, but a different language
> approach is not. For example, in C++, you cannot write a factorial
> function that works at both compile time and run time.

I think that would only introduce confusion, which is already there,
about the difference between the compiler program and the one the
compiler is compiling. Having a strong split, conceptually and
practically, between the two is important.

Furthermore, meta-programming is not conducive to the language proper (as
has been explained). This means that to make them the same language you
would need to push toward the meta-programming model, not the other way.
Personally, I don't want to have to write THAT much code in that
manner. If I did I'd be using LISP or something.

Walter Bright

unread,
Jun 5, 2008, 3:09:28 PM6/5/08
to
Noah Roberts wrote:
> Walter Bright wrote:
>> I think that a subset approach is fine, but a different language
>> approach is not. For example, in C++, you cannot write a factorial
>> function that works at both compile time and run time.
>
> I think that would only introduce confusion, which is already there,
> about the difference between the compiler program and the one the
> compiler is compiling. Having a strong split, conceptually and
> practically, between the two is important.

Is anyone confused by:

const int X = 5;
int a[X + 3];

? The array dimension is computed at compile time. I don't think it is
necessary to conceptually make a difference.

> Furthermore, meta-programming is not conducive to the language proper (as
> has been explained). This means that to make them the same language you
> would need to push toward the meta-programming model, not the other way.
> Personally, I don't want to have to write THAT much code in that
> manner. If I did I'd be using LISP or something.

I think we are all so used to the conventional limits of compile time
programming we don't even notice the severe restrictions. I know that
was (and still is) true for me.

Jerry Coffin

unread,
Jun 6, 2008, 12:15:31 AM6/6/08
to
In article <484793c5$0$30960$426a...@news.free.fr>,
michael...@free.fr says...

[ ... ]

> But multiplication is not expected to be commutative over the matrix
> group (or other non-Abelian rings). In math, the + sign is reserved for
> commutative operations.

Not so -- a group is defined over an operation. If that operation is not
commutative, the group is non-Abelian. It may be true that study of non-
Abelian groups tends more often to look at multiplication than addition,
but it is not true that the + sign is reserved for commutative
operations. It's often true (probably far more often than not) but not
always.

--
Later,
Jerry.

The universe is a figment of its own imagination.

Pascal J. Bourguignon

unread,
Jun 6, 2008, 4:32:04 AM6/6/08
to
Noah Roberts <us...@example.net> writes:

> Walter Bright wrote:
>> Vidar Hasfjord wrote:
>>> Here I assume that you actually propose a very limited subset of
>>> imperative features for compile-time processing; not that the whole
>>> language should be available for processing at compile-time. A subset
>>> of imperative features can be supported, as seen by constexpr function
>>> in C++09 and by CTFE in D, but they are limited and required to live
>>> by the rules of functional programming. For example, compile-time
>>> functions must be 'pure', ie. the result must only depend on the
>>> arguments, the function can have no side-effects and no state can
>>> escape the function. I intuitively think this is good.
>> I think that a subset approach is fine, but a different language
>> approach is not. For example, in C++, you cannot write a factorial
>> function that works at both compile time and run time.
>
> I think that would only introduce confusion, which is already there,
> about the difference between the compiler program and the one the
> compiler is compiling. Having a strong split, conceptually and
> practically, between the two is important.

As has been shown by 50 years of metaprogramming in lisp.


> Furthermore, meta-programming is not conducive to the language proper
> (as has been explained). This means that to make them the same
> language you would need to push toward the meta-programming model, not
> the other way. Personally, I don't want to have to write THAT much
> code in that manner. If I did I'd be using LISP or something.

Why not?

Why would you want to automate the job of accountants or graphic
designers, but not your own?

--
__Pascal Bourguignon__

James Kanze

unread,
Jun 6, 2008, 5:53:58 AM6/6/08
to
On Jun 5, 5:58 pm, Noah Roberts <u...@example.net> wrote:
> James Kanze wrote:
> > But is the meta-programming in the application itself, or in the
> > lower level tools you use to implement it?

> Well, since you are going to assert an arbitrary point of
> separation between the two, seemingly generated solely to support (and
> dependent on) your conclusion, obviously only main() counts as
> application programming and no, it's not a template
> meta-program.

> > (Not that I would expect much metaprogramming in type safe
> > dimensional analysis.)

> I would suggest you go look at boost's, and other versions
> like Quan, then.

And where would they use meta-programming, except for
obfuscation? (Not all templates are metaprogramming.)

> > You can't really use templates too much in the application
> > layer anyway, because of the coupling they induce (unless
> > all of your compilers support export). And the whole point
> > about being the application level is that it is specific to
> > the application; it's not generic. What makes code the
> > application level is that it deals with concrete
> > abstractions, like ClientOrder or BookingInstruction
> > (currently) or IPAddress (where I was before). Just the
> > opposite of template based generics.

> Hehehe, where do you get this stuff??

Practical experience.

> * templates induce coupling? :p

And how, unless all of your compilers support export.

> * IPAddress is "application specific"? :p

It was in my application (dynamic allocation of IP addresses).

> * You don't use templates in the application layer? :p

They've been banned at the higher levels in most coding
guidelines I've seen, because of the coupling problems they
induce.

It's just a question of good software engineering. You don't
introduce complexity (or coupling) where it isn't needed. You
don't throw in or use features just because they're the in
thing. Templates (like many other things) have a cost. With
most current compilers, part of that cost is a significant
increase in coupling, which becomes very expensive the higher up
toward the application level you go. Whereas the benefits of
templates are mostly present at the lower levels. As always,
one might find some exceptions, but in general, templates
aren't used much at the application level when engineering
criteria are used to decide.

--
James Kanze (GABI Software) email:james...@gmail.com

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

kwikius

unread,
Jun 6, 2008, 9:36:16 AM6/6/08
to

"James Kanze" <james...@gmail.com> wrote in message
news:35b4bd62-2e24-4560-9fc6-

> They've been banned at the higher levels in most coding
> guidelines I've seen, because of the coupling problems they
> induce.

Interesting. std::string being a typedef for a class template, similarly
std::vector, std::list, std::map, std::set.

Are all these banned?

regards
Andy Little

Noah Roberts

unread,
Jun 6, 2008, 11:12:27 AM6/6/08
to
James Kanze wrote:

>> * templates induce coupling? :p
>
> And how, unless all of your compilers support export.

Header dependencies and code coupling are very, very different things.
Templates *reduce* coupling.

Noah Roberts

unread,
Jun 6, 2008, 11:22:00 AM6/6/08
to
Pascal J. Bourguignon wrote:
> Noah Roberts <us...@example.net> writes:

>> Furthermore, meta-programming is not conducive to the language proper
>> (as has been explained). This means that to make them the same
>> language you would need to push toward the meta-programming model, not
>> the other way. Personally, I don't want to have to write THAT much
>> code in that manner. If I did I'd be using LISP or something.
>
> Why not?
>
> Why would you want to automate the job of accountants or graphic
> designers, but not your own?
>

You have a point, and I do where I can, but having to work without
things like assignment is difficult for me. The arguments for it have
been pretty good in this thread so I don't think that's something that
should change. So maybe it's laziness, maybe it's prudence...who knows.

James Kanze

unread,
Jun 6, 2008, 1:04:01 PM6/6/08
to
On Jun 6, 3:36 pm, "kwikius" <a...@servocomm.freeserve.co.uk> wrote:
> "James Kanze" <james.ka...@gmail.com> wrote in message

> news:35b4bd62-2e24-4560-9fc6-

> > They've been banned at the higher levels in most coding
> > guidelines I've seen, because of the coupling problems they
> > induce.

> Interesting. std::string being a typedef for a class template, similarly
> std::vector, std::list , std::map ,set

> Are all these banned?

If you consider them application level code, yes. In the places
I've worked, they've been considered part of the standard
library, and application programmers weren't allowed to modify
them.

James Kanze

unread,
Jun 6, 2008, 1:07:38 PM6/6/08
to
On Jun 6, 5:12 pm, Noah Roberts <u...@example.net> wrote:
> James Kanze wrote:
> >> * templates induce coupling? :p

> > And how, unless all of your compilers support export.

> Header dependencies and code coupling are very, very different things.

They're related, but yes: I should have made it clear that I was
talking about compiler dependencies, and not design coupling.

> Templates *reduce* coupling.

They can be used for design decoupling, especially in lower
level software. It's not automatic, though; a poorly designed
template can also increase coupling.

The important thing to realise is that they're a tool. Like
most (or even all) tools, they have a cost. If the advantages
of using the tool outweigh the cost, then you should use it. If
they don't, then you shouldn't.

kwikius

unread,
Jun 7, 2008, 5:44:44 AM6/7/08
to
On Jun 6, 6:04 pm, James Kanze <james.ka...@gmail.com> wrote:
> On Jun 6, 3:36 pm, "kwikius" <a...@servocomm.freeserve.co.uk> wrote:
>
> > "James Kanze" <james.ka...@gmail.com> wrote in message
> > news:35b4bd62-2e24-4560-9fc6-
> > > They've been banned at the higher levels in most coding
> > > guidelines I've seen, because of the coupling problems they
> > > induce.
> > Interesting. std::string being a typedef for a class template, similarly
> > std::vector, std::list, std::map, std::set
> > Are all these banned?
>
> If you consider them application level code, yes.  In the places
> I've worked, they've been considered part of the standard
> library, and application programmers weren't allowed to modify
> them.

There's a difference between application development and library
development. I have done both.

At the application level in quan, my physical quantities library (and
I use quan a lot in my own applications), only typedefs are used for
common quantities, which are provided by the library; the format is:

quan::length::mm x;

quan::force_per_length::kN_per_m F1;

(I hope the intended quantities and units are obvious)

This is entirely similar to std::string, except that there are a large
number of quantities. Nevertheless no template parameters are used in
my own application code, though underneath there is a large amount of
template machinery.
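
To make the idea concrete, here is a toy sketch of compile-time dimensional analysis; it is not quan's (or Boost.Units') actual design, just an illustration of how the dimension exponents can live in the type so that unit errors are rejected by the compiler:

    // Exponents of the base dimensions are template parameters.
    template <int M, int Kg, int S>                    // metres, kilograms, seconds
    struct quantity {
        double value;
        explicit quantity(double v) : value(v) {}
    };

    template <int M, int Kg, int S>
    quantity<M, Kg, S> operator+(quantity<M, Kg, S> a, quantity<M, Kg, S> b) {
        return quantity<M, Kg, S>(a.value + b.value); // only like dimensions add
    }

    template <int M1, int Kg1, int S1, int M2, int Kg2, int S2>
    quantity<M1 + M2, Kg1 + Kg2, S1 + S2>
    operator*(quantity<M1, Kg1, S1> a, quantity<M2, Kg2, S2> b) {
        return quantity<M1 + M2, Kg1 + Kg2, S1 + S2>(a.value * b.value);
    }

    typedef quantity<1, 0, 0>  length;                 // m
    typedef quantity<1, 1, -2> force;                  // kg*m/s^2

    int main() {
        length x(2.0);
        force  f(10.0);
        // length bad = x + f;                         // would not compile: wrong dimensions
        quantity<2, 1, -2> work = f * x;               // dimensions combine correctly
        return work.value == 20.0 ? 0 : 1;
    }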

From time to time I see pronouncements that templates are for experts
only; however, this is really FUD, as the only way to become an expert is
to start out as a non-expert.

regards
Andy Little

Ian Collins

unread,
Jun 7, 2008, 5:08:15 PM6/7/08
to
James Kanze wrote:
> On Jun 6, 5:12 pm, Noah Roberts <u...@example.net> wrote:
>> James Kanze wrote:
>>>> * templates induce coupling? :p
>
>>> And how, unless all of your compilers support export.
>
>> Header dependencies and code coupling are very, very different things.
>
> They're related, but yes: I should have made it clear that I was
> talking about compiler dependencies, and not design coupling.
>
Just to clarify, your objections are practical (tool limitations) rather
than philosophical?

If that is the case and you can't get a better hammer, use a bigger one.

I like to include build times as one of my project requirements
(and yes, I do test it!). If the build times get too long, treat this
like any other design issue. Weigh the time/cost of design changes to
the code against design changes to the build environment. On past
projects, adding another machine to the build farm has been the more
cost effective option. This is probably more typical today with
plummeting hardware costs and rising labour costs.

--
Ian Collins.

James Kanze

unread,
Jun 8, 2008, 4:30:51 AM6/8/08
to
On Jun 7, 11:08 pm, Ian Collins <ian-n...@hotmail.com> wrote:
> James Kanze wrote:
> > On Jun 6, 5:12 pm, Noah Roberts <u...@example.net> wrote:
> >> James Kanze wrote:
> >>>> * templates induce coupling? :p

> >>> And how, unless all of your compilers support export.

> >> Header dependencies and code coupling are very, very
> >> different things.

> > They're related, but yes: I should have made it clear that I
> > was talking about compiler dependencies, and not design
> > coupling.

> Just to clarify, your objections are practical (tool
> limitations) rather than philosophical?

My objections are always practical, rather than philosophical.
I'm a practicing programmer, not a philosopher. Using templates
today has a very definite cost.

> If that is the case and you can't get a better hammer, use a
> bigger one.

In other words, C++ isn't the language I should be using for
large applications? From what I can see, it's not really a very
good language, but all of the others are worse.

Note that the standard actually addressed this particular
problem, at least partially, with export, which the compiler
implementors have pretty much ignored. Part of the reason, no
doubt, is that it mainly affects application level code. And
there's really not that much use for templates at that level;
they're generally more appropriate for low level library code.

(The fact that there is a dependency on the implementation of
std::vector isn't generally considered a problem: std::vector is
part of the compiler, and when you upgrade the compiler, you do
a clean build anyway, regardless of how long it takes.)

> I like to include build times as one of my project
> requirements (and yes, I do test it!). If the build times get
> too long, treat this like any other design issue. Weigh the
> time/cost of design changes to the code against design changes
> to the build environment. On past projects, adding another
> machine to the build farm has been the more cost effective
> option. This is probably more typical today with plummeting
> hardware costs and rising labour costs.

The problem is less total build time (at least until it starts
reaching the point where you can't do a clean build over the
week-end); it is recompilation times due to a change. In large
applications, for example, header files are generally frozen
early, and only allowed to change exceptionally. Recompile
times aren't the only reason for this, of course, but they're
part of it.

As for adding a machine to the build farm: throwing more
hardware at a problem is often the simplest and most economic
solution (although in this case, the problem is perhaps more
linked with IO throughput than with actual CPU power---and
adding a machine can actually make things worse by increasing
network load). But practically, in most enterprises, it's part
of a different budget:-(.

--
James Kanze (GABI Software) email:james...@gmail.com

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Ian Collins

unread,
Jun 8, 2008, 5:41:16 AM6/8/08
to
James Kanze wrote:
>
> As for adding a machine to the build farm: throwing more
> hardware at a problem is often the simplest and most economic
> solution (although in this case, the problem is perhaps more
> linked with IO throughput than with actual CPU power---and
> adding a machine can actually make things worse by increasing
> network load). But practically, in most enterprises, it's part
> of a different budget:-(.
>
On my last couple of C++ projects, I was fortunate enough to be
responsible for both the build farm design and budget as well as the
software design. So neither problem arose :)

--
Ian Collins.

Walter Bright

unread,
Jun 8, 2008, 4:58:35 PM6/8/08
to
Ian Collins wrote:
> On my last couple of C++ projects, I was fortunate enough to be
> responsible for both the build farm design and budget as well as the
> software design. So neither problem arose :)

Wow, I didn't know people actually used build farms for C++! How many
lines of code was that?

Ian Collins

unread,
Jun 9, 2008, 3:29:35 AM6/9/08
to

We never bothered to count.

I have been using distributed building for C and C++ for over a decade
now. All that's required is sensible compiler licensing and a decent
make system.

--
Ian Collins.

James Kanze

unread,
Jun 9, 2008, 8:10:51 AM6/9/08
to
On Jun 8, 10:58 pm, Walter Bright <wal...@digitalmars-nospamm.com>
wrote:

And how many different versions does he need? If you have
separate debug and release versions, for each program, on each
target platform, you can easily end up with ten or fifteen
complete builds. And with enough templates in the header files,
it doesn't take very many lines of source code (less than a
million, even) to end up needing a build farm, just to be able to do
a clean build over the week-end.

Of course, you usually have the hardware anyway. Just tell the
programmers to not turn their machines off when they go home for
the week-end.

Noah Roberts

unread,
Jun 9, 2008, 11:17:48 AM6/9/08
to
James Kanze wrote:

> The important thing to realise is that they're a tool. Like
> most (or even all) tools, they have a cost. If the advantages
> of using the tool outweigh the cost, then you should use it. If
> they don't, then you shouldn't.

Well, I can agree with that but you seem to be making stuff up to argue
against using a tool. Like asserting, without basis, that templates are
only useful in "lower level code", only decouple in "lower level code",
and various other things that, quite frankly, make no sense at all.

You can't use screws where they are useful if you've got some sort of
weird prejudice against screwdrivers.

Ian Collins

unread,
Jun 9, 2008, 3:21:35 PM6/9/08
to
James Kanze wrote:
> On Jun 8, 10:58 pm, Walter Bright <wal...@digitalmars-nospamm.com>
> wrote:
>> Ian Collins wrote:
>>> On my last couple of C++ projects, I was fortunate enough to be
>>> responsible for both the build farm design and budget as well as the
>>> software design. So neither problem arose :)
>
>> Wow, I didn't know people actually used build farms for C++!
>> How many lines of code was that?
>
> And how many different versions does he need? If you have
> separate debug and release versions, for each program, on each
> target platform, you can easily end up with ten or fifteen
> complete builds. And with enough templates in the header files,
> it doesn't take very many lines of source code (less than a
> million, even) to end needed a build farm, just to be able to do
> a clean build over the week-end.
>
This project was about 300K lines including tests. A distributed clean
build (which included a code generation phase) took about 12 minutes,
which was too long (10 was the design limit). Any longer and
productivity would have been hit enough to add another node.

--
Ian Collins.

Walter Bright

unread,
Jun 9, 2008, 3:59:00 PM6/9/08
to

I've looked into trying to make the C++ compiler multithreaded (so it
could use multi-core computers) many times. There just isn't any way to
do it; compiling C++ is fundamentally a sequential operation. The only
thing you can do is farm out the separate source files for separate
builds. The limit achievable there is when there is one node per source
file.

My experiences with trying to accelerate C++ compilation led to many
design decisions in the D programming language. Each pass (lexing,
parsing, semantic analysis, etc.) is logically separate from the others,
meaning that each can be farmed out to a separate thread. The import
file reads can be asynchronous. The lexing, parsing, and semantic
analysis of an imported module is independent of where and how it is
imported.

While the D compiler is not currently multithreaded, the process is
inherently multithreadable, and I'll be very interested to see how fast
it can go with a multicore CPU.

ian-...@hotmail.com

unread,
Jun 9, 2008, 5:46:34 PM6/9/08
to
On Jun 10, 7:59 am, Walter Bright <wal...@digitalmars-nospamm.com>

wrote:
> Ian Collins wrote:
>
> > This project was about 300K lines including tests. A distributed clean
> > build (which included a code generation phase) took about 12 minutes,
> > which was too long (10 was the design limit). Any longer and
> > productivity would have been hit enough to add another node.
>
> I've looked into trying to make the C++ compiler multithreaded (so it
> could use multi core computers) many times. There just isn't any way to
> do it, compiling C++ is fundamentally a sequential operation. The only
> thing you can do is farm out the separate source files for separate
> builds. The limit achievable there is when there is one node per source
> file.
>
The problem of distributed building is best solved by a combination of
the build system and the compiler. The build system is responsible for
farming out jobs to cores and the compiler has to be parallel build
aware. Template instantiation is one area where some form of locking
of generated instantiation files may be required.

The two I use are gcc/GNU make, which supports parallel building, and
Sun CC/dmake, which supports parallel and distributed building.

The number of jobs per core depends on the nature of the code and
should be tuned for each project. Over a number of C++ projects I
have found 2 to 4 jobs per core to be a sweet spot. The projects all
used the many small source file model which works best with parallel
(and more so, distributed) building.

Parallel or distributed building has to be designed into your process
from day one. Poorly designed makefiles or code layout can lose you
many of the possible gains.

--
Ian

ian-...@hotmail.com

unread,
Jun 9, 2008, 6:39:32 PM6/9/08
to
On Jun 10, 3:17 am, Noah Roberts <u...@example.net> wrote:
> James Kanze wrote:
> > The important thing to realise is that they're a tool. Like
> > most (or even all) tools, they have a cost. If the advantages
> > of using the tool outweigh the cost, then you should use it. If
> > they don't, then you shouldn't.
>
> Well, I can agree with that but you seem to be making stuff up to argue
> against using a tool. Like asserting, without basis, that templates are
> only useful in "lower level code", only decouple in "lower level code",
> and various other things that, quite frankly, make no sense at all.
>
I think James is pretty clear in his mention of a cost/benefit trade-
off.

If your process is designed for rapid building to offset the cost of
extra coupling then the advantages of templates may outweigh the
cost. If a clean build of your project takes a long time, the
productivity cost will outweigh any benefits.

--

Ian.

James Kanze

unread,
Jun 10, 2008, 5:16:34 AM6/10/08
to
On Jun 9, 9:59 pm, Walter Bright <wal...@digitalmars-nospamm.com>

The input must be scanned sequentially, I'm pretty sure, since a
#define along the way can clearly affect how the following
source is read. And I rather suspect that it must also be
parsed sequentially, since the grammar is not context
free---whether a symbol is the name of a type, the name of a
template, or something else, affects parsing. But once you've
got your parse trees, couldn't you parallelize the processing of
each function: low-level optimization and code generation?

James Kanze

unread,
Jun 10, 2008, 5:24:19 AM6/10/08
to

The clean build isn't the problem. You can schedule that
overnight, or for a weekend. (For my library, a clean build for
all of the versions I support under Unix takes something like
eight hours. Which doesn't bother me too much.) The problem is
the incremental builds when someone bug-fixes something in the
implementation. For non-templates, that means recompiling a
single .cc file; for templates, recompiling all source files
which include the header. A difference between maybe 5 seconds,
and a couple of minutes. Which is a very significant difference
if you're sitting in front of the computer, waiting for it to
finish.

--
James Kanze (GABI Software) email:james...@gmail.com

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Ian Collins

unread,
Jun 10, 2008, 5:36:42 AM6/10/08
to
James Kanze wrote:
> On Jun 10, 12:39 am, ian-n...@hotmail.com wrote:
>
>> If your process is designed for rapid building to offset the
>> cost of extra coupling then the advantages of templates may
>> outweigh the cost. If a clean build of your project takes a
>> long time, the productivity cost will outweigh any benefits.
>
> The clean build isn't the problem. You can schedule that
> overnight, or for a weekend. (For my library, a clean build for
> all of the versions I support under Unix takes something like
> eight hours. Which doesn't bother me too much.) The problem is
> the incremental builds when someone bug-fixes something in the
> implementation. For non-templates, that means recompiling a
> single .cc file; for templates, recompiling all source files
> which include the header. A difference between maybe 5 seconds,
> and a couple of minutes. Which is a very significant difference
> if you're sitting in front of the computer, waiting for it to
> finish.
>
You can say the same for a change to any header. There's always
something else to look at for a couple of minutes..

--
Ian Collins.

Matthias Buelow

unread,
Jun 10, 2008, 5:44:31 AM6/10/08
to
Walter Bright wrote:

> My experiences with trying to accelerate C++ compilation led to many
> design decisions in the D programming language. Each pass (lexing,
> parsing, semantic analysis, etc.) is logically separate from the others,

Arguably, this is just a workaround for the basic problem that C++ (and
presumably D, as well) is a language where the program must be completely
recompiled and linked before execution. Incremental development where
new code can be directly loaded and tested in a running object image is
imho a more productive model for large program development.

Walter Bright

unread,
Jun 10, 2008, 6:05:08 AM6/10/08
to
James Kanze wrote:
> The input must be scanned sequentially, I'm pretty sure, since a
> #define along the way can clearly affect how the following
> source is read.

Token pasting is another feature that mucks up all hope of doing things
non-sequentially.

> And I rather suspect that it must also be
> parsed sequentially, since the grammar is not context
> free---whether a symbol is the name of a type, the name of a
> template, or something else, affects parsing.

Take a look at the rules for looking up names. What names the compiler
'sees' depends very much on a sequential view of the input, which
affects overloading, which affects ...

> But once you've
> got your parse trees, couldn't you parallelize the processing of
> each function: low-level optimization and code generation?

Yes, you could probably do that in parallel for each function, though
you'd have to do a complex merge process to turn the result into a
single object file. I decided that wasn't worth the effort, because the
bulk of the time spent was in the front end which wasn't parallelizable.
The big gains would be in asynchronously processing all those header files.

P.S. Even worse for C++ is that header files must be reprocessed for
every source file compilation. So, if you have m source files, each with
a header file, and every source file winds up #include'ing every header
(a normal, if regrettable, situation), compilation times are O(m*m). The
D programming language is designed so that import files compile
independently of where they are imported, so compilation times are O(m).

P.P.S. Yes, I know all about precompiled headers in C++, but there is no
way to make pch perfectly language conformant. You have to accept some
deviation from the standard to use them.

Walter Bright

unread,
Jun 10, 2008, 6:12:47 AM6/10/08
to

Back when vertebrates were just emerging from the slime, when I was
working on compilers for Symantec, the request came in for the linker to
acquire incremental linking ability because the competition's linker
could do incremental builds. When I pointed out that our linker could do
a full link faster than the incremental linkers could do an incremental
link, the point became moot.

Back to the present, I suggest that if the full build can be made fast
enough, there is no reason for incremental builds. I think Borland also
made that point well with their original Turbo Pascal release.

Ian Collins

unread,
Jun 10, 2008, 6:17:32 AM6/10/08
to

A model which isn't unusual in C or C++ development; consider device
drivers and other loadable modules or plugins.

--
Ian Collins.

Walter Bright

unread,
Jun 10, 2008, 6:24:10 AM6/10/08
to
James Kanze wrote:
> The clean build isn't the problem. You can schedule that
> overnight, or for a weekend. (For my library, a clean build for
> all of the versions I support under Unix takes something like
> eight hours. Which doesn't bother me too much.) The problem is
> the incremental builds when someone bug-fixes something in the
> implementation. For non-templates, that means recompiling a
> single .cc file; for templates, recompiling all source files
> which include the header. A difference between maybe 5 seconds,
> and a couple of minutes. Which is a very significant difference
> if you're sitting in front of the computer, waiting for it to
> finish.

A full build of the dmd compiler (using dmc++) takes 18 seconds on an
Intel 1.6 GHz machine <g>. 33 seconds for g++ on AMD 64 4000.

Walter Bright

unread,
Jun 10, 2008, 6:25:59 AM6/10/08
to
Ian Collins wrote:
> You can say the same for a change to any header. There's always
> something else to look at for a couple of minutes..

Nearly instant rebuilds are a transformative experience for development.
Going off for 2 minutes to get coffee, read slashdot, etc., gets one out
of the 'zone'.

Ian Collins

unread,
Jun 10, 2008, 6:46:18 AM6/10/08
to

My "something else" was the next problem or test.

--
Ian Collins.

co...@mailvault.com

unread,
Jun 10, 2008, 1:47:59 PM6/10/08
to
On Jun 10, 4:24 am, Walter Bright <wal...@digitalmars-nospamm.com>
wrote:

Do you give any thought to bringing either of those compilers on-line?
I think it would be a good idea. I know of two C++ compilers that
have taken small steps toward being available on-line.

Brian Wood
Ebenezer Enterprises
www.webEbenezer.net

Walter Bright

unread,
Jun 10, 2008, 4:52:10 PM6/10/08
to

I've tried many times to multitask. I'll have the test suite running in
one window, a compile in a second, and edit documentation in a third.
All closely related, but I find that inevitably I get confabulated
switching mental contexts between them and screw things up.

Walter Bright

unread,
Jun 10, 2008, 4:54:41 PM6/10/08
to

I'm familiar with Comeau's online C++ compiler, but you cannot link or
run the result, so I don't really see the point in it.

Noah Roberts

unread,
Jun 10, 2008, 6:11:57 PM6/10/08
to
James Kanze wrote:

> The clean build isn't the problem. You can schedule that
> overnight, or for a weekend. (For my library, a clean build for
> all of the versions I support under Unix takes something like
> eight hours. Which doesn't bother me too much.) The problem is
> the incremental builds when someone bug-fixes something in the
> implementation. For non-templates, that means recompiling a
> single .cc file; for templates, recompiling all source files
> which include the header. A difference between maybe 5 seconds,
> and a couple of minutes. Which is a very significant difference
> if you're sitting in front of the computer, waiting for it to
> finish.

See the "Stable Dependencies Principle" and the "Stable Abstractions
Principle".

http://www.objectmentor.com/resources/articles/stability.pdf

"Thus, the software that encapsulates the *high level design model* of
the system should be placed into stable packages."

- Emphasis added -

"[The Stable Abstractions Principle] says that a stable package should
also be abstract so that its stability does not prevent it from being
extended."

Robert C. Martin's article on stability principles pretty much stands
against everything you've said in this thread to date. Templates are
the epitome of abstraction. Perhaps if you were not so anti-template
you'd do some looking into how to make the best use of them and you
would not be arguing about changing templates causing long builds; you'd
be well aware that you simply don't change templates that often.

Does this mean you'll never find a bug in a template? Of course not.
But if you find yourself often having to alter or fix templates that are
permeating your entire source tree, instead of a few modules, then the
problem is poor design and testing practices...it is not the fault of
templates.

Of course, you need to go back and read about the other design
principles that Martin describes in order to see the entire reasoning
behind why you put the *high level code* in your stable, abstract
packages. I'm not begging an authority, Martin's stuff just happens to
be very good and the reasoning stands on its own.

The principles of OOD translate very well to Generic Programming.

Michael Furman

unread,
Jun 10, 2008, 10:54:50 PM6/10/08
to
James Kanze wrote:
> ....

>
> The clean build isn't the problem. You can schedule that
> overnight, or for a weekend. (For my library, a clean build for
> all of the versions I support under Unix takes something like
> eight hours. Which doesn't bother me too much.) The problem is
> the incremental builds when someone bug-fixes something in the
> implementation. For non-templates, that means recompiling a
> single .cc file; for templates, recompiling all source files
> which include the header. A difference between maybe 5 seconds,
> and a couple of minutes. Which is a very significant difference
> if you're sitting in front of the computer, waiting for it to
> finish.

I love it when compilation takes more than a couple of seconds: I have
extra time to think! Sometimes it ends with killing the compilation
and doing something else, rather than trying the result.

Michael Furman

Ian Collins

unread,
Jun 11, 2008, 12:35:16 AM6/11/08
to

I know, it's a male thing :(

My builds always run the unit tests, so that's one less window to worry
about.

This all goes to show that whatever you can do to improve build times
is worth the effort!

--
Ian Collins.
